Robust Localization using an Omnidirectional Appearance-based Subspace Model of Environment
Robotics and Autonomous Systems, 2003
Appearance-based visual learning and recognition techniques, which build models from a training set of 2D images, are widely used in computer vision. In robotics, they have received the most attention in visual servoing and navigation. In this paper we present a framework for visual self-localization of mobile robots using a parametric model built from panoramic snapshots of the environment. In particular, we propose solutions to the problems of robustness against occlusions and invariance to the rotation of the sensor. Our principal contribution is an ``eigenspace of spinning-images'', i.e., a model of the environment that exploits the fact that a rotation of the sensor corresponds to a cyclic shift of the panoramic image, so that the optimal subspace, in the sense of principal component analysis (PCA), of a set of training snapshots can be computed efficiently without explicitly decomposing the covariance matrix. By integrating a robust recover-and-select algorithm for the computation of image parameters, we achieve reliable localization even when the input images are partially occluded or noisy. In this way, the robot is able to localize itself in realistic environments.
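
To make the spinning-image shortcut concrete, the sketch below illustrates the underlying linear-algebraic idea in a simplified one-dimensional setting: when a training set is closed under cyclic shifts, its covariance matrix is circulant, so its eigenvectors are Fourier modes and its eigenvalues are the averaged power spectra of the training signals (Wiener–Khinchin). This is an illustrative reconstruction under those assumptions, not the authors' implementation; all names are ours, and the treatment of full 2D panoramas in the paper is more involved.

```python
# Simplified 1D sketch of the "spinning-image" eigenspace idea:
# the set of all cyclic shifts of the training scans has a circulant
# covariance matrix, whose eigenvectors are Fourier modes and whose
# eigenvalues are the summed power spectra of the training scans.
import numpy as np

def spinning_eigenspace(panoramas, k):
    """PCA of all cyclic shifts of 1D panoramic signals, computed
    without forming or decomposing the covariance matrix.

    panoramas : (n, N) array, each row one zero-mean panoramic scan line
    k         : number of principal components to keep
    Returns (eigenvalues, eigenvectors), eigenvectors of shape (N, k).
    """
    n, N = panoramas.shape
    X = np.fft.fft(panoramas, axis=1)               # spectra of the scans
    power = (np.abs(X) ** 2).sum(axis=0) / (n * N)  # circulant eigenvalues
    order = np.argsort(power)[::-1][:k]             # strongest Fourier modes
    # Eigenvectors of a circulant matrix are the complex exponentials
    # (columns of the DFT matrix); real cos/sin pairs could be formed
    # from conjugate frequencies if a real basis is preferred.
    eigvecs = np.exp(2j * np.pi * np.outer(np.arange(N), order) / N)
    return power[order], eigvecs / np.sqrt(N)
```

In this setting the k strongest Fourier modes span the same subspace that PCA would recover, but they are obtained from n FFTs instead of an eigendecomposition of an N-by-N covariance matrix.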
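The robust localization step can be sketched in a similar hedged spirit. The paper's recover-and-select algorithm is more elaborate; the fragment below only conveys the hypothesize-and-select flavor: subspace coefficients are repeatedly estimated from small random pixel subsets, so that hypotheses drawn from unoccluded regions remain uncontaminated, and the hypothesis with the smallest robust residual is kept. The subset size, number of trials, and median-residual selection rule are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of robust coefficient recovery under occlusion,
# in the spirit of a hypothesize-and-select scheme.
import numpy as np

def robust_project(y, E, n_hyp=50, subset=None, seed=None):
    """Estimate eigenspace coefficients of image y from pixel subsets.

    y : (N,) possibly occluded input image (zero-mean)
    E : (N, k) matrix with eigenvectors as columns
    Returns the coefficient hypothesis with the smallest median residual.
    """
    rng = np.random.default_rng(seed)
    N, k = E.shape
    m = subset or 3 * k                       # pixels per hypothesis
    best_a, best_err = None, np.inf
    for _ in range(n_hyp):
        idx = rng.choice(N, size=m, replace=False)
        # Solve the small least-squares system on the pixel subset only,
        # so occluded pixels outside the subset cannot bias this fit.
        a, *_ = np.linalg.lstsq(E[idx], y[idx], rcond=None)
        # Median residual over all pixels is robust to large occlusions.
        err = np.median(np.abs(y - E @ a))
        if err < best_err:
            best_a, best_err = a, err
    return best_a
```

Localization then reduces to comparing the recovered coefficient vector with the stored coefficients of the training snapshots, e.g., by nearest-neighbor search in the eigenspace.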