Self-Supervised Cross-Modal Online Learning of Basic Object Affordances for Developmental Robotic Systems

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2010
For a developmental robotic system to function successfully in the real world, it is important that it be able to form its own internal representations of affordance classes based on observable regularities in sensory data. Successful classifiers are usually built from labeled training data, but it is not always realistic to assume that labels are available in a developmental robotics setting. There is, however, an advantage in this setting that can help circumvent the absence of labels: the co-occurrence of correlated data across separate sensory modalities over time. The main contribution of this paper is an online classifier training algorithm based on Kohonen's learning vector quantization (LVQ) that, by exploiting this co-occurrence information, does not require labels during training, whether dynamically generated or otherwise. We evaluate the algorithm in experiments involving a robotic arm that interacts with various household objects on a table surface, where camera systems extract features for two separate visual modalities. The algorithm is shown to improve its ability to classify the affordances of novel objects over time, coming close to the performance of equivalent fully supervised algorithms.
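To make the core idea concrete, below is a minimal, hypothetical sketch of cross-modal self-supervision with an LVQ-style update: each modality maintains its own codebook, and the winning prototype in one modality supplies a pseudo-label for the LVQ update in the other. This is not the paper's exact algorithm; the class name, update rule, prototype count, and learning rate are illustrative assumptions.

```python
import numpy as np

class CrossModalLVQ:
    """Sketch of cross-modal self-supervised LVQ (illustrative, not the
    authors' exact method). Two codebooks share prototype indices; a
    co-occurring feature pair lets each modality pseudo-label the other."""

    def __init__(self, n_protos, dim_a, dim_b, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.protos_a = rng.normal(size=(n_protos, dim_a))
        self.protos_b = rng.normal(size=(n_protos, dim_b))
        self.lr = lr

    @staticmethod
    def _winner(protos, x):
        # Index of the nearest prototype (Euclidean distance).
        return int(np.argmin(np.linalg.norm(protos - x, axis=1)))

    def _lvq_step(self, protos, x, pseudo_label):
        w = self._winner(protos, x)
        if w == pseudo_label:
            # Winner agrees with the pseudo-label: pull it toward x.
            protos[w] += self.lr * (x - protos[w])
        else:
            # Disagreement: push the wrong winner away, pull the
            # pseudo-labeled prototype toward x (LVQ-style correction).
            protos[w] -= self.lr * (x - protos[w])
            protos[pseudo_label] += self.lr * (x - protos[pseudo_label])

    def observe(self, x_a, x_b):
        """One co-occurring feature pair: each modality labels the other."""
        label_from_b = self._winner(self.protos_b, x_b)
        label_from_a = self._winner(self.protos_a, x_a)
        self._lvq_step(self.protos_a, x_a, label_from_b)
        self._lvq_step(self.protos_b, x_b, label_from_a)

    def classify(self, x_a):
        # At test time a single modality suffices for classification.
        return self._winner(self.protos_a, x_a)
```

Because the updates use only which prototypes co-fire across modalities, no external labels are ever needed, matching the self-supervised online setting the abstract describes.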
