Curiosity-driven acquisition of sensorimotor concepts using memory-based active learning

IEEE International Conference on Robotics and Biomimetics (ROBIO 2008), 2009
Operating in real-world environments, a robot will need to continuously learn from its experience to update and extend its knowledge. The paper focuses on the specific problem of how a robot can efficiently select information that is "interesting", driving the robot's "curiosity". The paper investigates the hypothesis that curiosity can be emulated through a combination of active learning and reinforcement learning using intrinsic and extrinsic rewards. Intrinsic rewards quantify learning progress, providing a measure of the "interestingness" of observations, while extrinsic rewards direct learning through the robot's interactions with the environment and other agents. The paper describes the approach and experimental results obtained in simulated environments. The results indicate that both intrinsic and extrinsic rewards improve learning progress, measured as the number of training cycles needed to achieve a goal. The approach presented here extends previous approaches to curiosity-driven learning by including both intrinsic and extrinsic rewards, and by considering more complex sensorimotor input.
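The abstract does not give implementation details, but the core idea, combining an intrinsic reward based on learning progress with an extrinsic reward when selecting what to attend to, can be sketched as follows. This is a minimal illustration under the assumption that learning progress is measured as the drop in prediction error over a sliding window; the names `RegionLearner`, `select_action`, and the weight `beta` are illustrative and do not come from the paper.

```python
import random

# Illustrative sketch (not the authors' implementation): keep a per-context
# error history, estimate learning progress as the decrease in prediction
# error, and choose the action with the best mixed intrinsic + extrinsic reward.

class RegionLearner:
    """Tracks recent prediction errors for one sensorimotor context."""
    def __init__(self, window=10):
        self.errors = []      # recent prediction errors
        self.window = window

    def update(self, error):
        self.errors.append(error)
        if len(self.errors) > 2 * self.window:
            self.errors.pop(0)

    def learning_progress(self):
        # Intrinsic reward: drop in mean error between the older and newer
        # halves of the window (positive means the learner is still improving).
        if len(self.errors) < 2 * self.window:
            return 0.0
        old = self.errors[:self.window]
        new = self.errors[-self.window:]
        return max(0.0, sum(old) / self.window - sum(new) / self.window)


def select_action(learners, extrinsic_reward, beta=0.5):
    """Pick the action whose mixed reward is highest.

    `extrinsic_reward` maps an action to an externally supplied reward
    (e.g. from the environment or another agent); `beta` weights the
    intrinsic term. Both are assumptions made for this sketch.
    """
    def mixed(action):
        return (beta * learners[action].learning_progress()
                + (1.0 - beta) * extrinsic_reward(action))
    return max(learners, key=mixed)


if __name__ == "__main__":
    random.seed(0)
    learners = {a: RegionLearner() for a in ("reach", "grasp", "push")}
    # Fake error traces: "grasp" is still improving, the others are flat.
    for action, trend in (("reach", 0.0), ("grasp", 0.02), ("push", 0.0)):
        for t in range(20):
            learners[action].update(0.5 - trend * t + random.uniform(-0.01, 0.01))
    chosen = select_action(learners, extrinsic_reward=lambda a: 0.1)
    print("next action:", chosen)  # expected: "grasp" (highest learning progress)
```

With equal extrinsic rewards, the rule prefers the context where prediction error is still falling, which is the behaviour the abstract attributes to the intrinsic "interestingness" signal.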
