Gaze Allocation During Visually Guided Manipulation

2011
In this work we present principled methods for coordinating a robot's oculomotor system with the rest of its motor systems. The problem is to decide which physical actions to perform next and where to direct the robot's gaze so as to gain information relevant to the success of those actions. Previous work on this problem has shown that a reward-based coordination mechanism provides an efficient solution. However, that approach does not allow the robot to move its gaze to different parts of the scene, treats the robot as having only one motor system, and assumes that all actions have the same duration. The main contributions of our work are to extend that reward-based approach to decide where to fixate the robot's gaze, to handle multiple motor systems, and to handle actions of variable duration. We compare our approach against two common baselines, random and round-robin gaze allocation, and show that our method provides a more effective strategy for allocating gaze where it is needed most.
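The abstract does not give implementation details, so the following Python sketch is only an illustration of the comparison described above: a reward-based gaze policy contrasted with the random and round-robin baselines. The target names, the `expected_reward_gain` interface, and the `toy_gain` estimate are hypothetical placeholders, not taken from the paper.

```python
import random

# Hypothetical fixation targets; illustrative only, not from the paper.
TARGETS = ["object_to_grasp", "placement_site", "gripper", "obstacle"]


def random_gaze(targets, _state):
    """Baseline 1: pick a fixation target uniformly at random."""
    return random.choice(targets)


def make_round_robin_gaze(targets):
    """Baseline 2: cycle through the fixation targets in a fixed order."""
    index = 0

    def policy(_state):
        nonlocal index
        target = targets[index % len(targets)]
        index += 1
        return target

    return policy


def reward_based_gaze(targets, state, expected_reward_gain):
    """Reward-based allocation: fixate the target whose observation is
    expected to contribute most to the success of the ongoing actions."""
    return max(targets, key=lambda t: expected_reward_gain(state, t))


# Toy stand-in for the expected gain of observing each target (assumption):
# combine how uncertain the robot is about the target with how relevant
# that target is to the current manipulation.
def toy_gain(state, target):
    return (state["uncertainty"].get(target, 0.0)
            * state["relevance"].get(target, 0.0))


if __name__ == "__main__":
    state = {
        "uncertainty": {"object_to_grasp": 0.8, "placement_site": 0.3,
                        "gripper": 0.1, "obstacle": 0.5},
        "relevance": {"object_to_grasp": 1.0, "placement_site": 0.6,
                      "gripper": 0.4, "obstacle": 0.2},
    }
    round_robin = make_round_robin_gaze(TARGETS)
    print("random:      ", random_gaze(TARGETS, state))
    print("round robin: ", round_robin(state))
    print("reward-based:", reward_based_gaze(TARGETS, state, toy_gain))
```

The point of the contrast is that the random and round-robin baselines ignore the current state of the task, while the reward-based policy directs gaze to whichever part of the scene is expected to matter most for the ongoing actions.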


http://prints.vicos.si/publications/122