Predicting the Unobservable: Visual 3D Tracking with a Probabilistic Motion Model

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2011
Visual tracking of an object can provide a powerful source of feedback information during complex robotic manipulation operations, especially those in which there is uncertainty about the object pose that will result from a planned manipulative action. At the same time, robotic manipulation presents a challenging environment for visual tracking, with occlusions of the object by other objects or by the robot itself, and sudden changes in object pose that may be accompanied by motion blur. Recursive filtering techniques use motion models for predictor-corrector tracking, but the simple models typically used often fail to adequately predict the complex motions of manipulated objects. We show how statistical machine learning techniques can be used to train sophisticated motion predictors, which incorporate additional information by being conditioned on the planned manipulative action being executed. We then show how these learned predictors can be used to propagate the particles of a particle filter from one predictor-corrector step to the next, enabling a visual tracking algorithm to maintain plausible hypotheses about the location of an object, even during severe occlusion and other difficult conditions. We demonstrate the approach in the context of robotic push manipulation, where a 5-axis robot arm equipped with a rigid finger applies a series of pushes to an object, while the object is tracked by a vision algorithm using a single camera.
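To make the predictor-corrector structure concrete, the following is a minimal sketch of one particle-filter step in which every pose hypothesis is propagated by an action-conditioned motion predictor before being reweighted against an observation. The names `learned_motion_model` and `observation_likelihood`, and the simplified planar (x, y) state, are illustrative assumptions, not the paper's implementation, which tracks full 3D object pose from images and uses predictors trained with statistical machine learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def learned_motion_model(pose, action, rng):
    # Stand-in for a trained regressor that predicts the pose change
    # caused by `action` (e.g. a push direction and distance), plus
    # learned process noise. A real system would query a model trained
    # on (pose, action) -> next-pose examples.
    drift = 0.9 * action                       # predicted displacement
    noise = rng.normal(0.0, 0.02, size=pose.shape)
    return pose + drift + noise

def observation_likelihood(pose, observed_pose):
    # Stand-in for an image-based likelihood: a Gaussian score on the
    # distance between a hypothesised pose and the camera's estimate.
    # Under severe occlusion this term can become uninformative, and
    # the learned motion model alone carries the hypotheses forward.
    d2 = np.sum((pose - observed_pose) ** 2)
    return np.exp(-d2 / (2 * 0.05 ** 2))

def particle_filter_step(particles, action, observation):
    # Prediction: propagate each hypothesis through the learned,
    # action-conditioned motion predictor.
    particles = np.array([learned_motion_model(p, action, rng)
                          for p in particles])
    # Correction: weight by the observation likelihood and resample.
    w = np.array([observation_likelihood(p, observation) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Example: 200 planar-pose hypotheses (x, y) tracking a single push.
particles = rng.normal([0.0, 0.0], 0.05, size=(200, 2))
push = np.array([0.10, 0.0])                   # planned push action
observation = np.array([0.09, 0.01])           # pose estimate from the camera
particles = particle_filter_step(particles, push, observation)
print(particles.mean(axis=0))                  # filtered pose estimate
```

Because the prediction step is conditioned on the planned push, the particle cloud keeps moving in a physically plausible way even when the correction step contributes little, which is what allows the tracker to survive occlusions.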
