Towards Scene Understanding - Object Segmentation Using RGBD-Images
Proceedings of the 2012 Computer Vision Winter Workshop (CVWW), 2012
We present a framework for detecting unknown 3D objects in RGBD images and extracting representations suitable for robotics tasks such as grasping. We address cluttered scenes with stacked and jumbled objects, where simplistic plane pop-out methods are not sufficient. We start by estimating surface patches using a mixture of planes and NURBS (non-uniform rational B-splines) fitted to the 3D point cloud, employing model selection to find the best representation for the given data. We then construct a graph from the surface patches and the relations between them, and perform a graph cut to arrive at object hypotheses segmented from the scene. The energy terms for patch relations are learned from user-annotated training data: we train a support vector machine (SVM) to classify a relation as being indicative of two patches belonging to the same object, given a vector of relation features such as proximity or color similarity. We show preliminary results demonstrating that the approach can segment objects of various shapes in cluttered tabletop scenes.
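The pipeline of learning pairwise patch relations and grouping patches into object hypotheses can be sketched as follows. The feature values, the toy training pairs, and the use of connected components over SVM-accepted edges in place of the paper's graph-cut energy minimization are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.svm import SVC

# Toy relation features per patch pair: [proximity, color similarity].
# The paper's actual feature vector is richer; these values are made up.
X_train = np.array([
    [0.90, 0.80], [0.80, 0.90], [0.95, 0.70],   # pairs on the same object
    [0.10, 0.20], [0.20, 0.10], [0.05, 0.30],   # pairs on different objects
])
y_train = np.array([1, 1, 1, 0, 0, 0])          # 1 = same object

# SVM classifier over relation features, as described in the abstract.
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

def segment(num_patches, pair_features):
    """Group surface patches into object hypotheses.

    pair_features: dict {(i, j): feature_vector} for patch pairs.
    Edges the SVM labels 'same object' are kept; connected components
    of the resulting graph serve as a simple stand-in for the paper's
    graph-cut minimization (implemented here with union-find).
    """
    parent = list(range(num_patches))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for (i, j), feat in pair_features.items():
        if svm.predict([feat])[0] == 1:    # relation indicates same object
            parent[find(i)] = find(j)

    groups = {}
    for p in range(num_patches):
        groups.setdefault(find(p), []).append(p)
    return list(groups.values())

# Four patches: 0-1 look like one object, 2-3 like another.
hypotheses = segment(4, {(0, 1): [0.85, 0.80],
                         (2, 3): [0.90, 0.75],
                         (1, 2): [0.10, 0.15]})
print(sorted(sorted(g) for g in hypotheses))   # two object hypotheses
```

Replacing the connected-components step with a proper graph cut would additionally allow weakly supported edges to be severed when the overall energy favors splitting a group.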