Situated Resolution and Generation of Spatial Referring Expressions for Robotic Assistants

Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09), 2009
In this paper we present an approach to the task of generating and resolving referring expressions (REs) for conversational mobile robots. It is based on a spatial knowledge base encompassing both robot- and human-centric representations. Existing algorithms for the generation of referring expressions (GRE) try to find a description that uniquely identifies the referent with respect to other entities in the current context. Mobile robots, however, act in large-scale space, that is, environments larger than what can be perceived at a glance, e.g., an office building with different floors, each containing several rooms and objects. One challenge when referring to entities located elsewhere is thus to include enough information that the interlocutors can extend their context appropriately. We address this challenge with a method for context construction that can be used for both generating and resolving REs, two previously disjoint aspects. Our approach is embedded in a bi-directional framework for natural language processing for robots.
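The context-construction idea described above can be sketched roughly as follows: starting from the hearer's current location, widen the context along a spatial containment hierarchy until it also includes the intended referent; the regions crossed on the way supply the descriptors for the RE (e.g., "the mug in the kitchen on the first floor"). The toy hierarchy and all function names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of context construction over a spatial containment
# hierarchy (assumed example data, not from the paper).
SPATIAL_PARENT = {            # child -> containing region
    "mug": "kitchen",
    "kitchen": "floor1",
    "office1": "floor1",
    "office2": "floor2",
    "floor1": "building",
    "floor2": "building",
}

def ancestors(entity):
    """Chain of containing regions, innermost first."""
    chain = []
    while entity in SPATIAL_PARENT:
        entity = SPATIAL_PARENT[entity]
        chain.append(entity)
    return chain

def construct_context(hearer_loc, referent):
    """Find the smallest region containing both the hearer's location
    and the referent; the referent's containers below that shared
    region become candidate descriptors for the RE."""
    ref_chain = [referent] + ancestors(referent)
    for region in [hearer_loc] + ancestors(hearer_loc):
        if region in ref_chain:
            return region, ref_chain[1:ref_chain.index(region)]
    return None, []

top, descriptors = construct_context("office2", "mug")
# e.g. the shared context grows to the whole building, and the RE
# mentions the intermediate containers of the referent
```

Resolution can reuse the same procedure in reverse: the hearer extends their own context outward until the descriptors mentioned in the RE pick out a unique entity.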
