Semantic Object Maps for Robotic Housework - Representation, Acquisition and Use
In this article we investigate the representation and acquisition of semantic object maps (SOMs) that can serve as information resources for autonomous service robots performing everyday manipulation tasks in kitchen environments. These maps provide the robot with information about its operating environment that enables it to perform fetch-and-place tasks more efficiently and reliably. To this end, semantic object maps can answer queries such as the following: “What do parts of the kitchen look like?”, “How can a container be opened and closed?”, “Where do objects of daily use belong?”, “What is inside of cupboards/drawers?”, etc. The semantic object maps presented in this article, which we call SOM+, extend the first generation of SOMs in two ways: the SOM+ representation is designed more thoroughly, and SOM+ maps also include knowledge about the appearance and articulation of objects. The acquisition methods for SOM+s also substantially advance on the first generation, in that SOM+s are acquired autonomously and with a low-cost RGBD sensor (Kinect) instead of very accurate laser-based 3D sensors. In addition, the perception methods are more general and are demonstrated to work in different kitchen environments. Paper is under submission and available on demand.
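The categories of queries listed above can be illustrated with a minimal sketch. The class and field names below are our own illustration of the idea, not the SOM+ implementation: a map is a collection of object records carrying articulation and storage-location attributes, and queries are lookups over those records.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MapObject:
    """Hypothetical record for one mapped piece of furniture (illustrative only)."""
    name: str
    category: str                        # e.g. "container", "appliance"
    articulation: Optional[str] = None   # e.g. "prismatic" (drawer), "revolute" (door)
    contains: list = field(default_factory=list)  # objects currently inside
    stores: list = field(default_factory=list)    # object types that belong here

class SemanticObjectMap:
    def __init__(self, objects):
        self.objects = {o.name: o for o in objects}

    def how_to_open(self, name):
        """'How can a container be opened and closed?'"""
        return self.objects[name].articulation

    def where_belongs(self, item):
        """'Where do objects of daily use belong?'"""
        return [o.name for o in self.objects.values() if item in o.stores]

    def whats_inside(self, name):
        """'What is inside of cupboards/drawers?'"""
        return self.objects[name].contains

som = SemanticObjectMap([
    MapObject("drawer_3", "container", "prismatic", ["cutlery"], ["fork", "knife"]),
    MapObject("cupboard_1", "container", "revolute", ["plates"], ["plate", "bowl"]),
])
print(som.how_to_open("drawer_3"))    # prismatic -> pull outward to open
print(som.where_belongs("plate"))     # storage locations for plates
```

In the actual systems described here such knowledge is held in a knowledge processing system rather than plain Python objects; the sketch only shows the query interface the abstract refers to.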
Acquiring Semantic Maps for Household Tasks
In this paper, we extend our previous semantic mapping methods with means to acquire models that extend the information content of semantic maps so that they can answer the following categories of queries: “What do parts of the kitchen look like?”, “How can a container be opened and closed?”, “Where do objects of daily use belong?”, “What is inside of cupboards/drawers?”, etc. This is the kind of information required for fetch-and-delivery applications in household or factory domains. Besides the information content of the environment models, the research presented in this paper also substantially advances the mechanisms for acquiring such semantic maps. Instead of acquiring the maps with a more accurate but slower tilting laser scanner, we use the inexpensive but more limited Kinect RGBD sensor, which allows for much faster environment model acquisition and enables the acquisition of visual environment representations. We have also generalized the perception methods, including handle detection and recognition, so that they are not specific to particular environments. Paper is under submission and available on demand. [ video ]
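To give a rough sense of the data the Kinect RGBD sensor contributes, each depth pixel can be back-projected into a 3D point with the pinhole camera model. The intrinsics below are typical published Kinect depth-camera values assumed for illustration, not parameters from the paper:

```python
import numpy as np

# Typical (assumed) Kinect depth-camera intrinsics.
FX, FY = 525.0, 525.0    # focal lengths in pixels
CX, CY = 319.5, 239.5    # principal point

def depth_to_points(depth):
    """Back-project a depth image (meters, H x W) into an N x 3 point cloud.

    Pinhole model: x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth[v, u]
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]    # drop invalid (zero-depth) pixels

# A synthetic flat depth image 1 m from the camera yields a planar cloud.
cloud = depth_to_points(np.full((480, 640), 1.0))
print(cloud.shape)    # (307200, 3)
```

Registering many such clouds from different robot poses produces the spatial-plus-visual environment models the abstract describes.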
Autonomous Semantic Mapping for Robots
Performing Everyday Manipulation Tasks in Kitchen Environments
In this work we report on our efforts to equip service robots with the capability to acquire 3D semantic maps. The robot autonomously explores indoor environments by computing next-best-view poses, from which it assembles point clouds containing spatial and registered visual information. We apply various segmentation methods to generate initial hypotheses for furniture drawers and doors. The acquisition of the final semantic map makes use of the robot’s proprioceptive capabilities and is carried out through the robot’s interaction with the environment. We evaluated the proposed integrated approach in the real kitchen of our laboratory by measuring the quality of the generated map in terms of its applicability to the task at hand (e.g. resolving counter candidates with our knowledge processing system). [ pdf ] [ video ]
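The next-best-view step described above can be sketched as a greedy choice over candidate sensor poses, scoring each pose by how many still-unobserved cells it would cover. This is a simplified stand-in for the actual view-planning criterion; the pose ids, grid cells, and visibility sets are our own illustration:

```python
def next_best_view(candidates, unobserved, visible_from):
    """Pick the candidate pose whose view covers the most unobserved cells.

    candidates   : list of pose ids
    unobserved   : set of grid cells not yet mapped
    visible_from : dict mapping pose id -> set of cells that pose would observe
    """
    return max(candidates, key=lambda p: len(visible_from[p] & unobserved))

def explore(candidates, unobserved, visible_from):
    """Greedy exploration loop: visit poses until no pose sees anything new."""
    plan = []
    remaining = set(unobserved)
    while remaining:
        pose = next_best_view(candidates, remaining, visible_from)
        gain = visible_from[pose] & remaining
        if not gain:          # best pose observes nothing new: stop exploring
            break
        plan.append(pose)
        remaining -= gain
    return plan

visible = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}}
print(explore(["A", "B", "C"], {1, 2, 3, 4, 5, 6}, visible))  # ['A', 'B', 'C']
```

Cell 6 is never observable in this toy instance, so the loop terminates once no candidate adds coverage; a real system would similarly stop when expected information gain drops below a threshold.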