The CogMan project (1) develops computational and control models of pick-and-place tasks in the context of everyday manipulation activities in human environments, (2) implements these models in a control system for the kitchen scenario, and (3) empirically analyzes the impact of this control model on the flexibility, robustness, adaptability, and naturalness of the robot behavior.
A particularly impressive aspect of human manipulation in everyday environments is how skillfully people select and parametrize their reaching and grasping movements based on the properties of the objects to be picked up, on the situational circumstances, and on the intended end positions of the objects. Actions performed with a collaborator are adjusted accordingly as well. The result is very smooth, predictable, and efficient compound movement. Not only do people perform these actions skillfully, they are also capable of learning these skills from little experience and of adapting them automatically when needed. Robots that are to serve as assistants and work together with humans need a similar level of manipulation skill and equally powerful means for skill acquisition and adaptation.
Based on experiments analyzing human manipulation tasks from the cognitive psychology point of view, we develop a model of how humans select and parametrize grasps, reaching movements, standing positions, and destinations based on the task context, the object, and the goals of the manipulation activity. Using this information, we learn behavior models from the observed data and interpret the findings in the light of a cognitive control model.
Using mechanical models of objects, a system can predict what will happen to an object when a force is applied to it. A robot can use this knowledge to learn how to exert forces on an object to achieve a specific desired goal. Finding a mapping between this object-level control and a symbolic representation of it can lead to a language that can be used for robot control.
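As a hedged illustration of this kind of prediction (our own toy example, not CogMan's actual object models), the sketch below uses a simple Coulomb-friction block model to predict whether a pushed object will move at all, and how far it travels under a constant push; all masses, friction coefficients, and time steps are illustrative assumptions.

```python
# Toy forward model: a block of given mass pushed along a surface with
# Coulomb friction. Predicts the displacement after a fixed time horizon.
G = 9.81  # gravitational acceleration [m/s^2]

def predict_push(mass, mu_static, mu_kinetic, force, dt=0.01, steps=100):
    """Integrate the 1-D motion of a block pushed with a constant force."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        if v == 0.0 and abs(force) <= mu_static * mass * G:
            a = 0.0  # static friction holds the object in place
        else:
            # kinetic friction opposes the direction of motion (or push)
            sign = 1.0 if (v > 0 or (v == 0 and force > 0)) else -1.0
            a = (force - sign * mu_kinetic * mass * G) / mass
        v += a * dt
        x += v * dt
    return x

# A 1 kg object with mu_s = 0.4 resists a 3 N push (3 N < 3.92 N) ...
assert predict_push(1.0, 0.4, 0.3, 3.0) == 0.0
# ... but slides forward when pushed with 6 N.
assert predict_push(1.0, 0.4, 0.3, 6.0) > 0.0
```

Even a model this crude already grounds a symbolic distinction ("the push succeeds" vs. "the object stays put") in the object's mechanical parameters.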
The novel aspect of the CogMan research approach is that we perform comprehensive experiments on human everyday manipulation activities while taking the context into account. To this end, we use comprehensive sensor data, including estimates of full-body pose, hand pose, visual attention, various biosignals, the model and pose of the manipulated object, the force used for lifting, the kind of grasp, the grasp points, the local scene around the object, the activity context, and more.
When humans perform even very simple reaching movements, they obey a number of constraints to avoid collisions and to maximize robustness in the face of uncertainty in perception and actuation. At the same time, the freedoms available in the redundant human arm and in the task itself are exploited to reduce jerk, execution time, and control effort. This optimization takes "motor noise" into account and leads to high variation in human movements, even under very constrained lab conditions.
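One classical instance of such an optimization principle, often used as a baseline for human reaching, is the minimum-jerk model of Flash and Hogan: for a point-to-point reach starting and ending at rest, the jerk-minimizing trajectory has a closed-form fifth-order polynomial solution. The sketch below shows that profile (a generic illustration, not a CogMan result).

```python
# Minimum-jerk reaching profile: the closed-form solution for a
# point-to-point reach from x0 to xf in duration T, starting and
# ending at rest (Flash & Hogan's fifth-order polynomial).
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a reach from x0 to xf over T seconds."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 profile
    return x0 + (xf - x0) * s

# The trajectory hits both endpoints exactly ...
assert min_jerk(0.0, 0.3, 1.0, 0.0) == 0.0
assert abs(min_jerk(0.0, 0.3, 1.0, 1.0) - 0.3) < 1e-12
# ... and by symmetry passes through half the distance at mid-reach.
assert abs(min_jerk(0.0, 0.3, 1.0, 0.5) - 0.15) < 1e-12
```

Observed human reaches deviate systematically from this idealized profile, and those deviations are exactly where constraints, task freedoms, and motor noise show up in the data.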
One goal of CogMan is to extract these constraints from human motions and represent them geometrically. Additional constraint or freedom "directions" can be taught directly by moving the robot arm, completing this set of constraints. The constraints are represented using the iTaSC framework, which combines them into an instantaneous motion controller. This allows us to start from engineered motion controllers, incrementally add constraints and freedoms, and continuously check the effectiveness of every added constraint.
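The core idea behind this kind of instantaneous constraint combination can be sketched in a few lines of numpy (our simplification for illustration, not the actual iTaSC implementation): each task constraint contributes rows J_i q̇ = ẏ_i relating joint velocities to desired constraint-space rates, and the controller solves the stacked system in a least-squares sense.

```python
# Instantaneous constraint combination, iTaSC-style (simplified sketch):
# stack the constraint Jacobians and solve for the joint velocities that
# satisfy all desired constraint-space rates in a least-squares sense.
import numpy as np

def instantaneous_control(jacobians, desired_rates):
    """Stack constraint Jacobians and solve for joint velocities q_dot."""
    J = np.vstack(jacobians)           # (sum of constraint dims) x n_joints
    y = np.concatenate(desired_rates)  # desired constraint-space rates
    return np.linalg.pinv(J) @ y       # minimum-norm least-squares solution

# Two toy constraints on a 3-DOF arm (illustrative Jacobians): keep one
# task coordinate fixed (rate 0) while moving another at 0.1 units/s.
J_hold = np.array([[0.0, 1.0, 1.0]])
J_move = np.array([[1.0, 0.5, 0.0]])
q_dot = instantaneous_control([J_hold, J_move], [np.zeros(1), np.array([0.1])])

# The resulting joint velocity satisfies both constraints simultaneously.
assert abs((J_hold @ q_dot)[0]) < 1e-9
assert abs((J_move @ q_dot)[0] - 0.1) < 1e-9
```

Adding a constraint is then just appending another Jacobian row, which is what makes incremental refinement of an engineered controller, and checking each added constraint's effect, straightforward.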
This project is partly funded by CoTeSys.
Imitating human reaching motions using physically inspired optimization principles (Sebastian Albrecht, Karinne Ramirez Amaro, Federico Ruiz-Ugalde, David Weikersdorfer, Marion Leibold, Michael Ulbrich, Michael Beetz), In 11th IEEE-RAS International Conference on Humanoid Robots, 2011.
Generality and Legibility in Mobile Manipulation, In Autonomous Robots Journal (Special Issue on Mobile Manipulation), volume 28, 2010. [bib] [pdf]
The RoboEarth language: Representing and Exchanging Knowledge about Actions, Objects, and Environments, In IEEE International Conference on Robotics and Automation (ICRA), 2012. (Best Cognitive Robotics Paper Award.) [bib] [pdf]
Improving robot manipulation through fingertip perception, In IEEE International Conference on Intelligent Robots and Systems (IROS), 2012. [bib] [pdf]
Movement-aware Action Control -- Integrating Symbolic and Control-theoretic Action Execution, In IEEE International Conference on Robotics and Automation (ICRA), 2012. [bib] [pdf]
A Generalized Framework for Opening Doors and Drawers in Kitchen Environments, In IEEE International Conference on Robotics and Automation (ICRA), 2012. [bib] [pdf]
Multimodal Autonomous Tool Analyses and Appropriate Application, In 11th IEEE-RAS International Conference on Humanoid Robots, 2011. [bib] [pdf]
Fast adaptation for effect-aware pushing, In 11th IEEE-RAS International Conference on Humanoid Robots, 2011. [bib] [pdf]
Robotic Roommates Making Pancakes, In 11th IEEE-RAS International Conference on Humanoid Robots, 2011. [bib] [pdf]
Prediction of action outcomes using an object model, In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010. [bib] [pdf]
Robotic grasping of unmodeled objects using time-of-flight range data and finger torque information, In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010. [bib] [pdf]
Learning and Performing Place-based Mobile Manipulation, In Proceedings of the 8th International Conference on Development and Learning (ICDL), 2009. [bib] [pdf]
Compact Models of Motor Primitive Variations for Predictable Reaching and Obstacle Avoidance, In 9th IEEE-RAS International Conference on Humanoid Robots, 2009. [bib] [pdf]
Compact Models of Human Reaching Motions for Robotic Control in Everyday Manipulation Tasks, In Proceedings of the 8th International Conference on Development and Learning (ICDL), 2009. [bib] [pdf]
Combining Analysis, Imitation, and Experience-based Learning to Acquire a Concept of Reachability, In 9th IEEE-RAS International Conference on Humanoid Robots, 2009. [bib] [pdf]
Action-Related Place-Based Mobile Manipulation, In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), 2009. [bib] [pdf]
Transformational Planning for Mobile Manipulation based on Action-related Places, In Proceedings of the International Conference on Advanced Robotics (ICAR), 2009. [bib] [pdf]
How humans optimize their interaction with the environment: The impact of action context on human perception, In Progress in Robotics. Proceedings of the FIRA RoboWorld Congress, 2009. [bib] [pdf]
Obstacle avoidance in a pick-and-place task, In Proceedings of the 2009 IEEE Conference on Robotics and Biomimetics, 2009. [bib]
Subsequent Actions Influence Motor Control Parameters of a Current Grasping Action, In IEEE 17th International Symposium on Robot and Human Interactive Communication (RO-MAN), Munich, Germany, 2008. [bib]
The Assistive Kitchen -- A Demonstration Scenario for Cognitive Technical Systems, In IEEE 17th International Symposium on Robot and Human Interactive Communication (RO-MAN), Munich, Germany, 2008. (Invited paper.) [bib] [pdf]
Robotic Roommates Making Pancakes - Look Into Perception-Manipulation Loop, In IEEE International Conference on Robotics and Automation (ICRA), Workshop on Mobile Manipulation: Integrating Perception and Manipulation, 2011. [bib] [pdf]
The Assistive Kitchen -- A Demonstration Scenario for Cognitive Technical Systems, In Proceedings of the 4th COE Workshop on Human Adaptive Mechatronics (HAM), 2007. [bib] [pdf]