Am-Eva: Automated Models of Everyday Activities

Automated probabilistic models of everyday activities (AM-EvA) are a novel technical means for the perception, interpretation, and analysis of everyday manipulation tasks and activities of daily life. They are detailed, comprehensive models that describe human actions at various levels of abstraction, from raw poses and trajectories to motions, actions, and activities. They thereby integrate several kinds of action models in a common, knowledge-based framework that combines observations of human activities with a priori knowledge about actions. AM-EvAs enable robots and technical systems to analyze actions in their complete situation and activity context. They make the classification and assessment of actions and situations objective, and they can justify a probabilistic interpretation with respect to the activities from which the concepts have been learned. AM-EvAs also make it possible to analyze and compare the way humans perform actions, which can help with autonomy assessment and diagnosis.

Segmentation of Human Motions in Everyday Manipulation Tasks

We are investigating methods for segmenting sequences of human motion into meaningful classes like “Reaching” or “Grasping”. The input data is provided by the MeMoMan full-body pose tracker.

Our approach is based on Conditional Random Fields (CRFs), which perform the segmentation by combining pose-related features (e.g., whether a hand extends beyond a threshold) with information from a sensor network and the environment model (an object was picked up, a cupboard was opened). Have a look at the TUM Kitchen Data Set, which provides multi-modal observations of human everyday activities.
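To illustrate the idea, the following minimal sketch decodes a linear-chain CRF with Viterbi search over a few frames. All labels, feature names, and weights are invented for demonstration; they are not the features or parameters of the actual MeMoMan/TUM pipeline.

```python
# Toy linear-chain CRF decoding (illustrative only).
# Labels, features, and weights are invented examples.

LABELS = ["Idle", "Reaching", "Grasping"]

# Emission weights: score contributed by a feature firing under a label.
EMIT = {
    ("hand_extending", "Reaching"): 2.0,
    ("hand_extending", "Idle"): -1.0,
    ("fingers_closing", "Grasping"): 2.5,
    ("object_contact", "Grasping"): 1.5,
}

# Transition weights: score of moving from one label to the next.
TRANS = {
    ("Idle", "Idle"): 0.5,
    ("Idle", "Reaching"): 1.0,
    ("Reaching", "Reaching"): 0.5,
    ("Reaching", "Grasping"): 1.5,
    ("Grasping", "Grasping"): 0.5,
}

def viterbi(frames):
    """frames: list of per-time-step feature sets.
    Returns the highest-scoring label sequence."""
    # score[label] = best score of any path ending in that label
    score = {l: sum(EMIT.get((f, l), 0.0) for f in frames[0]) for l in LABELS}
    backptrs = []
    for feats in frames[1:]:
        new_score, ptr = {}, {}
        for l in LABELS:
            emit = sum(EMIT.get((f, l), 0.0) for f in feats)
            prev = max(LABELS, key=lambda p: score[p] + TRANS.get((p, l), 0.0))
            new_score[l] = score[prev] + TRANS.get((prev, l), 0.0) + emit
            ptr[l] = prev
        backptrs.append(ptr)
        score = new_score
    # Trace back the best path from the best final label.
    path = [max(score, key=score.get)]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

frames = [{"still"}, {"hand_extending"}, {"hand_extending"},
          {"fingers_closing", "object_contact"}]
print(viterbi(frames))  # → ['Idle', 'Reaching', 'Reaching', 'Grasping']
```

The transition weights play the same role as the sequence model in the real system: they discourage implausible label jumps (e.g., grasping without reaching first), which is what distinguishes a CRF from per-frame classification.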

Hierarchical Action Models

In order to perform abstract reasoning over observed action sequences, their ordering, and their parameters, one has to abstract from sequences of motion segments to higher-level action classes.

The segmentation and classification step produces a sequence of motion segments, which are represented as instances of the respective motion classes in a knowledge base. They are combined with additional observations (which object was manipulated, where it was taken from, …), and abstract action specifications are matched against this sequence (stating, e.g., that a transport action consists of picking up an object, moving to another place, and putting it down).
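The matching step described above can be sketched as follows. The segment representation and the "transport" specification are simplified stand-ins, not the actual knowledge-base formalism:

```python
# Illustrative matching of an abstract action specification against a
# sequence of classified motion segments. Class names and the spec
# format are simplified examples.

TRANSPORT_SPEC = ["PickingUp", "Moving", "PuttingDown"]

def match_spec(spec, segments):
    """Return (start, end) indices of the first contiguous run of
    segments whose classes match `spec` in order, or None."""
    classes = [s["class"] for s in segments]
    n, m = len(classes), len(spec)
    for i in range(n - m + 1):
        if classes[i:i + m] == spec:
            return (i, i + m)
    return None

segments = [
    {"class": "Reaching", "object": None},
    {"class": "PickingUp", "object": "cup"},
    {"class": "Moving", "object": "cup"},
    {"class": "PuttingDown", "object": "cup", "to": "table"},
]
print(match_spec(TRANSPORT_SPEC, segments))  # → (1, 4)
```

The per-segment dictionaries hint at how the additional observations (manipulated object, source and target locations) become parameters of the recognized higher-level action.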

For comparing actions and for learning the structure of activities, we use statistical relational learning methods, in particular Bayesian Logic Networks as developed in the ProbCog project.
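A full Bayesian Logic Network models relational structure and is beyond a short snippet, but the underlying probabilistic interpretation can be illustrated with a naive Bayes computation: given observed actions, which activity explains them best? All probabilities below are invented example values, and the model is a deliberate simplification of what ProbCog provides.

```python
# Simplified illustration of probabilistic activity interpretation.
# Priors and likelihoods are invented example numbers, and conditional
# independence of observations is assumed for simplicity.
import math

PRIOR = {"SettingTable": 0.5, "Cooking": 0.5}
LIKELIHOOD = {
    "SettingTable": {"TransportCup": 0.6, "OpenCupboard": 0.3, "StirPot": 0.01},
    "Cooking":      {"TransportCup": 0.1, "OpenCupboard": 0.2, "StirPot": 0.5},
}

def posterior(observed_actions):
    """P(activity | observed actions) via Bayes' rule, in log space."""
    log_scores = {}
    for activity, prior in PRIOR.items():
        log_p = math.log(prior)
        for obs in observed_actions:
            log_p += math.log(LIKELIHOOD[activity][obs])
        log_scores[activity] = log_p
    # Normalize so the posteriors sum to one.
    z = math.log(sum(math.exp(s) for s in log_scores.values()))
    return {a: math.exp(s - z) for a, s in log_scores.items()}

post = posterior(["TransportCup", "OpenCupboard"])
print(max(post, key=post.get))  # → SettingTable
```

This also hints at how such models "justify" an interpretation: the posterior can be traced back to the prior and the per-observation likelihoods learned from recorded activities.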

Publications

Journal Articles and Book Chapters

Towards Automated Models of Activities of Daily Life (Michael Beetz, Moritz Tenorth, Dominik Jain, Jan Bandouch), In Technology and Disability, IOS Press, volume 22, 2010. [bib] [pdf]

Conference Papers

Towards Automated Models of Activities of Daily Life (Michael Beetz, Jan Bandouch, Dominik Jain, Moritz Tenorth), In First International Symposium on Quality of Life Technology -- Intelligent Systems for Better Living, 2009. [bib] [pdf]

Workshop Papers

The TUM Kitchen Data Set of Everyday Manipulation Activities for Motion Tracking and Action Recognition (Moritz Tenorth, Jan Bandouch, Michael Beetz), In IEEE International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences (THEMIS), in conjunction with ICCV2009, 2009. [bib] [pdf]

research/ameva.txt · Last modified: 2011/07/26 17:11 by tenorth