Intelligent Autonomous Systems
TUM School of Computation, Information and Technology
Technical University of Munich


Romeo: Body, hand and object tracking in everyday activities

The Romeo framework is a markerless visual tracking framework for articulated and rigid models, with a particular focus on tracking the human hand and multiple objects in complex manipulation activities. We use image streams from three high-definition cameras, which are segmented using color histograms and additionally used to build a voxel representation of the scene. A detailed representation of the human hand in complex articulation and object-manipulation scenarios involves a high-dimensional state space of 32 or more degrees of freedom. More than 75 additional parameters arise when the hand CAD model is adapted to the human instructor's hand in an initialization step. To find accurate tracking solutions, we use particle-filter-based tracking algorithms combined with local optimization techniques. Image evaluation is accelerated with the OpenCL GPGPU framework alongside multi-core CPU evaluation.
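The particle-filter tracking loop mentioned above can be sketched as follows. This is a minimal sample-importance-resample (SIR) illustration, not the Romeo implementation: the pose dimensionality, noise scales, and the Gaussian stand-in likelihood are assumptions for the sketch, whereas the real system scores pose hypotheses against the segmented camera images and voxel data.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, observation, log_likelihood, motion_noise=0.05):
    """One sample-importance-resample (SIR) step over a pose state space."""
    n, dim = particles.shape
    # 1. Diffuse hypotheses with the motion model (here: a random walk).
    particles = particles + rng.normal(0.0, motion_noise, size=(n, dim))
    # 2. Weight each hypothesis by how well it explains the observation,
    #    normalizing in log space to avoid underflow in high dimensions.
    logw = np.array([log_likelihood(p, observation) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3. Resample proportionally to the weights to focus on good hypotheses.
    return particles[rng.choice(n, size=n, p=w)]

# Toy stand-in likelihood: in Romeo this would score a rendered hand/object
# model against the segmented images; here it is a Gaussian around a hidden
# "true" pose so the sketch is self-contained.
def log_likelihood(pose, true_pose):
    return -np.sum((pose - true_pose) ** 2) / (2 * 0.1 ** 2)

dim = 8                                     # stand-in for the 32+ hand DOF
true_pose = np.zeros(dim)
particles = rng.uniform(-1.0, 1.0, size=(200, dim))
for _ in range(40):
    particles = pf_step(particles, true_pose, log_likelihood)
estimate = particles.mean(axis=0)           # posterior mean pose estimate
```

In the full system a local optimization step would refine the best hypotheses after resampling, and the per-particle likelihood evaluation is exactly the part that benefits from OpenCL/GPU acceleration, since each hypothesis can be scored independently.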

Raw camera input:

(Three raw camera views of frame kochen-b-04-00160: cameras a, b, c)

Images after segmentation:

(The same three views of frame kochen-b-04-00160 after segmentation)
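A color-histogram segmentation of the kind shown above can be sketched like this. The hue-only representation, bin count, and threshold are illustrative assumptions for the sketch; the actual Romeo pipeline and its parameters are not specified here.

```python
import numpy as np

def build_hue_histogram(hue_samples, bins=32):
    """Normalized hue histogram learned from labeled foreground pixels
    (e.g. hand-picked skin or object samples); hues lie in [0, 1)."""
    hist, _ = np.histogram(hue_samples, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def backproject(hue_image, hist, bins=32, threshold=0.02):
    """Foreground mask: look up each pixel's hue-bin probability
    in the learned histogram and threshold it."""
    idx = np.minimum((hue_image * bins).astype(int), bins - 1)
    return hist[idx] >= threshold

# Toy example: foreground hues cluster near 0.05, background near 0.6.
rng = np.random.default_rng(0)
samples = rng.normal(0.05, 0.01, size=500).clip(0.0, 0.999)
hist = build_hue_histogram(samples)
image = np.full((4, 4), 0.6)        # uniform background
image[1:3, 1:3] = 0.05              # a small foreground patch
mask = backproject(image, hist)     # True only on the patch
```

Because the lookup is a pure per-pixel operation, this step parallelizes trivially, which matches the use of GPU and multi-core CPU evaluation described above.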

Tracking results:





This project is partly funded by the German Research Foundation (DFG) as part of the MeMoMan project.
