HumActO: Tracking of human manipulation activities
The HumActO framework is a markerless visual tracking framework for articulated and rigid models, with special application to tracking the human hand and several objects in complex manipulation activities. We use image streams from three high-definition cameras, which are segmented using color histograms and additionally used to build a voxel representation of the scene context. A detailed representation of the human hand in complex articulations and object-manipulation scenarios involves a high-dimensional state space of 32 or more degrees of freedom. An additional 75+ parameters arise when adapting the hand CAD model to the human instructor's hand in an initialization step. To find accurate tracking solutions, we use particle-filter-based tracking algorithms and local optimization techniques. Image evaluation is accelerated with the OpenCL GPGPU framework along with multi-processor CPU evaluation.
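The color-histogram segmentation mentioned above can be illustrated with a minimal histogram-backprojection sketch. This is a generic illustration, not the project's actual pipeline; the bin count, hue-only color model, and threshold are assumptions for the example.

```python
import numpy as np

def build_color_histogram(samples, bins=32):
    """Build a normalized hue histogram from training pixel samples.

    `samples` is an (N,) array of hue values in [0, 1) taken from
    hand-labeled skin/object regions (hypothetical training data).
    """
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    # Scale so the most frequent bin maps to 1.0.
    return hist / hist.max()

def backproject(image_hue, hist, threshold=0.5):
    """Label each pixel as foreground if its hue bin is frequent enough
    in the training histogram (classic histogram backprojection)."""
    bins = hist.shape[0]
    idx = np.clip((image_hue * bins).astype(int), 0, bins - 1)
    return hist[idx] >= threshold  # boolean foreground mask
```

In practice a full HSV or RGB histogram and per-class histograms (hand vs. each object) would be used; the thresholded mask would then feed the voxel carving of the scene.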
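The particle-filter tracking can be sketched as a standard sampling-importance-resampling (SIR) cycle over pose hypotheses. The random-walk motion model, noise scale, and likelihood callable below are illustrative assumptions; the project's actual motion and image-likelihood models (and the local optimization refinement) are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, likelihood, noise_sigma=0.05):
    """One predict-weight-resample cycle of a SIR particle filter.

    particles:  (N, D) array of pose hypotheses (D = degrees of freedom,
                e.g. 32+ for the hand model).
    likelihood: callable mapping an (N, D) array to (N,) observation
                scores (assumed image-comparison function).
    """
    # Predict: diffuse each hypothesis with Gaussian noise (random-walk motion model).
    particles = particles + rng.normal(0.0, noise_sigma, particles.shape)
    # Weight: evaluate the image likelihood of every hypothesis.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # Resample: draw N particles proportional to weight (systematic resampling).
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```

The likelihood evaluation is the expensive step, since every hypothesis must be rendered and compared against the segmented camera images, which is why it is the natural target for OpenCL and multi-core CPU parallelization.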
Videos
Hand picks egg (MPEG2, OGG)
Sample images
Raw camera input
(Three raw camera views a, b, c; cooking sequence, frame 00160)
Images after segmentation
(The same three camera views after color-histogram segmentation)
Tracking results
(Teaser image of tracking results)
Acknowledgements
This project is partly funded by the German Research Foundation (DFG) as part of the MeMoMan project.