My research aims at recognizing facial expressions and the emotional state of humans from camera images in order to apply this information to human-computer interaction. Traditional human-computer interaction via mouse and keyboard is often considered slow and non-intuitive, and it requires time-consuming manual reading or even specific training. This is mainly because information on communication channels that are more natural to human beings, such as facial expressions, is not available to computers.
Recent research therefore aims at enabling machines to utilize communication channels that are natural to human beings, such as gestures or facial expressions. Humans interpret emotion from visual and auditory information and rely heavily on it during everyday communication. Knowledge about human behavior, intention, and emotion is thus necessary to construct convenient human-machine interaction mechanisms.
The recognition of facial expressions is conducted in several steps:
A preprocessing step often includes segmenting the face from the background. Skin color has proven useful for this task. We developed an algorithm that adapts to the properties of each image and therefore performs robustly even under strong lighting changes. The video below shows the original image data, the adaptive approach, and a static approach that does not adapt to the specific image content.
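To illustrate what a *static* skin-color classifier looks like, the sketch below applies a commonly used fixed RGB rule to every pixel. The thresholds are illustrative and are not those of our adaptive algorithm, which re-estimates its decision boundaries from the image content:

```python
import numpy as np

def skin_mask(rgb):
    """Per-pixel skin classification with a fixed RGB rule.

    A static baseline for comparison only; the adaptive approach
    described above adjusts to the lighting of each image instead
    of relying on hard-coded thresholds.
    """
    # Cast to int so subtractions on uint8 images cannot wrap around.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)      # bright enough in all channels
        & (spread > 15)                     # not grayscale
        & (np.abs(r - g) > 15)              # red clearly dominates green
        & (r > g) & (r > b)                 # red is the strongest channel
    )

# One skin-colored pixel and one blue (non-skin) pixel:
img = np.array([[[200, 120, 90], [30, 60, 200]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]]
```

Exactly this kind of rule fails under colored or dim lighting, which is what motivates adapting the decision boundaries per image.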
This information is used to fit a face model onto the image. Models represent knowledge about real-world objects, such as position, shape, or texture, via model parameters and therefore provide an abstraction for the interpretation task. In the case of facial expression recognition, the model reflects face-related information such as the position and rotation in 3D space, the degree to which the eyebrows are raised, or how far the mouth is opened. In a sequence of images, the model parameters have to be updated for every image in order to reflect the image content. For visualization, the video below shows a simple 3D tracking of a human face.
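Such a parameter set might be organized as below; the split into rigid pose and non-rigid expression parameters, and all names and value ranges, are illustrative assumptions rather than the actual face model:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class FaceModelParams:
    """Hypothetical parameter vector of a deformable face model."""
    # Rigid pose: position and rotation in 3D space.
    translation: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: np.ndarray = field(default_factory=lambda: np.zeros(3))  # Euler angles
    # Non-rigid expression parameters, normalized to [0, 1].
    eyebrow_raise: float = 0.0   # 0 = neutral, 1 = fully raised
    mouth_open: float = 0.0      # 0 = closed, 1 = fully open

    def as_vector(self):
        """Flatten into the vector that a fitting algorithm optimizes."""
        return np.concatenate([self.translation, self.rotation,
                               [self.eyebrow_raise, self.mouth_open]])

p = FaceModelParams(eyebrow_raise=0.8, mouth_open=0.3)
print(p.as_vector().shape)  # (8,)
```

Tracking a video then amounts to updating this vector frame by frame so that it keeps reflecting the image content.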
Finally, in order to extract higher-level information, the model parameters have to match the image content. The computational challenge of determining these model parameters is called model fitting and is tackled with a fitness function that measures the agreement between the image content and the model parameterization. Such functions are either minimized to determine a good model fit, or they calculate a parameter update directly. The process of model fitting is visualized in the video below. In the beginning, the model parameters do not match the image content. By changing the model parameters, the model fitness is increased, which is visualized by the color of the model changing from red to green.
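The minimization variant can be sketched with a toy example: a translation-only model, a sum-of-squared-distances fitness function between model-predicted and observed landmark positions, and gradient descent on the parameters. All details here (the model, the landmarks, the optimizer) are illustrative and not the objective functions used in this research:

```python
import numpy as np

def fitness(params, observed, model_fn):
    """Sum of squared distances between predicted and observed landmarks.

    Lower is better; fitting minimizes this over the parameters.
    """
    return np.sum((model_fn(params) - observed) ** 2)

# Toy model: three 2D landmarks, rigidly translated by params (tx, ty).
base_shape = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
model_fn = lambda p: base_shape + p

# Observed landmarks, generated with a known ground-truth offset.
observed = base_shape + np.array([2.0, -1.0])

# Fitting loop: gradient descent on the translation parameters.
p = np.zeros(2)
for _ in range(100):
    grad = 2.0 * np.sum(model_fn(p) - observed, axis=0)  # analytic gradient
    p -= 0.1 * grad
print(np.round(p, 3))  # converges to approximately [ 2. -1.]
```

The alternative mentioned above, computing a parameter update directly from the image, replaces the iterative loop with a learned mapping from residuals to updates (as in the displacement-expert work listed in the publications).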
|Implicit Coordination with Shared Belief: A Heterogeneous Robot Soccer Team Case Study, In Advanced Robotics, the International Journal of the Robotics Society of Japan, 2010. [bib]|
|Adjusted Pixel Features for Facial Component Classification, In Image and Vision Computing Journal, 2009. [bib]|
|Recognizing Facial Expressions Using Model-based Image Interpretation, In Advances in Human-Computer Interaction (Shane Pinder, ed.), I-Tech, volume 1, 2008. [bib]|
|Learning Displacement Experts from Multi-band Images for Face Model Fitting, In International Conference on Advances in Computer-Human Interaction, 2011. [bib]|
|Improving Aspects of Empathy and Subjective Performance for HRI through Mirroring Facial Expressions, In Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication, 2011. [bib]|
|Real-Time Face and Gesture Analysis for Human-Robot Interaction, In Real-Time Image and Video Processing 2010, 2010. (Invited paper) [bib]|
|Mirror my emotions! Combining facial expression analysis and synthesis on a robot, In The Thirty Sixth Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB2010), 2010. [bib]|
|Towards robotic facial mimicry: system development and evaluation, In Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication, 2010. [bib]|
|A Model Based Approach for Expression Invariant Face Recognition, In 3rd International Conference on Biometrics, Alghero, Italy, Springer, 2009. [bib] [pdf]|
|Multi-Feature Fusion in Advanced Robotics Applications, In International Conference on Frontiers of Information Technology, ACM, 2009. [bib] [pdf]|
|Facial Expressions Recognition from Image Sequences, In 2nd International Conference on Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, Prague, Czech Republic, Springer, 2009. [bib] [pdf]|
|Model Based Analysis of Face Images for Facial Feature Extraction, In Computer Analysis of Images and Patterns, Münster, Germany, Springer, 2009. [bib] [pdf]|
|3D Model for Face Recognition across Facial Expressions, In Biometric ID Management and Multimodal Communication, Madrid, Spain, Springer, 2009. [bib]|
|Facial Expression Recognition with 3D Deformable Models, In Proceedings of the 2nd International Conference on Advances in Computer-Human Interaction (ACHI), Springer, 2009. (Best Paper Award) [bib]|
|Did I Get it Right: Head Gesture Analysis for Human-Machine Interaction, In Human-Computer Interaction. Novel Interaction Methods and Techniques, Springer, 2009. [bib]|
|Tailoring Model-based Techniques for Facial Expression Interpretation, In The First International Conference on Advances in Computer-Human Interaction (ACHI08), 2008. [bib] [pdf]|
|Robustly Classifying Facial Components Using a Set of Adjusted Pixel Features, In Proc. of the International Conference on Face and Gesture Recognition (FGR08), 2008. [bib] [pdf]|
|Model Based Face Recognition Across Facial Expressions, In Journal of Information and Communication Technology, 2008. [bib] [pdf]|
|A Real Time System for Model-based Interpretation of the Dynamics of Facial Expressions, In Proc. of the International Conference on Automatic Face and Gesture Recognition (FGR08), 2008. [bib] [pdf]|
|SIPBILD -- Mimik- und Gestikerkennung in der Mensch-Maschine-Schnittstelle, In Beiträge der 37. Jahrestagung der Gesellschaft für Informatik (GI), volume 1, 2007. [bib] [pdf]|
|Coordination without Negotiation in Teams of Heterogeneous Robots, In Proceedings of the RoboCup Symposium, 2006. [bib] [pdf]|
|Robustly Estimating the Color of Facial Components Using a Set of Adjusted Pixel Features, In 14. Workshop Farbbildverarbeitung, 2008. [bib]|
|Recognizing Facial Expressions Using Model-based Image Interpretation, In Verbal and Nonverbal Communication Behaviours, COST Action 2102 International Workshop, 2008. [bib]|
|Face Model Fitting based on Machine Learning from Multi-band Images of Facial Components, In Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment, held in conjunction with CVPR, 2008. [bib] [pdf]|
|Are You Happy with Your First Name?, In Proceedings of the 3rd Workshop on Emotion and Computing: Current Research and Future Impact, 2008. [bib]|
|Interpreting the Dynamics of Facial Expressions in Real Time Using Model-based Techniques, In Proceedings of the 3rd Workshop on Emotion and Computing: Current Research and Future Impact, 2008. [bib]|
|Estimating Natural Activity by Fitting 3D Models via Learned Objective Functions, In Workshop on Vision, Modeling, and Visualization (VMV), volume 1, 2007. [bib] [pdf]|
|Multi Joint Action in CoTeSys --- Setup and Challenges, Technical report, CoTeSys Cluster of Excellence: Technische Universität München & Ludwig-Maximilians-Universität München, 2010. [bib]|