
BMBF Project 3DGIM2

3D Facial Analysis for Identification and Human Computer Interaction

 

Period: 1 October 2017 - 30 September 2019

 

This project is funded by the BMBF under funding code 03ZZ0468.

The 3Dsensation consortium aims at fundamentally redefining human-machine interaction. This also includes the automatic 3D analysis of human faces and facial expressions, which is useful in many tasks such as access control, the creation of new dialogue systems for human-computer interaction, or even medical therapy. Based on the results of a previous project (3DGIM), we are working on hybrid representations of the human face that allow for more realistic rendering and animation.

This project focuses on facial regions that are ignored by most other models. Problematic regions like the eyes, mouth, and lips are usually not represented in face models, although they have a huge impact on the quality of still and moving images generated from the facial model. While 3DGIM treated the face as one deforming object, we are now extending this 'one-model-fits-all' approach with specialized deformation and rendering models for complex regions like the eyes, mouth, and lips, to allow for more realistic renderings and animations. Another extension of this project is a hybrid animation pipeline that uses dynamic textures as well as geometry to model the deformation of a human face. Through this extension, we hope to overcome typical limitations of purely geometric animation.

In the second half of this project, we will focus on temporal deformation models, since not only the 'static' geometric quality affects the realism of a face model, but also its dynamics. Pure linear interpolation between different blendshapes usually results in unsatisfactory animations of the facial geometry, unless highly complex or manually rigged models are used. Texture-based animation can partially circumvent these problems by capturing and eventually replaying the actual video input. However, texture-based animation also requires the ability to interpolate between different facial expressions, for example to concatenate two or more captured sequences. To solve this problem, we will use recent machine learning techniques to analyze the captured data and extract more realistic deformation models.
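To make the interpolation problem concrete, the following minimal sketch shows naive linear blendshape interpolation in Python/NumPy. All function names, array shapes, and data here are hypothetical and chosen only to keep the example self-contained; this is an illustration of the generic technique, not the project's actual code.

    import numpy as np

    def blend(neutral, offsets, weights):
        """Linear blendshape model: neutral mesh plus a weighted
        sum of per-blendshape displacement offsets.

        neutral: (V, 3) vertex positions
        offsets: (K, V, 3) displacement of each blendshape from neutral
        weights: (K,) blendshape weights
        """
        return neutral + np.tensordot(np.asarray(weights), offsets, axes=1)

    def interpolate_expression(neutral, offsets, w_a, w_b, t):
        """Naive linear interpolation between two expressions,
        given as weight vectors w_a and w_b, at parameter t in [0, 1]."""
        w = (1.0 - t) * np.asarray(w_a) + t * np.asarray(w_b)
        return blend(neutral, offsets, w)

    # Toy example: a 4-vertex mesh with two (random) blendshapes.
    neutral = np.zeros((4, 3))
    offsets = np.random.default_rng(0).normal(size=(2, 4, 3))
    frame = interpolate_expression(neutral, offsets,
                                   [1.0, 0.0], [0.0, 1.0], 0.5)

Interpolating the weights frame by frame like this is cheap, but it moves every vertex along a straight line between the two expressions, which is exactly the kind of dynamics that tends to look unnatural without complex, manually rigged models; the temporal deformation models described above aim to replace such straight-line transitions with ones learned from captured data.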
