
Modeling and Capturing 3D Shape and Motion of Humans

Photorealistic modeling of human characters in computer graphics is of special interest because it is required in many areas, for example in modern movie and computer game productions. Creating realistic human models is relatively simple with current modeling software, but modeling an existing real person in detail is still a very cumbersome task.

Our research focuses on realistic and automatic modeling as well as tracking of human body motion. A skinning-based approach is chosen to support efficient, realistic animation. For increased realism, an artifact-free skinning function is enhanced to blend the influence of multiple kinematic joints, so that a natural appearance is maintained over a wide range of complex motions. To set up a subject-specific model, an automatic, data-driven optimization framework is introduced. Registered, watertight example meshes of different poses serve as input. In an efficient loop, all components of the animatable model are optimized to closely resemble the training data: rest-pose vertices, kinematic joints and skinning weights.
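
To illustrate the skinning formulation, the sketch below shows a plain linear blend skinning function that deforms rest-pose vertices by blending the transformations of multiple kinematic joints. It is a minimal illustration only: the function and parameter names are hypothetical, and the enhanced, artifact-free skinning function used in this work is not reproduced here.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, joint_transforms, skinning_weights):
    """Deform rest-pose vertices by blending per-joint rigid transforms.

    rest_vertices    : (V, 3) rest-pose vertex positions
    joint_transforms : (J, 4, 4) homogeneous transform of each kinematic joint
                       (already composed with the inverse bind pose)
    skinning_weights : (V, J) per-vertex joint influences, each row sums to 1
    """
    num_vertices = rest_vertices.shape[0]
    # Homogeneous coordinates of the rest-pose vertices: (V, 4)
    rest_h = np.concatenate([rest_vertices, np.ones((num_vertices, 1))], axis=1)

    # Blend the joint transforms per vertex: (V, 4, 4)
    blended = np.einsum('vj,jab->vab', skinning_weights, joint_transforms)

    # Apply the blended transform to each vertex, drop the homogeneous coordinate
    deformed_h = np.einsum('vab,vb->va', blended, rest_h)
    return deformed_h[:, :3]
```

In the optimization loop described above, such a deformation would be evaluated repeatedly while rest-pose vertices, joint parameters and skinning weights are adjusted so that the deformed model matches the registered example meshes.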

The following figure shows an overview of the model learning pipeline: registered input example scans (left), the initial model with binary skinning weights corresponding to rigid kinematics (middle), and the optimized model with skinning weights, rest-pose vertices and kinematic skeleton compliant with the training data (right).

[Figure: kinematic-human-modeling1.png]

For tracking sequences of noisy, partial 3D observations, a markerless motion capture method with simultaneous detailed model adaptation is proposed. The non-parametric formulation supports free-form deformation of the model's shape as well as unconstrained adaptation of the kinematic joints, thereby allowing individual peculiarities of the captured subject to be extracted. Integrated a priori knowledge of human shape and pose, extracted from training data, ensures that the adapted models maintain a natural and realistic appearance. The result is an animatable model adapted to the captured subject as well as a sequence of animation parameters that faithfully resembles the input data.
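
As a rough illustration of the per-frame fitting, the following sketch evaluates an energy that combines a data term (distance of the partial 3D observations to the adapted model surface) with a statistical prior keeping the adapted shape close to statistics learned from training data. All names, the parameter layout and the simple nearest-neighbour data term are assumptions for illustration; they do not reproduce the exact objective of the proposed method.

```python
import numpy as np
from scipy.spatial import cKDTree

def fitting_energy(params, observed_points, model_fn, num_pose_params,
                   shape_mean, shape_cov_inv, w_prior=0.1):
    """Per-frame energy: data term plus statistical shape prior.

    params          : concatenated pose and shape parameters (hypothetical layout)
    observed_points : (N, 3) noisy, partial 3D observations of the current frame
    model_fn        : callable mapping (pose, shape) -> (V, 3) deformed model vertices
    shape_mean, shape_cov_inv : learned shape statistics used as a priori knowledge
    """
    pose, shape = params[:num_pose_params], params[num_pose_params:]
    vertices = model_fn(pose, shape)          # skinned, adapted model surface

    # Data term: distance from each observation to its closest model vertex
    tree = cKDTree(vertices)
    dists, _ = tree.query(observed_points)
    data_term = np.mean(dists ** 2)

    # Prior term: penalize shapes far from the learned shape distribution
    d = shape - shape_mean
    prior_term = float(d @ shape_cov_inv @ d)

    return data_term + w_prior * prior_term
```

Such an energy could be minimized per frame with a generic optimizer, initializing the pose with the estimate of the previous frame.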

The following figure shows an overview of the model-adaptive motion capture processing steps: one of 12 input RGB camera views (left), the noisy multi-view 3D reconstruction in a frontal view (red), the kinematic control skeleton (stick figure), and the reconstructed animatable model (cyan, right).

[Figure: kinematic-human-modeling2.png]

Altogether, these approaches provide realistic and automatic modeling of human characters that accurately resembles sequences of 3D input data. Going one step further, these techniques can be used to directly re-animate recorded 3D sequences by propagating vertex motion changes back to the captured data. The following figure shows a captured/original frame of a scan sequence (1st image on the left) and animation results on that frame for gaze correction (all remaining images on the right).
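
A minimal sketch of this re-animation idea is given below: the per-vertex displacement between the fitted model in the captured pose and in the modified pose is transferred to the scan points via nearest-neighbour correspondences. Function and variable names are hypothetical and the correspondence scheme is simplified for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def reanimate_scan(scan_points, model_original, model_modified):
    """Propagate model vertex motion back to the captured scan.

    scan_points    : (N, 3) points of the captured frame
    model_original : (V, 3) fitted model vertices in the captured pose
    model_modified : (V, 3) the same vertices after changing the animation parameters
    """
    # Per-vertex displacement induced by the changed animation parameters
    displacement = model_modified - model_original

    # Move each scan point by the displacement of its closest model vertex
    tree = cKDTree(model_original)
    _, nearest = tree.query(scan_points)
    return scan_points + displacement[nearest]
```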

[Figure: kinematic-human-modeling3.png]

Publications

P. Fechteler, Multi-View Motion Capture based on Model Adaptation, Dissertation, Humboldt University Berlin, Nov. 2019.

P. Fechteler, A. Hilsmann, P. Eisert, Markerless Multiview Motion Capture with 3D Shape Model Adaptation, Computer Graphics Forum, 38(6), pp. 91-109, March 2019. [PDF]

P. Fechteler, L. Kausch, A. Hilsmann, P. Eisert, Animatable 3D Model Generation from 2D Monocular Visual Data, Proc. IEEE Int. Conf. on Image Processing (ICIP), Athens, Greece, Oct. 2018.

P. Fechteler, W. Paier, A. Hilsmann, P. Eisert, Real-time Avatar Animation with Dynamic Face Texturing, Proc. IEEE Int. Conf. on Image Processing (ICIP), Phoenix, Arizona, USA, Sep. 2016.

P. Fechteler, A. Hilsmann, P. Eisert, Example-based Body Model Optimization and Skinning, Proc. Eurographics, short paper, Lisbon, Portugal, May 2016. [PDF]

P. Fechteler, W. Paier, P. Eisert, Articulated 3D Model Tracking with on-the-fly Texturing, Proc. IEEE Int. Conf. on Image Processing (ICIP), Paris, France, Oct. 2014. [PDF]

P. Fechteler, A. Hilsmann, P. Eisert, Kinematic ICP for Articulated Template Fitting, Proc. International Workshop on Vision, Modeling and Visualization (VMV), Magdeburg, Germany, Nov. 2012. [PDF]

P. Fechteler, P. Eisert, Recovering Articulated Pose of 3D Point Clouds, Proc. 8th European Conference on Visual Media Production (CVMP), London, UK, Nov. 2011. [PDF]