DFG Project IRCON
Image-based Representation of Clothes for Realistic Virtual Try-On
Period: 1 August 2011 – 31 July 2014
This project is funded by the German Research Foundation (DFG), grant EI 524/2-1.
This project addresses the realistic visualization of garments in real-time augmented reality environments, where virtual clothes are rendered into real video material. From a scientific point of view, this scenario presents numerous challenges. The most crucial goal is the realistic visualization of virtual clothes rendered onto a moving human body while the system runs in real time. Existing approaches either focus on high-quality cloth analysis, simulation, and rendering but are far from running in real time, or provide real-time visualization at a much lower level of visual quality and plausibility.

In this research project we aim at developing novel solutions that enable real-time cloth visualization with very high rendering quality. This can be achieved with new representations of clothes that combine a coarse 3D model with image-based rendering techniques. The geometric information will be used for low-resolution shape adaptation, while small details (e.g. fine wrinkles) as well as complex shading and reflection properties will be accounted for through numerous images captured in an offline process. The images contain information on shading, texture distortion, and silhouette at fine wrinkles. These representations will be estimated in advance from real samples of cloth captured in different poses, using sophisticated image analysis methods, thus shifting computational complexity into the training phase.

For rendering, pose-dependent geometry and appearance are interpolated and merged from the stored representations. This procedure will greatly reduce the computational complexity of interactive visualization and enable high-end visual rendering in real time. By separating the surface albedo (i.e. its underlying local color) from shading, and by estimating spatial texture distortion at small wrinkles and creases, retexturing, i.e. changing the appearance of the piece of cloth, will become possible. The use of real images leads to very natural-looking results, while the geometric model will allow animation of the final representation.
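To make the interpolation step concrete, the following is a minimal sketch of pose-space blending of stored mesh warps, assuming Gaussian radial basis function weights over a joint-angle pose vector. The function names, the kernel choice, and the pose parameterization are illustrative assumptions for this sketch, not the project's actual implementation.

```python
import numpy as np

def rbf_weights(query_pose, example_poses, sigma=0.5):
    """Gaussian RBF weights of a query pose w.r.t. the stored example poses.

    query_pose:    (D,) joint-angle vector of the target body pose
    example_poses: (N, D) joint-angle vectors of the captured examples
    """
    d2 = np.sum((example_poses - query_pose) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()  # normalize so the result is a convex blend

def interpolate_warps(query_pose, example_poses, example_warps):
    """Blend stored mesh warps in pose space (scattered data interpolation).

    example_warps: (N, V, 2) 2D displacements of V mesh vertices for each
                   of the N captured poses
    returns:       (V, 2) interpolated warp for the query pose
    """
    w = rbf_weights(query_pose, example_poses)
    return np.einsum('n,nvc->vc', w, example_warps)
```

Normalizing the weights keeps the interpolated warp a convex combination of the captured examples, so the synthesized geometry stays close to observed data.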
Publications
A. Hilsmann: "Image-based Approaches for Photo-Realistic Rendering of Complex Objects", Dissertation, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, March 2014.
[pdf]
A. Hilsmann, P. Fechteler and P. Eisert: "Pose Space Image Based Rendering", Computer Graphics Forum 32(2) (Proc. of Eurographics 2013), pp. 265-274, May 2013.
[pdf] [bibitem]
Our approach synthesizes images of clothes from a database of images by interpolating image warps as well as intensities in pose space.
Abstract
This paper introduces a new image-based rendering approach for articulated objects with complex pose-dependent appearance, such as clothes. Our approach combines body-pose-dependent appearance and geometry to synthesize images of new poses from a database of examples. A geometric model allows animation and view interpolation, while small details as well as complex shading and reflection properties are modeled by pose-dependent appearance examples in a database. Correspondences between the images are represented as mesh-based warps, both in the spatial and intensity domain. For rendering, these warps are interpolated in pose space, i.e. the space of body poses, using scattered data interpolation methods. Warp estimation and geometry reconstruction are performed in an offline procedure, thus shifting computational complexity to an a-priori training phase.
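As a rough illustration of how an interpolated mesh warp could be applied to an example image, the sketch below uses scikit-image's piecewise-affine transform for the spatial domain and a simple additive offset for the intensity domain. The vertex layout, (x, y) coordinate convention, and the offset representation are assumptions made for this example, not the paper's exact formulation.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def synthesize(image, src_vertices, dst_vertices, intensity_offset):
    """Apply a mesh-based warp in the spatial and intensity domain.

    image:            (H, W, 3) float example image from the database
    src_vertices:     (V, 2) mesh vertex positions (x, y) in the example
    dst_vertices:     (V, 2) interpolated vertex positions for the new pose
    intensity_offset: (H, W) per-pixel shading change, also interpolated
                      in pose space
    """
    tform = PiecewiseAffineTransform()
    # warp() expects an inverse map (output -> input coordinates),
    # hence dst comes first in the estimation
    tform.estimate(dst_vertices, src_vertices)
    warped = warp(image, tform)                  # spatial warp
    out = warped + intensity_offset[..., None]   # intensity warp
    return np.clip(out, 0.0, 1.0)
```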
Videos
A. Hilsmann and P. Eisert: "Image-based Animation of Clothes", Eurographics 2012 Short Paper, Cagliari, Italy, May 2012.
[pdf] [bibitem]
Left: Base model representation consisting of a multi-view/multi-pose image set, silhouette information, view-dependent geometry, and 3D pose information. Center: Additional fine-scale mesh warps are estimated between neighboring views as well as neighboring poses, on top of the geometry-guided warps, to achieve view and pose consistency. Right: Appearance is separated into texture albedo, shading, and texture distortion to allow retexturing.
Abstract
We propose a pose-dependent image-based rendering approach for the visualization of clothes with very high rendering quality. Our representation combines body-pose-dependent geometry and appearance. A geometric model accounts for low-resolution shape adaptation, e.g. animation and view interpolation, while small details as well as complex shading/reflection properties are accounted for through numerous images. Information on shading, texture distortion, and silhouette at fine wrinkles is extracted from the images to allow later texture replacement. The image-based representations are estimated in advance from real samples of clothes captured in an offline process, thus shifting computational complexity into the training phase. For rendering, pose-dependent geometry and appearance are interpolated and merged from the stored representations.
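One possible way to compose the retextured result from the separated layers is sketched below, under the assumption that the texture distortion is stored as a per-pixel lookup into the texture domain; the layer names and the nearest-neighbor sampling are illustrative choices, not the method described in the paper.

```python
import numpy as np

def retexture(new_albedo, shading, distortion_map, mask):
    """Compose a retextured garment image from the separated layers.

    new_albedo:     (H, W, 3) replacement texture, mapped onto the
                    garment's undistorted texture domain
    shading:        (H, W) shading layer estimated from the examples
    distortion_map: (H, W, 2) per-pixel (x, y) lookup into the texture
                    domain, encoding the distortion at fine wrinkles
    mask:           (H, W) boolean garment silhouette
    """
    h, w = shading.shape
    # sample the new texture through the estimated distortion field
    xs = np.clip(distortion_map[..., 0].round().astype(int), 0, w - 1)
    ys = np.clip(distortion_map[..., 1].round().astype(int), 0, h - 1)
    warped = new_albedo[ys, xs]
    # reapply the original shading so fine wrinkles remain visible
    out = warped * shading[..., None]
    return np.where(mask[..., None], np.clip(out, 0.0, 1.0), 0.0)
```

Because shading and distortion are kept separate from the albedo, swapping in a new texture preserves the wrinkle appearance of the captured example.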