Appearance and Movement

The creation of 3D virtual worlds finds applications in many fields, both technical (3D simulation, virtual prototyping, 3D catalogs and archiving, etc.) and artistic (e.g. the creation of 3D content for entertainment or educational purposes). The target size and level of detail of virtual worlds have increased tremendously over the past few years, making it possible to produce highly realistic synthetic images. Enhanced visual realism is obtained by defining detailed 3D models, including shape and movement, in conjunction with sophisticated appearances. Advances in shape and movement acquisition based on 3D scanners have led to considerable improvements in both the quality and the quantity of produced 3D models. However, major challenges remain in this field, related to the huge size of the models and to the transfer of acquired movements from one model to another.

The appearance of 3D models is commonly defined by a set of "textures": 2D images mapped onto the surfaces of 3D models, much like wallpaper. Textures enrich 3D surfaces with small-scale details: wood grain, the bricks of a façade, or blades of grass. With the increasing resolution of display devices, the creation of very high definition textures, i.e. with resolutions beyond hundreds of megapixels, has become a necessity. However, creating textures of this size with standard per-pixel editing tools is a tedious task for computer graphics artists, which increases production costs. The use of acquisition techniques is also limited, because the conditions required to capture real materials are very restrictive: controlled lighting, object accessibility, etc.
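
The texture mapping principle itself can be illustrated with a minimal sketch: each surface point carries (u, v) coordinates into the image, and the texture is sampled with bilinear interpolation. This is a generic, hypothetical NumPy sketch of standard texture lookup, not code from the project.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Sample a texture at continuous UV coordinates in [0, 1],
    using bilinear interpolation, as done when a 2D image is
    mapped onto a 3D surface."""
    h, w = texture.shape[:2]
    # Map UV to continuous pixel coordinates.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels.
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

# A tiny 2x2 grayscale "texture": alternating black and white texels.
tex = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(sample_texture(tex, 0.0, 0.0))  # exact texel -> 0.0
print(sample_texture(tex, 0.5, 0.5))  # center -> average 0.5
```

At megapixel scales the same lookup runs per fragment on the GPU; the sketch only makes the interpolation explicit.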

Objectives / Challenges

Our first challenge is to devise novel methods for the analysis and segmentation of 3D acquisition data, including 3D scans and motion capture data. Our goal is to offer tools that simplify the creation of 3D models, possibly with movement, as well as the construction of statistical atlases. Our second challenge is to develop methods to produce textures "by example", which means that the input data are texture samples extracted from images. To reach this goal, image analysis tools are needed to classify and extract textures at different scales. We also want to represent the materials of real objects with textures generated from a set of photographs taken with very few constraints. Our goal is to map these textures onto the surfaces of arbitrary 3D models and under arbitrary lighting conditions for realistic image synthesis applications.

Permanent participants

  • One Professor: Jean-Michel Dischler
  • One Research scientist ("Chargée de recherche"): Hyewon Seo
  • Five Associate Professors ("Maîtres de Conférences"): Rémi Allègre, Karim Chibout, Frédéric Cordier, Arash Habibi, Basile Sauvage
  • Three Research engineers: Frédéric Larue (2011-), Olivier Génevaux (2011-2015), Sylvain Thery (2015-)
  • Five Doctoral candidates (PhD students): Geoffrey Guingo (fixed-term contract starting Oct. 2015, ERC grant of Marie-Paule Cani and Allegorithmic contract), Guoliang Luo (Project SHARED contract from Oct. 2011 to Dec. 2014; PhD thesis defended on Nov. 4th, 2014), Vasyl Mykhalchuk (Project SHARED contract from Nov. 2011 to Apr. 2015; PhD thesis defended on April 9th, 2015), Alexandre Ribard (UNISTRA doctoral contract from Nov. 2015 to Apr. 2016), Kenneth Vanhoey (UNISTRA doctoral contract from Oct. 2010 to Sept. 2013; PhD thesis defended on Feb. 18th, 2014).


Shape analysis, registration, and segmentation of movement data

Recent advances in imaging technologies give us growing access to captures of the shape and motion of people or organs, obtained either with optical motion capture systems or with medical imaging scanners. Going a step beyond existing methods that use static shape information for shape analysis [2-MCS13], we have developed novel shape analysis methods that exploit motion information [2-SKCC13]. Our techniques are the first to formulate the problems of segmentation [4-LSC14, 4-LLS15], feature extraction [4-MSC14, 2-MSC15], similarity computation [4-LCS14, 8-Luog14, 2-LCS16], and correspondence computation [8-Mykh15] on time-varying surface data.
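
The scale-space idea behind difference-of-Gaussians feature detection on time-varying data can be sketched on a single per-vertex motion signal: smooth the signal at two temporal scales, subtract, and take local extrema of the response as temporal feature points. This is an illustrative sketch of the general DoG principle only, with hypothetical names and parameters, not the published AniM-DoG method.

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """Smooth a 1D signal with a normalized Gaussian kernel (reflect padding)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

def dog_feature_frames(motion, sigma=2.0, k=1.6):
    """Detect temporal feature points of a per-vertex motion signal
    as local extrema of a difference-of-Gaussians response."""
    dog = gaussian_smooth(motion, sigma) - gaussian_smooth(motion, k * sigma)
    # Local extrema of the DoG response, ignoring boundary frames.
    idx = [i for i in range(1, len(dog) - 1)
           if (dog[i] > dog[i-1] and dog[i] > dog[i+1])
           or (dog[i] < dog[i-1] and dog[i] < dog[i+1])]
    return idx, dog

# Synthetic motion signal: a burst of movement centered on frame 50.
frames = np.arange(100)
motion = np.exp(-((frames - 50) ** 2) / 50.0)
peaks, response = dog_feature_frames(motion)
print(peaks)  # extrema cluster around the burst at frame 50
```

Running the detector over several sigma values would assign each feature a temporal scale, analogous to the per-feature scales discussed above.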

[2-MSC15] Dynamic feature points detected by our AniM-DoG framework are illustrated on a number of selected frames of animated meshes. The color of a sphere represents the temporal scale (from blue to red) of the feature point, and the radius of the sphere indicates the spatial scale.
Given a pair of animated meshes exhibiting semantically similar motion, we compute a sparse set of feature points on each mesh and compute spatial correspondences among them, so that points with similar motion behavior are put in correspondence.

Appearance reconstruction

2D parametric color functions are widely used in image-based rendering and image relighting. They make it possible to express the color of a point as a function of a continuous directional parameter: the viewing direction or the incident light direction. Producing such functions from acquired data (photographs) is promising but difficult: acquiring data in a dense and uniform fashion is not always possible, while a simpler acquisition process yields sparse, scattered, and noisy data. We have developed a method for reconstructing radiance functions for digitized objects from photographs [2-VSGL13]. This method makes it possible to visualize digitized objects in their original lighting environments. We have also developed a method for simplifying meshes with attached radiance functions [2-VSKL15]. Our method makes it possible to produce very compact models of digitized objects with both geometry and appearance.
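
Fitting a parametric color function to sparse, scattered directional samples can be sketched as a least-squares problem. The polynomial basis and the (theta, phi) parameterization below are illustrative assumptions for the sketch, not the representation used in [2-VSGL13].

```python
import numpy as np

def fit_radiance(dirs, colors, degree=2):
    """Least-squares fit of a polynomial color function c(theta, phi)
    from sparse, scattered view-direction samples."""
    theta, phi = dirs[:, 0], dirs[:, 1]
    # Polynomial basis in the two direction parameters.
    cols = [theta**i * phi**j
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, colors, rcond=None)
    return coeffs

def eval_radiance(coeffs, theta, phi, degree=2):
    """Evaluate the fitted color function for one direction."""
    vals = [theta**i * phi**j
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    return np.dot(coeffs, vals)

# Scattered, noisy observations of a known directional color function.
rng = np.random.default_rng(0)
dirs = rng.uniform(0.0, 1.0, size=(50, 2))
true = 0.3 + 0.5 * dirs[:, 0] - 0.2 * dirs[:, 1] ** 2
colors = true + rng.normal(0.0, 0.01, 50)
c = fit_radiance(dirs, colors)
print(eval_radiance(c, 0.5, 0.5))  # close to 0.3 + 0.25 - 0.05 = 0.5
```

The least-squares formulation is what makes the fit tolerant to the sparse and noisy data produced by hand-held acquisition.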

[2-VSGL13] Starting from a set of photos obtained from hand-held shooting, a virtual representation of the appearance of an object is reconstructed. This appearance especially encodes specular effects.
[2-VSKL15] 3D virtual objects with their appearances are simplified: the goal is to reduce their size while minimizing the loss of visual quality.

Texture modeling and synthesis

Textures are crucial to enhance the visual realism of virtual worlds. To alleviate the workload of artists who have to design gigantic virtual worlds, we have developed methods that automatically generate high-resolution textures from a single input exemplar, with control over pattern preservation as well as over content randomness and variety in the resulting texture [2-GDG12, 2-GDG12a, 2-VSLD13, 2-GSVD14]. Moreover, in the context of a collaboration with Yale University, we investigated texture analysis [2-LSAD16] so as to provide better control over the content of the synthesized textures. Among these papers, three [2-VSLD13, 2-GSVD14, 2-LSAD16] were published in ACM Transactions on Graphics (IF: 3.725), the most prestigious journal in computer graphics. In recognition of his contributions, Jean-Michel Dischler was invited to the Dagstuhl Seminar on Real-World Visual Computing (Leibniz-Zentrum für Informatik) in Oct. 2013.
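
By-example synthesis in its most basic form copies patches from an exemplar into a larger output. The sketch below uses naive random patch tiling with no seam handling or pattern control; it is a hedged illustration of the general principle only, not the methods of [2-VSLD13] or [2-GSVD14].

```python
import numpy as np

def synthesize(exemplar, out_size, patch=8, seed=0):
    """Grow a larger texture by tiling patches sampled at random
    locations of the exemplar (the simplest by-example scheme)."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_size, out_size) + exemplar.shape[2:], exemplar.dtype)
    for y in range(0, out_size, patch):
        for x in range(0, out_size, patch):
            # Pick a random source patch; clip at the output border.
            sy = rng.integers(0, h - patch + 1)
            sx = rng.integers(0, w - patch + 1)
            ph = min(patch, out_size - y)
            pw = min(patch, out_size - x)
            out[y:y+ph, x:x+pw] = exemplar[sy:sy+ph, sx:sx+pw]
    return out

# A 32x32 random exemplar grown into a 128x128 texture.
exemplar = np.random.default_rng(1).random((32, 32))
big = synthesize(exemplar, 128)
print(big.shape)  # (128, 128)
```

Practical methods replace the random copy with seam optimization, multi-scale analysis, and user controls over randomness; the sketch only shows why a small exemplar suffices to cover an arbitrarily large surface.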

[2-VSLD13] Some textures are synthesized on-the-fly on the GPU from texture samples at multiple scales.
[2-GSVD14] Some textures are synthesized on-the-fly on the GPU, based on a spectral analysis.
[2-LSAD16] Some multi-scale label-maps are obtained with our texture analysis method. A possible application is interactive texture editing.


As stated before, advances in acquisition technologies for shape, motion and appearance have led to considerable improvements in both the quality and the quantity of produced 3D models. One of the main future challenges for computer graphics applications, however, is to better exploit these data in order to produce 1) higher-quality 3D models with less user work and 2) more controllable contents (not only digital copies). Improvements in acquisition technology are not sufficient: the core future research is to tailor 3D data processing technologies so as to improve 3D content production from these data. In this context, specialized analysis tools, as well as objective evaluations of them, are still lacking. Our future work addresses this core challenge.

In the field of motion capture, we aim to provide 1) a validation of computational approaches for salience extraction, 2) a prediction of eye movements, and 3) an investigation of efficient methods for building ground truth for spatio-temporal features. Building on this, we will concentrate on developing a statistical model (or atlas) based on compact representations of highly redundant 4D data. In the field of texturing, our future goal is to enable visualization in arbitrary lighting environments, which requires more elaborate analysis tools to classify the intrinsic properties of materials. Beyond improving quality, the analysis of input data will also allow us to produce more controllable contents, such as appearances that vary spatially depending on the location in the texture and on the location on the surface of a 3D model.