ExRealis





ExRealis - A digitization framework

In many sectors, including industry, demand for digital content is steadily increasing, whether for enriching virtual worlds, running simulations, or prototyping before deployment. Many off-the-shelf 3D scanners are now available whose metrological accuracy makes it possible to measure real objects reliably and to capture faithful digital imprints. As a result, digital content production, so far the exclusive province of computer graphics artists, is partly turning into a technical data-gathering task that is often cheaper.

Despite these improvements in shape measurement technology, going from measured data to digital content directly usable by computer graphics applications still requires many intermediate processing steps. Moreover, for several years the IGG team has been applying its expertise to cultural heritage, a field where shape measurement alone is not enough. Capturing appearance is of major importance too, since only this information makes it possible to visualize the produced digital surrogates by faithfully simulating how the materials they are made of behave under varying viewing and lighting conditions. Acquiring this information requires, in turn, additional processing.

The goal of ExRealis is to provide a unified framework that answers all of these needs efficiently. It offers several tools, devices and software packages for producing digital content of various kinds from real data, with the purpose of supporting the research conducted by the IGG team on themes such as realistic rendering, texture synthesis, virtual reality and 3D animation, as well as enabling digitization services with our partners. ExRealis provides:

  • a range of devices and software tools covering the whole digitization processing pipeline, from data acquisition to the creation of textured 3D models;
  • some advanced methods for texture reconstruction and visualization accounting for complex lighting environments or material characteristics;
  • a motion capture system for the animation of avatars in virtual reality or biomechanical applications.



Our devices

The ExRealis framework contains several devices, detailed below, that enable the measurement of different kinds of information (geometry, colour, movement) from real scenes or real objects.


Short range scanner

This optical structured-light device allows the 3D scanning of objects whose size ranges from 20 cm to 1 m. The depth measurement range is 80 to 120 cm and the produced range images are 1280x960 pixels, which yields an average resolution of 600 to 700 points per square centimetre.
The controlling software has been entirely developed by our team, which offers a high degree of flexibility and the possibility of complete reconfiguration, so as to adjust, for instance, the device resolution to specific needs.
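As a quick sanity check (the arithmetic below is ours, not taken from the device datasheet), the stated image size and point density imply a field of view consistent with the advertised object sizes:

```python
# Sanity check of the stated short-range scanner resolution.
# A 1280x960 range image at 600-700 points/cm^2 implies a covered
# area of roughly 1755-2048 cm^2, i.e. about 42 x 45 cm, which is
# consistent with objects in the 20 cm - 1 m size range.

pixels = 1280 * 960                    # points per range image
for density in (600, 700):             # stated points per cm^2
    covered_area_cm2 = pixels / density
    print(density, round(covered_area_cm2))
```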


Medium range scanner

The ScanStation2 time-of-flight laser scanner from Leica Geosystems measures depths ranging from 0.2 to 300 m, at rates of up to 50,000 points/sec. Moreover, a set of targets provided by the manufacturer enables the registration of point clouds acquired from different locations.
Please see the technical documentation from Leica Geosystems for more details.


Photographic equipment

Our team also has at its disposal semi-professional photographic equipment consisting of a pair of Canon EOS 5D Mk II bodies, as well as two sets of 24 mm, 50 mm and 135 mm lenses.
This equipment is mainly dedicated to the acquisition of object appearance (colour, texture), but having every piece in duplicate also makes it possible, if needed, to capture pairs of stereoscopic images.


Motion capture system

This optical system, made of 12 Vicon T40 and T40S infrared cameras (4 Mpixels, 370 Hz), captures the motion of objects or subjects by measuring the trajectories of infrared markers in space. These trajectories may then be used, for instance, for the skeleton animation of 3D characters.
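As an illustration of how marker trajectories feed skeleton animation, the sketch below derives a joint angle from three marker positions in one captured frame; the marker placement and coordinate values are hypothetical:

```python
import numpy as np

def joint_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow formed by three 3D marker positions."""
    u = shoulder - elbow
    v = wrist - elbow
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# One captured frame (coordinates in metres; values are illustrative)
shoulder = np.array([0.0, 1.4, 0.0])
elbow    = np.array([0.0, 1.1, 0.0])
wrist    = np.array([0.3, 1.1, 0.0])
print(joint_angle(shoulder, elbow, wrist))  # a right angle, about 90.0
```

Applied per frame over a whole trajectory, such angles drive the rotation channels of an animated skeleton.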



Our software tools

The aforementioned devices provide only raw data. In order to produce usable digital content, these data must be processed by our software. With this in mind, we maintain a continuous development process to integrate every tool useful for creating realistic digital models from real physical objects. These developments, based on out-of-core data structures and mechanisms for handling very large data sets, allow geometry processing (3D mesh reconstruction from digitized point clouds) as well as appearance processing (colour/texture synthesis over meshes from sets of pictures).
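A minimal sketch of the out-of-core idea, assuming a simple binary file of float64 xyz points (the file format and chunk size are our illustrative choices, not those of ExRealis): the file is traversed in bounded-size chunks, so memory use stays constant no matter how large the scan is.

```python
import os
import struct
import tempfile

def bounding_box(path, chunk_points=4096):
    """Accumulate the bounding box of a binary file of float64 xyz points,
    reading a bounded number of points at a time (out-of-core style)."""
    record = struct.Struct("<3d")
    lo = [float("inf")] * 3
    hi = [float("-inf")] * 3
    with open(path, "rb") as f:
        while True:
            data = f.read(record.size * chunk_points)
            if not data:
                break
            for (x, y, z) in record.iter_unpack(data):
                for i, c in enumerate((x, y, z)):
                    lo[i] = min(lo[i], c)
                    hi[i] = max(hi[i], c)
    return lo, hi

# Write a tiny test file and compute its bounding box.
path = os.path.join(tempfile.gettempdir(), "points.bin")
with open(path, "wb") as f:
    for p in [(0.0, 1.0, 2.0), (-1.0, 5.0, 0.5), (3.0, -2.0, 1.0)]:
        f.write(struct.pack("<3d", *p))
lo, hi = bounding_box(path)
print(lo, hi)  # [-1.0, -2.0, 0.5] [3.0, 5.0, 2.0]
```

The same chunked traversal pattern applies to heavier operations (normal estimation, mesh reconstruction) where the full cloud cannot fit in memory.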


Geometry acquisition and processing

After a 3D scanning session (the data gathering itself), many processing steps have to be applied to the raw data produced by the scanner before a usable 3D model is obtained. These generally include, among other things, the following steps:

  • cleaning, which aims at removing possible measurement errors (outliers, digitization noise, etc.);
  • geometry registration, which aligns the point clouds recovered by the scanner from different viewpoints with respect to each other;
  • integration, through which a meshed surface is obtained from the measured 3D points;
  • decimation, which provides different versions of the recovered mesh at multiple resolutions, depending on the targeted application.
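The cleaning step can be sketched with a classical statistical outlier-removal scheme (a common approach, not necessarily the one implemented in ExRealis): points whose mean distance to their nearest neighbours is abnormally large are discarded.

```python
import numpy as np

def remove_outliers(points, k=4, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the global mean
    (brute-force pairwise distances; fine for small clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip distance to self (0)
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

# A dense cluster plus one isolated measurement error
rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3)) * 0.01
cloud = np.vstack([cloud, [10.0, 10.0, 10.0]])   # obvious outlier
print(len(remove_outliers(cloud)))  # 50: the outlier is discarded
```

Real pipelines replace the brute-force distance matrix with a spatial index (k-d tree, octree) so the filter scales to millions of points.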

[Image: PipelineGeo EN.png (the geometry processing pipeline)]

Our software covers every step of this processing pipeline, either through fully automatic algorithms or through a suitable user interface whenever user intervention is needed, as in some of the cleaning tasks. We are thus in a position to produce, from real objects, 3D models that are correct in terms of geometry and topology, at different levels of detail, and that can be exported to many file formats compatible with off-the-shelf CAD software.
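The geometry registration step typically relies on iterative closest point (ICP); its core, the least-squares rigid transform between two corresponded point sets, can be sketched as follows (a textbook Kabsch/Procrustes solution, not the team's actual implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch solution, the core of each point-to-point ICP iteration
    used to register clouds scanned from different viewpoints)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Recover a known rotation about z plus a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
src = np.random.default_rng(1).normal(size=(20, 3))
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Full ICP alternates this solve with a nearest-neighbour search that re-estimates the correspondences, until the alignment converges.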


Appearance acquisition and processing

Acquiring the appearance of a real object starts with a photographic campaign, so as to measure its photometric properties. Reconstructing a texture from these pictures then requires solving the following problems:

  • the mesh corresponding to the object, recovered through a prior 3D scanning session, must first be unfolded into texture space through a parameterization step, which defines the layout used to store the appearance information to be reconstructed;
  • the pictures must then be registered with respect to the object geometry, by estimating their poses and optical parameters; these parameters allow the reprojection of the image contents onto the 3D mesh;
  • finally, the chromatic samples coming from the pictures have to be processed and combined to produce a texture free from the visual artefacts commonly caused by chromatic aberrations or by inaccuracies in the picture-to-geometry registration.
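The reprojection mentioned in the second step can be sketched with a standard pinhole camera model (the calibration values below are hypothetical):

```python
import numpy as np

def project(vertex, K, R, t):
    """Project a 3D mesh vertex into pixel coordinates with a pinhole
    camera of intrinsics K and pose (R, t): the reprojection used to
    map picture contents onto the geometry."""
    p = K @ (R @ vertex + t)      # world -> camera -> image plane
    return p[:2] / p[2]           # perspective divide

# Hypothetical calibration: 1000 px focal length, principal point (640, 480)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                     # camera aligned with the world axes
t = np.array([0.0, 0.0, 2.0])     # object 2 m in front of the camera
print(project(np.array([0.1, -0.05, 0.0]), K, R, t))  # about (690, 455)
```

Once every vertex knows its pixel coordinates in each registered picture, the corresponding colour samples can be fetched and blended into the texture.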

[Image: PipelineAppearance EN.png (the appearance processing pipeline)]

Produced textures may consist either of simple colour information, or of more advanced representation models that capture not only the object's hue but also other properties of its material, such as shininess.

The IGG team has worked on the fitting and real-time rendering of such models, with a particular focus on light fields. A light field captures the appearance of an object within a given lighting environment: the one present at acquisition time. This includes, among other things, all illumination effects related to the observer's movement around the object (specular highlights, inter-reflections, etc.), and afterwards allows free inspection of the digital copy under the same lighting conditions, while maintaining a higher degree of realism than more basic texture models.
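One compact way to store such view-dependent appearance (a common encoding, not necessarily the representation used by the team) is to keep, per surface point, a few spherical-harmonic coefficients per colour channel and evaluate them at the view direction:

```python
import numpy as np

def sh_basis(d):
    """First four real spherical-harmonic basis functions at the unit
    view direction d = (x, y, z)."""
    x, y, z = d
    return np.array([0.282095,            # l=0 (constant)
                     0.488603 * y,        # l=1, m=-1
                     0.488603 * z,        # l=1, m=0
                     0.488603 * x])       # l=1, m=1

def view_dependent_colour(coeffs, view_dir):
    """Colour of one surface point seen from view_dir, given a (4, 3)
    array of per-channel SH coefficients stored in the texture."""
    return sh_basis(view_dir) @ coeffs

# Illustrative coefficients: reddish base colour, brighter when seen from +z
coeffs = np.array([[1.0, 0.2, 0.2],      # constant term (base colour)
                   [0.0, 0.0, 0.0],
                   [0.5, 0.5, 0.5],      # z-dependent highlight
                   [0.0, 0.0, 0.0]])
front = view_dependent_colour(coeffs, np.array([0.0, 0.0, 1.0]))
side  = view_dependent_colour(coeffs, np.array([1.0, 0.0, 0.0]))
print(front, side)  # the highlight fades as the viewer moves sideways
```

Evaluating the basis in a fragment shader gives exactly the real-time, view-dependent behaviour described above at the cost of a few extra texture channels.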

All of these software bricks, needed for appearance acquisition, have been integrated into ExRealis, making it a powerful tool for the reconstruction of textured 3D meshes from real data.

Comparison between colour texture and light field. Reflections improve realism and offer a better understanding of the nature of the materials the object is made of.




Our skills

After 10 years of working on combined 3D and appearance digitization, the IGG team has acquired solid expertise in the field and know-how in applying it to the practical cases of digitization campaigns. Within the framework of various collaborations, we have had many occasions to apply this know-how through services like those presented below.


Œuvre Notre Dame foundation

Within the framework of the Eveil3D project, and in partnership with the technology transfer centre Holo3 and the Œuvre Notre Dame foundation, the IGG team digitized two statues of the cathedral of Strasbourg in October 2013. The goal was to use these statues in an immersive 3D environment for a language-learning serious game. About 50 cm high, they were digitized with our short range structured light scanner. A photographic campaign was also carried out so as to produce colour textures for both models.

Bear statue, cathedral of Strasbourg. From left: picture, 3D model reconstructed after geometry acquisition (35M points), textured 3D model reconstructed after appearance acquisition (55 pictures).


Bull statue, cathedral of Strasbourg. From left: picture, 3D model reconstructed after geometry acquisition (22M points), textured 3D model reconstructed after appearance acquisition (28 pictures).



Inter-university House of Human Sciences (MISHA), Strasbourg

MISHA has at its disposal a huge collection of ethnographic artefacts, and launched, some years ago, a project based on the OpenSIM game engine that provides a virtual world for the study and contextualization of these artefacts. Obviously, this requires digital copies of the artefacts, and therefore digitizing them beforehand.

For this purpose, they contacted us in autumn 2013: they needed digital copies of about ten African ethnographic objects from the Dogon tribe. Their geometry was digitized with the short range structured light scanner, and their colour through a photographic campaign.

Digital copies of three Dogon masks. From left: adone mask (antelope), kanaga mask, bird mask with a dege at the top of it.


Digital copy of a Hogon cup. Left-hand side: the cup fully assembled; right-hand side: each single piece presented separately.


Digital copies of two dege (female figurines).



Bois l'Abbé fortifications

Below are some renderings of our digitization of part of the Bois l'Abbé fortifications (48°12'16.3"N 6°24'00.8"E), located at Uxegney near Épinal, France. The final model comprises about 63M points acquired with the Leica ScanStation2 scanner from 20 different locations. The colour assigned to the points comes from the scanner's internal camera, which is intended for convenient data handling rather than for faithful appearance acquisition.


Gypsothèque, university of Strasbourg

We had the occasion to get in touch with the Gypsothèque of the University of Strasbourg (a museum of plaster copies of cultural artefacts) to digitize one of their statues.

Digitized model of a statue of Aphrodite, Gypsothèque of Strasbourg.



Some in-house digitizations

Here are some models digitized in our lab to produce data sets illustrating our team's research. Some of these 3D models come with light field textures, which simulate the illumination variations related to the observer's movement for greater realism, as explained above. Note that basic colour textures can also be exported using standard image file formats.



Collaborations

Several national and regional projects are supporting this framework:

  • Projet Interreg EVEIL3D,
  • ANR ATROCO,
  • Ministère RIAM AMI3D,
  • Région Pôle Image.



Contacts

If you need services or technology transfer, or if you are simply interested in one of the models presented on this page, please send an email to Frédéric Larue, research engineer in charge of the ExRealis framework, at: flarue AT unistra.fr