Three-dimensional (3D) graphics are commonplace in many applications, such as digital entertainment, cultural heritage, architecture, and scientific simulation. These data are increasingly rich and detailed: a complex 3D scene may contain millions of geometric primitives, enriched with various appearance attributes, such as texture maps designed to produce a realistic material appearance, as well as animation data.
As a number of VR/MR applications rely on 3D data stored on remote servers, strong latency problems may be encountered, caused by streaming the scene to the display device.
The objective of this proposal is to devise novel algorithms and tools enabling interactive visualization, in these constrained contexts (virtual and mixed reality, with local or remote 3D content), with a very high quality of user experience. As 3D scenes are visualized through a given viewport, we seek to optimize the display in this viewport by proposing (1) tools for the generation and compression of high-quality levels of detail, (2) visual quality metrics capable of predicting the quality of these levels of detail and of driving their generation, and (3) visual attention models capable of predicting where the observer is looking, and thus of selecting and filtering primitives and levels of detail. A distinctive property of the project is that we consider rich 3D data, including not only geometric information but also animation and complex physically based materials represented by texture maps (color, metalness, roughness, normals).
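To illustrate how objectives (2) and (3) could interact at display time, the sketch below selects a level of detail from the projected screen-space error of a model, with a saliency weight tightening the error budget where the observer is predicted to look. All function and parameter names here are illustrative assumptions, not part of the proposal; the doubling of geometric error per coarser level is likewise a simplifying assumption.

```python
import math

def select_lod(distance, fov_deg, screen_height_px, geometric_error,
               saliency, num_lods, error_budget_px=1.0):
    """Pick a level of detail (0 = finest) from projected screen-space error.

    Illustrative sketch: `geometric_error` is the world-space error of the
    finest level, assumed to double at each coarser level; `saliency` in
    (0, 1]..n scales the per-pixel error budget (salient regions get a
    tighter budget, hence finer geometry).
    """
    # Pixels covered by one world-space unit at this viewing distance
    pixels_per_unit = screen_height_px / (
        2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))
    # Attention-weighted error budget: likely gaze targets tolerate less error
    budget = error_budget_px / max(saliency, 1e-3)
    # Take the coarsest level whose projected error still fits the budget
    lod = 0
    while (lod + 1 < num_lods and
           geometric_error * (2 ** (lod + 1)) * pixels_per_unit <= budget):
        lod += 1
    return lod
```

For example, a distant model under this scheme is displayed at a coarse level, while the same model viewed up close, or flagged as salient by the attention model, is refined toward the finest level.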
The proposed tools will address both the transmission latency problems encountered with remote 3D content and the rendering constraints of virtual and mixed reality. We plan to implement two prototypes: a virtual reality prototype on the HTC Vive and a mixed reality prototype on the Microsoft HoloLens.