Project Details

End-to-End View-Dependent Compressed Rendering

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2024
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 528364066
 
Open-world games, scientific simulations, and other applications often involve huge scenes that are challenging to store, transmit, and render. If the scene does not fit into GPU memory, the application needs to schedule data streaming and, optionally, decompression, so that out-of-core data is available in time for rendering. To make good use of the available memory, level-of-detail (LOD) management is needed to keep the memory footprint and rendering demands reasonable. These requirements are highly relevant to contemporary game technology, but are still not considered solved. Despite the current interest in out-of-core rendering, we observe that such systems can easily be choked for two reasons. First, even with LOD, the visible part of the scene may consume too much memory bandwidth during rendering or may be too large to fit into memory. Forcefully reducing the LOD is not a good solution, as it may degrade visual quality. We would prefer compressed rendering, i.e., issuing shader workloads directly from an in-core compressed mesh representation. Second, compared to conventional (i.e., discrete) LOD, rendering with progressive or view-dependent LOD imposes a non-negligible runtime overhead, as it generally does not match the batch-oriented processing model of the GPU well. However, conventional LOD wastes a non-negligible number of triangles if scenes are huge. Ideally, we would like to render view-dependent LOD with a low overhead comparable to non-view-dependent LOD.

In this proposal, we present a compressed rendering infrastructure that overcomes both limitations. Our rendering system will be structured into a front-end and a back-end: the back-end is responsible for fetching and providing the data, while the front-end is responsible for rendering it. Unlike previous approaches, the front-end will be implemented directly as part of the graphics pipeline using mesh shaders, operating on primitive groups or "meshlets". (1) Our system will decompress geometry on the fly without noticeable extra load on the overall system. This capability makes it easy to design a streaming system, since the same compressed representation can be kept in memory and on non-volatile storage. To support decompression in the mesh shader, we propose a novel low-entropy format that lends itself to fully parallel processing. (2) Our system will render view-dependent LOD with marginal overhead. In contrast to previous approaches, extracting a different LOD happens on the fly in the mesh shader. Ahead-of-time analysis of meshlet boundaries ensures that we obtain crack-free continuous LOD with negligible runtime overhead.
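To make the two contributions more concrete, the following C++ sketch illustrates one possible shape of a per-meshlet compressed representation together with a simple view-dependent LOD selection driven by projected screen-space error. All names, layouts, and parameters (CompressedMeshlet, decodePosition, selectLod, the 16-bit quantization, the per-LOD error bounds) are illustrative assumptions made for this summary, not the format proposed by the project.

// Illustrative sketch only; the concrete layout is an assumption, not the
// project's actual compressed format.
#include <cstdint>

// A hypothetical compressed meshlet: vertex positions are quantized to a
// 16-bit grid inside the meshlet's bounding box, and the triangle list is
// ordered so that each coarser LOD reuses a prefix of the index data.
struct CompressedMeshlet {
    float    bboxMin[3];
    float    bboxExtent[3];              // quantization range of the meshlet
    uint16_t quantizedPositions[64 * 3]; // up to 64 local vertices
    uint8_t  triangleIndices[126 * 3];   // 8-bit indices into the local vertices
    uint8_t  triangleCountPerLod[4];     // triangles used by LOD 0 (full) .. 3 (coarse)
    float    lodErrorBound[4];           // object-space error introduced by each LOD
};

// Dequantize one vertex -- the work a single mesh-shader thread would do,
// so decompression is fully parallel across the meshlet's vertices.
inline void decodePosition(const CompressedMeshlet& m, unsigned v, float out[3]) {
    for (int c = 0; c < 3; ++c) {
        float t = m.quantizedPositions[v * 3 + c] / 65535.0f;
        out[c] = m.bboxMin[c] + t * m.bboxExtent[c];
    }
}

// Pick the coarsest LOD whose projected error stays below one pixel; boundary
// vertices shared with neighbouring meshlets would additionally be constrained
// by the ahead-of-time boundary analysis so adjacent meshlets stay crack-free.
inline int selectLod(const CompressedMeshlet& m, float distanceToCamera, float focalLengthPx) {
    for (int lod = 3; lod >= 1; --lod) {
        float projectedErrorPx = m.lodErrorBound[lod] * focalLengthPx / distanceToCamera;
        if (projectedErrorPx < 1.0f) return lod;
    }
    return 0; // fall back to full resolution
}

In a pipeline of this kind, a per-meshlet decision such as selectLod could run in the task/amplification stage, while per-vertex decoding such as decodePosition would run inside the mesh shader itself; this is one plausible mapping, not the project's stated design.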
DFG Programme Research Grants
International Connection Austria
Cooperation Partner Professor Dr. Markus Steinberger
 
 
