Online Autotuning for Interactive Raytracing
Final Report Abstract
Two previously separate areas, raytracing and autotuning, have been closely connected and synergistically developed further. Raytracing is an established and widely used technique and the basis for photorealistic rendering. The goal of autotuning is to automatically find optimal parameters, configurations, or combinations of algorithmic building blocks. Run-time-critical rendering techniques based on raytracing exhibit a large number of tuning possibilities, hardly manageable in its entirety, ranging from building and traversing acceleration structures to numerical light transport simulation. The optimal operating point, however, may shift over time, for example due to changes in dynamic 3D scenes.

The goal of this project was twofold. First, we developed new capabilities for autotuning, in particular the capability of dynamic autotuning. Second, we investigated how these can be leveraged to find settings close to the optimum for raytracing and to adapt them continuously. To this end we developed a hybrid tuning technique which combines classical empirical optimization with model-based prediction. The disadvantage of pure search-based methods is that they explore many sub-optimal configurations and that updating parameters can be very costly in our setting (e.g. when acceleration structures need to be rebuilt). Predicting good starting parameters is therefore crucial and has been studied intensively.

Firstly, we investigated which indicators (aggregated information) describe the behaviour of rendering methods and the specifics of a 3D scene compactly, yet sufficiently well, to obtain efficient domain-specific prediction models. These models were carefully evaluated and refined, and their extrapolation, i.e. the application to non-learned 3D scenes, was explored.
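The hybrid scheme described above can be illustrated with a minimal sketch: a model predicts a good starting configuration from scene indicators, and a cheap empirical search then refines it. All names here (predict_start, leaf_size, the nearest-neighbour model, the toy cost function) are illustrative assumptions, not the project's actual implementation.

```python
import random

def predict_start(indicators, training_data):
    """Toy model-based prediction (assumption: nearest neighbour):
    reuse the best known configuration of the most similar scene."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda e: dist(e["indicators"], indicators))
    return dict(nearest["best_config"])

def hybrid_tune(indicators, training_data, measure, steps=20, seed=0):
    """Start from the model's prediction, then hill-climb empirically on
    one integer parameter ('leaf_size', a stand-in for any tuning knob)."""
    config = predict_start(indicators, training_data)
    best_cost = measure(config)
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = dict(config)
        candidate["leaf_size"] = max(1, candidate["leaf_size"] + rng.choice([-1, 1]))
        cost = measure(candidate)
        if cost < best_cost:  # keep only improving configurations
            config, best_cost = candidate, cost
    return config, best_cost

# Hypothetical training data and cost; a real tuner would measure frame time.
training = [
    {"indicators": (0.2, 0.9), "best_config": {"leaf_size": 8}},
    {"indicators": (0.8, 0.1), "best_config": {"leaf_size": 2}},
]

def toy_cost(cfg):
    return abs(cfg["leaf_size"] - 4)  # optimum at leaf_size = 4

best, cost = hybrid_tune((0.25, 0.85), training, toy_cost)
```

Starting near a predicted optimum matters precisely because each measurement in the real system may require rebuilding an acceleration structure, so the search budget is small.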
Secondly, nominal parameters, such as those required for selecting algorithmic building blocks, have been challenging for the optimization: the majority of search algorithms used in autotuning relies on metrics for distances or directions, which are not defined for nominal parameters. We therefore developed a hierarchical search algorithm which partitions the search space according to the nominal parameters and the dependencies of the tuning parameters, leading to significantly reduced search times.

Lastly, we researched how online autotuning can be applied inside Monte Carlo methods for light transport simulation (as opposed to treating them as black boxes) in order to compute more converged images in a given time. Primarily we studied this for a method which explores the space of light transport paths using a Markov chain. By controlling its state changes with autotuning we were able to increase performance and to show that automatic parameter optimization can be beneficial across many levels of raytracing-based rendering methods.
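The idea behind the hierarchical search can be sketched as follows: the space is first partitioned by the nominal choice, so the numeric optimizer never needs a "distance" between nominal values, and each partition contains only the numeric parameters that its nominal choice actually depends on. The parameter names, ranges, and toy cost below are assumptions for illustration, not the project's actual search space.

```python
from itertools import product

# Hypothetical search space: the nominal parameter selects the
# acceleration structure, and each choice has its own numeric knobs.
SEARCH_SPACE = {
    "bvh":    {"max_depth": range(8, 13), "leaf_size": range(1, 5)},
    "kdtree": {"split_bins": range(16, 65, 16)},
}

def hierarchical_search(measure):
    """Search each nominal partition independently; inside a partition,
    a plain grid search stands in for a smarter numeric optimizer."""
    best_cost, best_config = float("inf"), None
    for nominal, numeric in SEARCH_SPACE.items():
        names = list(numeric)
        for values in product(*(numeric[n] for n in names)):
            config = {"structure": nominal, **dict(zip(names, values))}
            cost = measure(config)
            if cost < best_cost:
                best_cost, best_config = cost, config
    return best_config, best_cost

def toy_cost(cfg):
    """Stand-in for a real render-time measurement."""
    if cfg["structure"] == "bvh":
        return abs(cfg["max_depth"] - 10) + abs(cfg["leaf_size"] - 2)
    return abs(cfg["split_bins"] - 32) + 1.5

best, cost = hierarchical_search(toy_cost)
# best: {"structure": "bvh", "max_depth": 10, "leaf_size": 2}, cost: 0
```

Because the partitions are independent, a real tuner can also allocate its measurement budget across them unevenly and prune partitions that depend on many expensive-to-change parameters early.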
Publications
- Online-Autotuning in the Presence of Algorithmic Choice. In: IEEE International Parallel and Distributed Processing Symposium Workshops. 2017, pp. 1379-1388
Philip Pfaffe, Martin Tillmann, Sigmar Walter, and Walter F. Tichy
(See online at https://doi.org/10.1109/IPDPSW.2017.28)
- Efficient Hierarchical Online-Autotuning: A Case Study on Polyhedral Accelerator Mapping. In: Proceedings of the ACM International Conference on Supercomputing. 2019
Philip Pfaffe, Tobias Grosser, and Martin Tillmann
(See online at https://doi.org/10.1145/3330345.3330377)
- Hybrid Online Autotuning for Parallel Ray Tracing. In: Eurographics Symposium on Parallel Graphics and Visualization. 2019
Killian Herveau, Philip Pfaffe, Martin Tillmann, Walter F. Tichy, and Carsten Dachsbacher
(See online at https://doi.org/10.2312/pgv.20191110)