Project Details

Individual Binaural Synthesis of Virtual Acoustic Scenes

Subject Area Acoustics
Term from 2018 to 2023
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 402811912
Final Report Year 2024

Final Report Abstract

In recent years, virtual reality has found its way into many areas. The focus of the major technical developments has traditionally been placed on the visual representation. However, a convincing auditory representation is also necessary for high-quality virtual reality. Various methods are available for the headphone-based generation of acoustic virtual reality. In model-based binaural synthesis, Head-Related Transfer Functions (HRTFs) are used to auralize individual virtual sources. Data-based binaural synthesis uses microphone arrays together with audio signal processing and HRTFs. In both approaches, generic HRTFs from dummy heads can lead to problems with in-head localization, front-back confusion, and sound coloration. There are numerous methods for the fast measurement of individual HRTFs, as well as algorithms for individualizing generic HRTFs. For the former, however, there was a lack of systematic validation of the measurement uncertainties caused by the measuring apparatus and by the subject's movement. The same applied to the validation of algorithms for the individualization of generic HRTFs. The effects of measurement uncertainties and of approximations in the algorithms of model- and data-based binaural synthesis on the authenticity and plausibility of acoustic scenes were the subject of research in this project. A key question was to what extent a presented virtual acoustic scene can be improved with the help of individualized or individually measured HRTFs compared to generic HRTFs. In addition to instrumental measures, empirical studies in the form of listening experiments were used for evaluation.
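The core operation of the model-based binaural synthesis described above is the convolution of a mono source signal with a left/right pair of head-related impulse responses (HRIRs, the time-domain counterpart of HRTFs). The following is a minimal, illustrative sketch of that rendering step; the toy delay-and-attenuation HRIRs stand in for measured or individualized responses and are not from the project itself.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(source, hrir_left, hrir_right):
    """Auralize one virtual source: convolve a mono signal with an HRIR pair.

    Returns a (2, N) array: row 0 is the left-ear signal, row 1 the right.
    """
    left = fftconvolve(source, hrir_left)
    right = fftconvolve(source, hrir_right)
    return np.stack([left, right], axis=0)

# Toy HRIRs (hypothetical): a pure delay-and-attenuation pair that mimics
# the interaural time and level differences a real HRTF would encode.
fs = 48000
hrir_l = np.zeros(256)
hrir_l[10] = 1.0   # nearer ear: shorter delay, full level
hrir_r = np.zeros(256)
hrir_r[30] = 0.6   # farther ear: longer delay, attenuated

source = np.random.default_rng(0).standard_normal(fs)  # 1 s of noise
out = binaural_render(source, hrir_l, hrir_r)
```

In a real system, the HRIR pair would be selected (and interpolated) per source direction from a measured or individualized HRTF set, and the convolution would be run block-wise for real-time operation.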
