Project Details

Mechanisms of spatial attention and integration along the three dimensions of auditory space

Subject Area: Acoustics; Human Cognitive and Systems Neuroscience
Term: since 2020
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 444873001
 
Our auditory system monitors and integrates acoustic and non-acoustic cues to enable orientation in complex multimodal environments and to adapt to changes in those environments. Depending on the spatial axes (azimuth, elevation, distance) and modalities (auditory, visual, haptic) from which they originate, these cues vary greatly in how they are represented in the brain, when they become available to the auditory system, their spatial and temporal resolution and accuracy, and their role in the deployment of attention. To study these effects, we propose to use multimodal virtual environments to render scenes of varying complexity that provide cues either in isolation or in combination with cues from other spatial axes and perceptual modalities. In these synthetic environments, we will measure spatial unmasking, localization, and segregation in speech-in-speech situations, which so far have mostly been studied in the horizontal plane only, and investigate how movements of the target or masker along the spatial axes modulate these effects. From EEG recordings, we will identify decodable neural correlates of the direction of attentional focus and of the integration of spatial and temporal cues. Using specially designed multimodal complex virtual environments incorporating moving audiovisual talkers, effects of visual capture, and haptic feedback, we will determine the extent to which visual input and head movements support the deployment of attention and enhance speech communication. We will achieve this through a series of psychoacoustic experiments that assess attentional deployment and cognitive load by analysing EEG recordings, spatial release from masking, and listening effort. Because of their relevance to everyday life, we will focus on speech-in-speech scenarios. We expect that the collaboration of two groups with partly overlapping and partly complementary expertise in acoustics and neuroscience will prove as fruitful in the proposed project as it did in the first funding phase. As before, the binaural stimuli will be generated by the Berlin group, and the EEG experiments based on these stimuli will be conducted by the Leipzig group. The psychoacoustic experiments will be conducted jointly by the two groups, both in a classical laboratory environment and in specially designed interactive virtual environments. Furthermore, in collaboration with two research groups in Oldenburg, we have dedicated a work package to the development of tools identified within AUDICTIVE as valuable for the whole community, such as a standardised tool for rendering and exchanging spatial auditory scenes and stimuli.
DFG Programme: Priority Programmes
 
 
