Project Details

Examining attention, memory performance, and listening effort (exAMPLE): Understanding listeners' cognitive performances in complex audiovisual communication settings with embodied conversational agents

Subject Area Acoustics
General, Cognitive and Mathematical Psychology
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 444724862
 
Face-to-face communication is the most prevalent form of verbal information exchange. However, in complex and dynamic environments such as open-plan offices and busy bars, listeners of multi-talker conversations encounter unique challenges due to the acoustically complex, stimulus-rich, and dynamic nature of these multisensory settings. The effects of complex audiovisual presentations on comprehension and memory of multi-talker conversations, as well as the influence of note-taking while listening, remain largely unexplored.

This research proposal aims to investigate audiovisual variations in realistic listening settings in Virtual Reality (VR) and assess their impact on listener attention, speech intelligibility, listening comprehension, as well as memory for conversational content and other talker-related aspects (e.g., who said what). Building upon our predecessor project, this study advances from simplified to intricate audiovisual immersive environments that are rich and vibrant in both auditory and visual stimuli. The scope of cognitive performance assessment expands from understanding and recalling conversational facts to encompass guided note-taking and the retrieval of additional conversational aspects, including attention allocation and guidance.

To achieve this, we will develop immersive and interactive audiovisual virtual environments in which participants engage in conversations with two or more talkers, represented as embodied conversational agents. Subsequently, participants will be tested on their recall of conversational content and other conversation-related details. We will extend and adapt the Heard Text Recall (HTR) paradigm from our initial project to include note-taking (NT) and questions about talker-related (TR) aspects, resulting in the HTR-NT and HTR-TR tasks, respectively.
By exploring the effects of audiovisual VR environment characteristics on the aforementioned cognitive performances, this research will provide guidelines for creating realistic and immersive verbal social interactions, thus facilitating ongoing VR-based cognitive research.
DFG Programme Priority Programmes
 