Project Details

Audiovisual Perception of Emotion in Hearing Persons and Cochlear Implant Users

Subject Area: General, Cognitive and Mathematical Psychology; Applied Linguistics, Computational Linguistics
Term: since 2025
Project identifier: Deutsche Forschungsgemeinschaft (DFG), project number 568082155
 
Auditory spoken language comprehension is often considered the benchmark for cochlear implant (CI) success, yet this perspective underestimates the crucial role of visual cues in communication. Socio-emotional signals are highly relevant and play an essential role in the quality of life of CI users (Luo, Kern, & Pulling, 2018; Schorr, Roth, & Fox, 2009), highlighting the need for research on the benefits of visual cues for communication. According to models of communication via the face and voice (Young, Frühholz, & Schweinberger, 2020), visual stimuli can activate auditory cortex areas, and deafness can enhance cross-modal cortical plasticity. Even after adaptation to a CI, evidence suggests a particularly strong contribution of visual information to the perception of speech and speaker gender. In the first ViCom funding phase, we provided substantial evidence for such processes in vocal emotion recognition.

To advance this research, the project aims to gain a deeper understanding of these phenomena while also developing effective interventions that enhance communication and, ultimately, quality of life. Focusing on post-lingually deaf adult CI users and comparing them with biologically hearing controls, we will conduct five studies (S1-S5). In S1, we will investigate how temporal synchrony influences audiovisual (AV) emotion recognition. In S2, we will assess the relative contributions of facial versus bodily cues to AV emotion perception. In S3, building on promising findings with auditory caricaturing and perceptual training, we will develop and evaluate AV training aimed at improving emotion recognition in CI users. In S4, we will explore the effect of emotion intensity on recognition accuracy. In S5, we will address real-world listening challenges by examining how multi-talker background noise affects AV emotion perception. Across all studies, we will investigate the relationship between emotion recognition abilities and self-reported quality of life.

This project builds on successful prior DFG-funded research, including the first ViCom funding phase, the project "Voice Perception" (Schw 511/10-1, -2), and the project "Audiovisual integration in the identification of speaker and speech" (Schw 511/6-1, -2). It also benefits from our longstanding collaboration with Jena University Hospital and the Cochlear Implant Rehabilitation Centre in Thuringia. By deepening our understanding of the cognitive mechanisms underlying multimodal perception, this project will provide critical insights into how visual signals support CI users. These findings will contribute to optimizing both linguistic and socio-emotional communication, ultimately enhancing the quality of life of individuals with CIs.
DFG Programme: Priority Programmes
 
 
