
The Integration of Form and Motion in Face Recognition

Applicant Dr. Katharina Dobs
Subject Area Cognitive and Systems Human Neuroscience
General, Cognitive and Mathematical Psychology
Funding Funding from 2015 to 2017
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 280741132

Year of Creation 2017

Summary of Project Results

In complex and dynamic environments, the integration of multiple sensory cues arising from the same object is essential for accurate perception. The optimal strategy is to weight these cues in proportion to their reliability. Humans employ this strategy when combining multiple low-level cues within and across modalities, but less is known about the mechanisms of integration in high-level perception. Faces, for example, convey identity information through static cues (e.g., facial form) and dynamic cues (e.g., facial motion), and coherent perception of facial identity would benefit from integrating them. In this project, we asked how the human visual system integrates the information provided by facial form and motion to categorize faces.

In a first psychophysics study, subjects categorized animated avatar faces, whose facial form and motion could be manipulated independently, into two previously learned identities based on facial form alone, facial motion alone, or both cues combined. As in studies based on low-level stimuli, we expected subjects to integrate facial form and motion cues in an optimal fashion. One prediction of the optimal cue-integration model is that subjects reweight a cue on a trial-to-trial basis when its reliability changes. To test this prediction, we introduced an additional manipulation in which the facial form was made to look “old”, thereby reducing its reliability. Finally, we compared three models that differ in how the visual system integrates facial form and motion information: (1) optimally, (2) by using only the most reliable cue, or (3) by computing a simple average of both cues. We found that the optimal model predicted subjects’ identity choices best, suggesting that this principle governs both low- and high-level perception.

In a second neuroimaging study, we tested subjects on the same task using similar stimuli while recording their neural activity with functional magnetic resonance imaging (fMRI). We found that neural activity in a face-sensitive region of the superior temporal sulcus (STS) was highly predictive of behavioural identity choices in the motion-only condition. To our knowledge, this is the first evidence of a neural correlate of identity-from-motion processing in the human brain. This finding is surprising because the STS lies in the dorsal pathway, which was previously assumed to process the changeable aspects of faces rather than identity. While the other conditions still need to be analysed, the results reported so far already place strong constraints on neural models of face perception.

In summary, the results of this project have important implications for cognitive, computational and neural models of face perception, which currently propose anatomically and functionally distinct neural pathways for the processing of facial form and motion. Moreover, the results will contribute to the understanding, diagnosis and therapy of disorders involving impaired face perception, such as prosopagnosia or autism spectrum disorders.
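As a concrete illustration of the model comparison described above: under the standard maximum-likelihood (optimal) rule, a cue with noise standard deviation σ receives a weight proportional to its reliability 1/σ², so a degraded cue is automatically down-weighted. The following is a minimal Python sketch of the three candidate rules; all variable names and numbers are hypothetical, chosen only to mimic the “old face” manipulation, and are not taken from the study.

    import numpy as np

    def optimal(estimates, sigmas):
        """Reliability-weighted average (maximum-likelihood integration)."""
        weights = 1.0 / np.square(sigmas)   # reliability = 1 / sigma^2
        weights /= weights.sum()            # normalize weights to sum to 1
        return float(np.dot(weights, estimates))

    def most_reliable(estimates, sigmas):
        """Use only the cue with the smallest noise (highest reliability)."""
        return float(estimates[np.argmin(sigmas)])

    def simple_average(estimates, sigmas):
        """Unweighted mean of the cues, ignoring their reliabilities."""
        return float(np.mean(estimates))

    # Hypothetical trial: each cue gives a noisy internal evidence value
    # between 0 (identity B) and 1 (identity A). The "aged" form cue is
    # made less reliable (larger sigma) than the motion cue.
    estimates = np.array([0.8, 0.4])  # [form, motion]
    sigmas    = np.array([0.5, 0.2])  # degraded form -> larger noise

    for model in (optimal, most_reliable, simple_average):
        print(f"{model.__name__:>14}: {model(estimates, sigmas):.3f}")

With these made-up numbers, the optimal rule lands near the reliable motion cue (≈ 0.455) rather than at the unweighted mean (0.600), which is the trial-by-trial reweighting signature that distinguishes the three models behaviourally.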

Project-Related Publications (Selection)

  • Dobs, K., Bülthoff, I., & Reddy, L. (2016). Dynamic reweighting of facial form and motion information during face recognition. European Conference on Visual Perception (ECVP), Barcelona, Spain. Perception, 45(ECVP Abstract Supplement), 87-88.
  • Dobs, K., Bülthoff, I., & Reddy, L. (2016). Optimal integration of facial form and motion during face recognition. 16th Annual Meeting of the Vision Sciences Society (VSS), St. Pete Beach, Florida, USA. Journal of Vision, 16(12), 925.
  • Dobs, K., Ma, W. J., & Reddy, L. (2017). Near-optimal integration of facial form and motion. Scientific Reports, 7(1):11002, 1-9. (See online at https://doi.org/10.1038/s41598-017-10885-y)
 
 
