
Egocentric spatial reference frames in the human brain

Subject Area Cognitive and Systemic Human Neuroscience
Funding Funded from 2015 to 2021
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 289250398

Year of creation 2021

Summary of project results

In this project, we studied the neural representations of egocentric spatial reference frames in the human brain. When we move our eyes or head, the world around us appears stable, even though our own motion generates visual flow across the retina. Moreover, objects that were visible before a head rotation may afterwards lie beside or behind us, outside the visual field, yet we retain accurate knowledge of their locations. In a series of functional magnetic resonance imaging (fMRI) experiments, we asked which brain regions support the encoding of the space around us in the distinct reference frames linked to the sensor (retina), the head, and the body, respectively. We also examined the mechanisms that integrate visual (retinal) signals with the non-visual signals informing the brain about eye-to-head and head-to-body angles and motion, and we addressed related questions on neural feedback and on the processing of motion parallax, a purely visual cue to ego-motion. The project used novel fMRI techniques, including active head motion inside the scanner, and relied on virtual-reality stimuli and multivariate analysis techniques.

First, we found preliminary evidence that, in addition to parietal regions, retrosplenial and medial temporal cortex encode space outside the field of view in a head-centered reference frame. Novel head-rotation experiments yielded preliminary evidence that spatial locations in a body-centered reference frame are encoded in anatomically distinct regions, including the peri-sylvian network, the medial temporal lobe, and ventral parahippocampal cortex. These results open the prospect of understanding, in the healthy brain, the function of regions known from lesion studies of neglect and ataxia.

Second, we identified neural substrates that integrate visual motion flow with active head rotations. We achieved this by scanning participants while they actively rotated their head during fMRI acquisition. Head rotation was measured in real time using video tracking, which in turn updated the participants' head-mounted displays. This allowed us to simulate a stable versus an unstable visual world during head rotation while precisely controlling retinal input. We found that the anterior parieto-insular cortex (aPIC), a peri-sylvian region associated with vestibular processing, integrated visual with non-visual head-rotation signals. In visual cortex, several regions were involved, including the ventral intraparietal cortex (VIP), the cingulate sulcus visual area (CSv), the motion-responsive region in the precuneus (Pc), and motion area V6.

Third, we found that during active pursuit eye movements, primary visual area V1 (and, to a lesser extent, V2 and V3) represents visual motion in a head-referenced manner, hence signalling real or objective motion in addition to retinal motion. This is remarkable, as otherwise only the high-level motion regions V3A and V6 show stronger real-motion responses than V1, and the objective-motion signals in motion regions V5/MT and MST are only weak.

Fourth, we found that motion parallax, a high-level visual cue to ego-motion, is processed primarily in parietal regions IPS3 and IPS4, as well as in the transverse occipital sulcus (TOS). During parallax processing, TOS showed specific functional connectivity with the scene-responsive region PPA and with IPS3, indicating a link between ventral and dorsal cortex.
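The reference-frame conversions described above, and the stable-versus-unstable-world manipulation, both reduce to simple angle arithmetic. The following is a minimal sketch of that logic, restricted to a single yaw axis; the function names, sign conventions, and numbers are ours for illustration and are not the project's actual stimulus or analysis code:

```python
def to_head_centered(retinal_deg: float, eye_in_head_deg: float) -> float:
    """Head-centered direction = retinal direction + eye-in-head angle."""
    return retinal_deg + eye_in_head_deg


def to_body_centered(head_deg: float, head_on_body_deg: float) -> float:
    """Body-centered direction = head-centered direction + head-on-body angle."""
    return head_deg + head_on_body_deg


def scene_yaw(tracked_head_yaw_deg: float, stable_world: bool) -> float:
    """Yaw applied to the scene rendered in a head-mounted display.

    Counter-rotating the scene by the tracked head rotation makes the
    world appear stable; omitting the compensation yokes the scene to
    the head, simulating an unstable world.
    """
    return -tracked_head_yaw_deg if stable_world else 0.0


# A target 10 deg right of the fovea, with the eyes turned 20 deg left
# in the head and the head turned 90 deg right on the body:
h = to_head_centered(10.0, -20.0)  # -10.0: slightly left of the head midline
b = to_body_centered(h, 90.0)      # +80.0: far right of the trunk midline
print(h, b, scene_yaw(15.0, stable_world=True))  # -10.0 80.0 -15.0
```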
Finally, we found that high-level motion regions, including entorhinal cortex, change their functional connectivity with primary visual cortex (V1) specifically during optic-flow processing, indicative of predictive-coding processes that explain away the complex structured visual input in V1.
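The kind of condition-dependent coupling reported here can be illustrated with a simple correlation contrast between two region-of-interest time series. The sketch below uses synthetic data and plain Pearson correlations purely to show the logic; the block timing, regressor, and statistics are simplified stand-ins for an actual connectivity (e.g. PPI-style) analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ROI time series (arbitrary units) for V1 and entorhinal
# cortex, plus a boolean regressor marking optic-flow blocks.
n_vols = 400
v1 = rng.standard_normal(n_vols)
erc = rng.standard_normal(n_vols)
onsets = np.zeros(n_vols)
onsets[::40] = 1.0                                    # placeholder block onsets
optic_flow = np.convolve(onsets, np.ones(20), mode="same") > 0


def roi_correlation(x: np.ndarray, y: np.ndarray, mask: np.ndarray) -> float:
    """Pearson correlation of two ROI time series within one condition."""
    return float(np.corrcoef(x[mask], y[mask])[0, 1])


r_flow = roi_correlation(v1, erc, optic_flow)
r_rest = roi_correlation(v1, erc, ~optic_flow)

# A change in V1-entorhinal coupling specific to optic flow would show
# up as a nonzero contrast (tested across subjects in practice).
print(f"r(optic flow) = {r_flow:+.3f}, r(baseline) = {r_rest:+.3f}, "
      f"contrast = {r_flow - r_rest:+.3f}")
```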
