
Dynamic task-dependent decoding of visual motion information during continuous behavior

Applicant Dr. Jonas Knöll
Subject Area Cognitive, Systems and Behavioural Neurobiology
Funding Funded from 2014 to 2016
Project Identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 254017722
 
Year of Creation 2016

Summary of Project Results

We investigated the task-dependent dynamics of behavior and neural processing in naturalistic environments, including continuous stimulation and naturally occurring continuous behavior. To this end, we developed and established a novel paradigm in conjunction with a regression model to optimally analyze data obtained during multiple complex tasks and during stimulation that is highly correlated in time and space, as occurs in the natural environment. In this paradigm, the focus of expansion (FOE) of a cloud of dots moves across the screen in a random manner, with the dots organized in a hexagonal mesh of subfields in the visual field. The motion of dots within each subfield could be slightly perturbed, or no motion could be shown at all.

We recorded tracking behavior from a macaque and from human subjects, and performed single-cell recordings during this paradigm in areas MT and MST of a macaque, with a subset of the data recorded simultaneously in both areas. The ongoing analysis of the obtained neural data will yield detailed information about the neural encoding of the recorded neurons, as well as about how those parameters are modulated when part of the visual information is actively selected or ignored to solve a specific task. Analysis of the simultaneously recorded MT and MST neurons will provide insights into how information is transformed from one area to the other and how selective this process is.

We found the subjects' gaze to be influenced mostly by the previous one hundred to five hundred milliseconds of motion shown. Within this time window, gaze was predominantly determined by the information shown in the hexagon closest to the current gaze position and in the ring of hexagons surrounding it. Using the temporal and spatial integration parameters, the average gaze was well predicted in a subset of trials that were not used to fit these parameters. This subset of trials was also designed to have identical stimulation on each repeat.
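The "closest hexagon plus surrounding ring" analysis presupposes mapping a gaze position to a subfield in the hexagonal mesh. A minimal sketch of that step is shown below; the offset-row lattice construction, the spacing parameter, and the function names are illustrative assumptions, not the project's actual implementation:

```python
import numpy as np

def hex_centers(radius, spacing):
    """Centers of a hexagonal lattice of subfields (offset-row layout).

    radius: number of rows/columns on each side of the origin.
    spacing: horizontal distance between neighboring centers in a row.
    """
    centers = []
    for row in range(-radius, radius + 1):
        for col in range(-radius, radius + 1):
            # Every other row is shifted by half a spacing; row pitch is
            # sqrt(3)/2 * spacing, giving a hexagonal packing of centers.
            x = spacing * (col + 0.5 * (row % 2))
            y = spacing * row * np.sqrt(3) / 2
            centers.append((x, y))
    return np.array(centers)

def nearest_subfield(point, centers):
    """Index of the subfield center closest to a gaze position."""
    return int(np.argmin(np.linalg.norm(centers - point, axis=1)))
```

With centers in hand, the surrounding ring is simply the set of centers within one lattice spacing of the nearest one.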
Gaze in these conditions was highly predictable, with similar saccades often occurring within a few hundred milliseconds of each other across different repeats, demonstrating strong behavioral control despite very few imposed strategies in such a rich paradigm. When two statistically independent FOEs were shown at the same time in continuously varying subfields, subjects were able to track one of the two FOEs with almost no interference from subfields containing motion from the untracked FOE, demonstrating a good ability to select visual information based on the task currently being solved.

The established paradigm has a wide range of possible future applications due to its intuitive nature, which does not require prolonged training, and its efficient data collection, with temporal integration kernels obtained from as little as 4 minutes of data. This may make the paradigm suitable for studying or detecting visual impairments in human populations that are inaccessible to studies requiring complicated task structures, long training periods, or long data-collection sessions. Additionally, this task should allow the study of visual selection in animal models with lower cognitive abilities than macaques, such as the common marmoset, where training complicated attention or decision tasks may be impossible or unfeasible. This will provide the unique opportunity to combine the study of higher cognitive functions, such as selection and decision making, known from research in macaques, with experimental techniques not available in these primates.
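The temporal integration kernels mentioned above can be thought of as the weights of a lagged linear regression from recent stimulus motion to current gaze. The sketch below shows that idea for a single 1-D motion signal; the function name, the ridge regularizer, and the single-signal simplification are assumptions for illustration, not the project's actual model:

```python
import numpy as np

def estimate_temporal_kernel(motion, gaze, max_lag, ridge=1e-3):
    """Estimate a temporal integration kernel by ridge regression.

    motion: 1-D array, stimulus motion signal over time.
    gaze: 1-D array, gaze signal over the same time base.
    max_lag: number of past samples allowed to influence gaze.
    Returns kernel k such that gaze[t] ~ sum_lag k[lag] * motion[t - lag].
    """
    T = len(motion)
    # Design matrix: column `lag` holds the motion signal delayed by `lag`.
    X = np.zeros((T - max_lag, max_lag))
    for lag in range(max_lag):
        X[:, lag] = motion[max_lag - lag : T - lag]
    y = gaze[max_lag:]
    # Ridge-regularized normal equations for the kernel weights.
    return np.linalg.solve(X.T @ X + ridge * np.eye(max_lag), X.T @ y)
```

In this framing, the finding that gaze depends on the previous 100-500 ms of motion corresponds to the kernel having most of its mass at those lags.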

Project-related Publications (Selection)

  • Knöll, J., Pillow, J. W., Huk, A. C. (2015). Visual motion processing during continuous naturalistic behaviors. Program No. 600.18/L22, Society for Neuroscience.
