
DynaVision - Representational dynamics in the cortical processing of visual information

Subject area: Cognitive and systems human neuroscience; cognitive, systems and behavioural neurobiology
Funding: from 2016 to 2019
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 311771452

Year of report: 2019

Summary of project results

The DynaVision project shed new light on the computational mechanisms underlying one of the most remarkable features of our brains: the ability to extract meaning from the visual world around us. Central to the project was a characterisation of changes to the neural code as it dynamically expands through time and across multiple regions of the visual cortex. In a series of experiments, we combined source-based human magnetoencephalography, a novel form of representational dynamics analysis, and recurrent deep neural network modelling. This interdisciplinary approach enabled us, for the first time, to measure, dissect, and model the representational dynamics unfolding simultaneously in multiple distinct cortical regions along the human ventral stream.

By investigating the dynamic emergence of categorical divisions, bi-directional Granger causality between cortical regions, and direct inferential comparison of feedforward and recurrent deep neural network models, we were able to formally reject the feedforward account of visual object recognition, while providing compelling evidence for the importance of recurrent connectivity during visual inference. This has significant implications for models in computational neuroscience and computer vision applications, both of which are still dominated by feedforward models.

In addition to insights into the computational mechanisms of human vision, the project demonstrates an entirely novel approach for studying complex dynamic computations in biological brains by constraining large-scale recurrent neural network models with massively multivariate time-resolved measurements of brain activity. It thus guides the way toward using the unprecedentedly rich brain-activity data provided by modern measurement methods to constrain theory, as instantiated in deep neural network models. It furthermore opens new horizons for novel engineering applications that directly incorporate neural data into machine learning pipelines.
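The bi-directional Granger-causality logic mentioned above can be illustrated with a minimal toy sketch. This is not the project's actual analysis pipeline; it uses synthetic signals and a simple lag-1 least-squares variant of the test: one "region" helps predict another if adding its past reduces the residual variance of an autoregressive model of the target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two "regional" signals where x drives y with a one-step lag.
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def granger_variance_ratio(source, target, lag=1):
    """Residual variance of predicting `target` from its own past,
    divided by that of a model that also sees the past of `source`."""
    tgt = target[lag:]
    own = target[:-lag]
    src = source[:-lag]
    # Restricted model: target's own history only.
    A = np.column_stack([own, np.ones_like(own)])
    res_r = tgt - A @ np.linalg.lstsq(A, tgt, rcond=None)[0]
    # Full model: additionally includes the source region's history.
    B = np.column_stack([own, src, np.ones_like(own)])
    res_f = tgt - B @ np.linalg.lstsq(B, tgt, rcond=None)[0]
    return np.var(res_r) / np.var(res_f)  # > 1: source helps predict target

print(granger_variance_ratio(x, y))  # clearly above 1: x Granger-causes y
print(granger_variance_ratio(y, x))  # near 1: y adds little for x
```

In the project's setting the same comparison was run in both directions between cortical regions, which is what makes the resulting causality estimates bi-directional.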
To better understand the benefits of recurrence, deep neural networks with lateral and top-down connections were investigated in depth, including analyses of connectivity profiles, timed classification performance, and novel virtual cooling techniques that enabled us to target specific connection types throughout the network. These experiments demonstrated that recurrent connectivity not only leads to better classification performance, but also allows these systems to recycle neural resources to flexibly trade speed for accuracy in visual recognition. Finally, the project made significant progress in advancing deep learning as a framework for understanding human cognition. This led to a well-cited summary article, the development of the largest ecologically valid image set to date for computational neuroscience, and the demonstration that individual deep networks, just like brains, exhibit individual differences.
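The speed-accuracy trade-off described above can be sketched with a toy recurrent network. This is not the project's convolutional architecture; it is a small Hopfield-style attractor net in which reading out after more recurrent steps yields a cleaner (more accurate) answer, while reading out early is faster but noisier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store a few random binary patterns in a Hopfield-style recurrent net.
N, P = 200, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps):
    """Run `steps` synchronous recurrent updates from a noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Corrupt a stored pattern, then read out after varying amounts of
# recurrent processing time.
target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=60, replace=False)  # 30% of units flipped
cue[flip] *= -1

for steps in (0, 1, 5):
    overlap = float(recall(cue, steps) @ target) / N
    print(steps, overlap)  # later readouts recover the pattern more accurately
```

The same recurrent weights serve every timestep, which is the sense in which neural resources are "recycled": spending more time, not more units, buys additional accuracy.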

Project-related publications (selection)

  • (2017). A cross-validation approach to estimate the relative signal- and noise-content of ICA components. MEG UK 2017, Oxford, UK
    Kietzmann, T.C., Hauk, O., & Kriegeskorte, N.
  • (2018). Beware the beginnings: intermediate and higher-level representations in deep neural networks are strongly affected by weight initialisation. Cognitive Computational Neuroscience Meeting, Philadelphia, USA
    Mehrer, J., Kriegeskorte, N., & Kietzmann, T.C.
(Available online at https://doi.org/10.32470/CCN.2018.1172-0)
  • (2018). Deep neural networks trained on ecologically relevant categories better explain human IT. European Conference on Visual Perception (ECVP), Trieste, Italy
    Mehrer, J., Kriegeskorte, N., & Kietzmann, T.C.
  • (2018). Representational dynamics in the human ventral stream captured in deep recurrent neural nets. Cognitive Computational Neuroscience Meeting, Philadelphia, USA
    Kietzmann, T.C., Spoerer, C.J., Sörensen, L.K.A., Cichy, R.M., Hauk, O., & Kriegeskorte, N.
(Available online at https://doi.org/10.32470/CCN.2018.1190-0)
  • (2019). Deep neural networks in computational neuroscience. In Oxford Research Encyclopedia of Neuroscience. Oxford University Press
    Kietzmann, T.C., McClure, P., & Kriegeskorte, N.
(Available online at https://doi.org/10.1093/acrefore/9780190264086.013.46)
  • (2019). Recurrent networks can recycle neural resources to flexibly trade speed for accuracy in visual recognition. Cognitive Computational Neuroscience Meeting, Berlin, Germany
    Spoerer, C.J., Kietzmann, T.C., & Kriegeskorte, N.
(Available online at https://doi.org/10.1101/677237)
 
 
