Project Details
Neural mechanisms of multi-object context in human vision
Applicant
Dr. Oliver Contier
Subject Area
Biological Psychology and Cognitive Neuroscience
Term
since 2026
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 580995469
Humans understand visual scenes by integrating information that emerges from multiple objects. A cup beside a toothbrush signals a bedtime routine, whereas the same cup next to a laptop suggests working. This project asks a fundamental question: How does the brain represent multi-object context? Current frameworks emphasize object and scene pathways, but evidence for multi-object context is limited to isolated relations, and AI models confound context with object and scene features. I propose a computational approach that isolates multi-object context as a distinct level of representation and identifies its neural basis. Work Package 1 develops this approach by predicting a model’s response to multiple objects from its responses to the isolated objects, identifying the residuals as purely contextual features. Applying this to state-of-the-art AI models of scene understanding, I will derive multi-object context embeddings for MS COCO images and link them to large-scale fMRI responses from the Natural Scenes Dataset. The outcome will localize neural systems encoding multi-object context. Work Package 2 tests spatial versus semantic dimensions of multi-object context in a model-informed fMRI experiment manipulating semantic context and spatial arrangement. Analyses will localize regions selective to each dimension and test whether these dimensions are separable or integrated within distributed patterns. The hypothesis is that spatial and semantic context engage both distinct cortical regions and shared distributed patterns, reflecting partially overlapping but specialized systems.
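The residual logic of Work Package 1 can be illustrated with a minimal sketch. All names and data here are hypothetical stand-ins (random arrays in place of real model embeddings); the idea is only that a multi-object response is regressed on the isolated-object responses, and the unexplained residual is kept as the contextual feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for model embeddings: n images, each with an
# embedding for two isolated objects and for the combined (multi-object) image.
n_images, dim = 200, 16
obj_a = rng.normal(size=(n_images, dim))   # model response to object A alone
obj_b = rng.normal(size=(n_images, dim))   # model response to object B alone

# Simulated combined response: a linear mix of the isolated responses plus a
# context component that the isolated objects cannot explain.
context = rng.normal(size=(n_images, dim))
combined = 0.6 * obj_a + 0.4 * obj_b + context

# Predict the multi-object response from the isolated-object responses via
# ordinary least squares; the residual is taken as the purely contextual part.
X = np.hstack([obj_a, obj_b, np.ones((n_images, 1))])  # intercept column
beta, *_ = np.linalg.lstsq(X, combined, rcond=None)
predicted = X @ beta
context_embedding = combined - predicted  # residual = multi-object context
```

In the simulation, the recovered residual tracks the injected context component rather than either isolated object, which is the separability the proposal aims to exploit with real model embeddings and fMRI data.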
Expected results: (1) evidence for multi-object context as a distinct representation separable from object and scene vision; (2) dissociable networks for spatial and semantic context across ventral, dorsal, and medial temporal systems; (3) convergence in higher-order associative regions; (4) a computational approach applicable across AI models to disentangle multi-object context in human and machine vision. Together, these results will advance a unified framework for how the brain extracts meaning from multiple objects, bridging the gap between object and scene vision and informing the development of context-sensitive AI. The project is highly feasible, building on my expertise in large-scale brain-behavior modeling, encoding approaches, and open resources. I have a strong record of linking large-scale neural data with AI models, and risk is minimized by leveraging high-quality datasets and pipelines. The fellowship at the Donders Institute with Prof. Marius Peelen provides world-class infrastructure and expertise in fMRI and model-based neuroscience, enabling advanced training in fMRI design and AI models. Thus, the Walter Benjamin Fellowship is a crucial step toward consolidating my independent profile and my career goal of establishing a research group in Germany that bridges cognitive neuroscience and artificial intelligence.
DFG Programme
Fellowship
International Connection
Netherlands
