Project Details

Model-Development in Neuroscience: Simplicity and Generalizability in Mechanistic Explanations

Subject Area: Theoretical Philosophy
Term: 2018 to 2022
Project identifier: Deutsche Forschungsgemeinschaft (DFG), Project number 413568662

Final Report Year: 2023

Final Report Abstract

Over a period of four years, the project explored how neuroscientists develop models of neural mechanisms. The focus was on two normative constraints on neuroscientific modeling: simplicity, understood in terms of the number of parameters in a describing model and the number of components and levels of a described mechanism, and generalizability, understood as the degree to which models of mechanisms developed for systems of one type can be applied to systems of another type. The aim was to characterize the normative influence these constraints exert on contemporary neuroscientific modeling practices, with a particular focus on mathematical, statistical, and computational models of multi-level neural mechanisms.

Although the project's research was philosophical in method, its thematic focus required regular interaction with researchers in neuroscience, cognitive science, and artificial intelligence. The project therefore involved three co-PIs, a PhD student, and several student assistants working in the philosophy departments in Witten and Magdeburg, as well as numerous external project partners from the empirical and formal sciences.

The project yielded 7 peer-reviewed journal publications, 4 other publications, and 4 interdisciplinary workshops; two further journal articles are currently under review. It also produced a novel Python script, "mLCA", that can generate adequate and highly complex causal models and causal-mechanistic models from Boolean data tables. The principal research results include:

- an analysis of the applicability of standard simplicity criteria of scientific modeling to multilevel models in cognitive neuroscience;
- a new algorithm for the generation of complex causal and causal-mechanistic models from Boolean data tables;
- new insights on parsimony aspects of the regularity theory of causation and mechanistic constitution;
- an analysis of the connection between causal proportionality and causal parsimony;
- a taxonomy of mathematical and computational models in network and systems neuroscience;
- a normative framework for explainable artificial intelligence;
- an exploration of the use of explainable artificial intelligence for scientific discovery.
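To illustrate the kind of task that modeling from Boolean data tables involves, the following is a minimal, hypothetical Python sketch. It is not the project's mLCA script, whose implementation is not reproduced here; the data table, factor names, and function names are illustrative assumptions. The sketch enumerates minimally sufficient conjunctions of Boolean factors for an outcome, a basic building block of regularity-theoretic causal modeling.

# Hypothetical illustration (not the project's mLCA script): finding minimally
# sufficient Boolean conditions for an outcome in a toy Boolean data table.

from itertools import combinations

# Toy Boolean data table: each row assigns truth values to factors A, B, C
# and an outcome E. Factor and outcome names are purely illustrative.
DATA = [
    {"A": 1, "B": 1, "C": 0, "E": 1},
    {"A": 1, "B": 0, "C": 1, "E": 1},
    {"A": 0, "B": 1, "C": 1, "E": 0},
    {"A": 0, "B": 0, "C": 0, "E": 0},
    {"A": 1, "B": 1, "C": 1, "E": 1},
]

FACTORS = ["A", "B", "C"]
OUTCOME = "E"


def is_sufficient(condition, rows, outcome):
    """A conjunction of factor=value literals is sufficient for the outcome
    if it is instantiated and every row satisfying it has outcome == 1."""
    matching = [r for r in rows if all(r[f] == v for f, v in condition)]
    return bool(matching) and all(r[outcome] == 1 for r in matching)


def assignments(n):
    """Yield all n-tuples of 0/1 values."""
    for i in range(2 ** n):
        yield tuple((i >> k) & 1 for k in range(n))


def minimally_sufficient_conditions(rows, factors, outcome):
    """Return sufficient conjunctions none of whose proper parts are sufficient."""
    sufficient = []
    for size in range(1, len(factors) + 1):
        for combo in combinations(factors, size):
            for values in assignments(len(combo)):
                cond = tuple(zip(combo, values))
                if is_sufficient(cond, rows, outcome):
                    sufficient.append(cond)
    # Keep only conditions with no sufficient proper sub-condition.
    return [c for c in sufficient
            if not any(set(other) < set(c) for other in sufficient)]


if __name__ == "__main__":
    for cond in minimally_sufficient_conditions(DATA, FACTORS, OUTCOME):
        print(" & ".join(f"{f}={v}" for f, v in cond), "->", OUTCOME)

On the toy table above, the sketch reports A=1, B=1 & C=0, and B=0 & C=1 as minimally sufficient conditions for E. A full regularity-theoretic algorithm would additionally minimize the resulting disjunction and handle multi-level (constitutive) structure, which this sketch does not attempt.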

