Model-Development in Neuroscience: Simplicity and Generalizability in Mechanistic Explanations
Final Report Abstract
Over a period of four years, this project explored how neuroscientists develop models of neural mechanisms. The focus was on two normative constraints on neuroscientific modeling: simplicity, understood in terms of the number of parameters in a model as well as the number of components and levels of the mechanism it describes, and generalizability, understood as the degree to which models of mechanisms derived for systems of one type can be applied to systems of another type. The aim was to characterize the normative influence these constraints exert on contemporary neuroscientific modeling practices, with a particular focus on mathematical, statistical, and computational models of multi-level neural mechanisms. Although the project's research was philosophical in method, its thematic focus required regular interaction with researchers in neuroscience, cognitive science, and artificial intelligence. Accordingly, the project involved three co-PIs, a PhD student, and several student assistants working in the philosophy departments in Witten and Magdeburg, as well as numerous external project partners from the empirical and formal sciences. The project yielded 7 peer-reviewed journal publications, 4 other publications, and 4 interdisciplinary workshops; in addition to these published results, two further journal articles are currently under review. The project also developed a novel Python script, "mLCA", which generates adequate and highly complex causal models and causal-mechanistic models from Boolean data tables. The principal research results include: an analysis of the applicability of standard simplicity criteria of scientific modeling to multilevel models in cognitive neuroscience; a new algorithm for generating complex causal and causal-mechanistic models from Boolean data tables; new insights into parsimony aspects of the regularity theory of causation and mechanistic constitution; an analysis of the connection between causal proportionality and causal parsimony; a taxonomy of mathematical and computational models in network and systems neuroscience; a normative framework for explainable artificial intelligence; and an exploration of the use of explainable artificial intelligence for scientific discovery.
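To give a sense of the kind of input the mLCA script operates on, the following minimal sketch shows how minimally sufficient conditions for an outcome factor might be extracted from a Boolean data table, in which each row assigns truth values to a set of factors. This is a toy illustration under stated assumptions, not the published mLCA algorithm; the example table, the function names, and the minimization strategy are hypothetical.

# Illustrative sketch only: scan a Boolean data table for minimally sufficient
# conditions of an outcome factor. The real mLCA script is not reproduced here;
# the table and the helper names below are hypothetical.
from itertools import combinations

# A Boolean data table: each row assigns True/False to the factors A, B, C, E.
TABLE = [
    {"A": True,  "B": True,  "C": False, "E": True},
    {"A": True,  "B": False, "C": True,  "E": True},
    {"A": False, "B": True,  "C": True,  "E": False},
    {"A": False, "B": False, "C": False, "E": False},
]

def sufficient(condition, outcome, table):
    # A conjunction of factor literals is sufficient for the outcome if it is
    # instantiated in the table and every row satisfying it also has the outcome.
    rows = [r for r in table if all(r[f] == v for f, v in condition)]
    return bool(rows) and all(r[outcome] for r in rows)

def minimally_sufficient_conditions(outcome, table):
    # Return sufficient conjunctions that contain no sufficient proper part,
    # found by testing conjunctions in order of increasing size.
    factors = [f for f in table[0] if f != outcome]
    literals = [(f, v) for f in factors for v in (True, False)]
    results = []
    for size in range(1, len(factors) + 1):
        for cond in combinations(literals, size):
            if len({f for f, _ in cond}) < size:
                continue  # skip conjunctions mentioning a factor twice
            if sufficient(cond, outcome, table) and not any(
                set(prev) < set(cond) for prev in results
            ):
                results.append(cond)
    return results

if __name__ == "__main__":
    for cond in minimally_sufficient_conditions("E", TABLE):
        print(" & ".join(f"{f}={v}" for f, v in cond), "-> E")

On the example table this prints A=True, as well as the conjunctions B=True & C=False and B=False & C=True, as minimally sufficient for E; constructing full causal or causal-mechanistic models from such conditions requires further steps not sketched here.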
Publications
- Zednik, Carlos. Models and mechanisms in network neuroscience. Philosophical Psychology, 32(1), 23-51.
- Zednik, Carlos. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Philosophy & Technology, 34(2), 265-288.
- Harbecke, Jens. Counterfactual theories of causation and the problem of large causes. Philosophical Studies, 178(5), 1647-1668.
- Zednik, Carlos & Boelsen, Hannes. The Exploratory Role of Explainable Artificial Intelligence. Preprint volume for the Philosophy of Science Association 27th Biennial Meeting. Chicago, IL: Philosophy of Science Association.
- Harbecke, Jens. The methodological role of mechanistic-computational models in cognitive science. Synthese, 199(S1), 19-41.
- Lyre, Holger. The State Space of Artificial Intelligence. Minds and Machines, 30(3), 325-347.
- Harbecke, Jens. Causal Proportionality as an Ontic and Epistemic Concept. Erkenntnis, 88(6), 2291-2313.
- Zednik, Carlos & Boelsen, Hannes. Scientific Exploration and Explainable Artificial Intelligence. Minds and Machines, 32(1), 219-239.
- Harbecke, Jens. Das Supervenienzargument. In: Handbuch Philosophie des Geistes, 195-205. J.B. Metzler.
- Lyre, Holger. Grundlagenfragen der Neurocomputation und Neurokognition. In: Philosophisches Handbuch Künstliche Intelligenz, 359-383. Springer Fachmedien Wiesbaden.
- Lyre, Holger. Multiple Realisierbarkeit. In: Handbuch Philosophie des Geistes, 159-168. J.B. Metzler.
