Project Details
Model-Development in Neuroscience: Simplicity and Generalizability in Mechanistic Explanations
Applicants
Professor Dr. Jens Harbecke; Professor Dr. Holger Lyre, since 10/2020
Subject Area
Theoretical Philosophy
Term
from 2018 to 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 413568662
Explanations in neuroscience are often delivered by models of mechanisms. However, many brain mechanisms contain a large number of distinct components and span several levels of organization. As a consequence, neuroscientists often face a choice problem: Which components and levels should be included in a model? Or simply: Which model is the best one among a set of rival multi-level mechanistic models? In order to resolve such choice problems, neuroscientists use more or less explicit simplicity and generalizability measures, among other criteria. Simplicity targets questions such as: How many levels should be included in order to mechanistically explain a particular behavioral or cognitive phenomenon? How precisely should the components and interactions at these various levels be described? Simplicity considerations of this kind are of a very different nature than classical curve-fitting procedures. Generalizability concerns questions such as: To how many phenomena should a particular model apply, and over how many individuals and species should it generalize? Balancing the sometimes conflicting dual criteria of simplicity and generalizability is of paramount importance when developing models of multi-level brain mechanisms.

The overall research question of this project concerns the norms and practices by which simplicity and generalizability are applied as criteria for the development of models of multi-level mechanisms in neuroscience. By determining on the basis of several in-depth case studies how these criteria are, and should be, applied in current neuroscientific research, this project aims to clarify how the best multi-level mechanistic explanations are developed and selected. Thus, it is expected not only to contribute to an improved philosophical conception of mechanistic explanation in neuroscience, but also to deliver normative guidelines for current scientific research.
More concretely, the aims of this project are fourfold: (1) Generally, to develop an improved philosophical account of multi-level mechanistic explanation in neuroscience, with an emphasis on simplicity and generalizability as criteria of explanatory adequacy. (2) To articulate an account of simplicity as a criterion for assessing the explanatory adequacy of multi-level mechanistic explanations. (3) To articulate an improved understanding of generalizability in mechanistic explanatory practice. (4) Finally, a scientific objective is to articulate normative guidelines for model-development and model-selection to be used in future neuroscientific research.
DFG Programme
Research Grants
Former Applicant
Professor Dr. Carlos Zednik, until 9/2020