Project Details

Mechanistic and representational explanations in cognitive neuroscience

Subject Area Theoretical Philosophy
Term since 2023
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 520194884
 
Cognitive neuroscientists explain cognitive phenomena such as perception, memory, or problem solving by describing the neural mechanisms underlying these phenomena. In doing so, they usually assume that some components of these mechanisms have representational properties. For example, neurons in the visual cortex are thought to represent certain stimulus features, which explains how the organism is able to perceive and interact with the world. However, combining mechanistic and representational explanations gives rise to a tension: neuroscientific mechanistic explanations can, prima facie, refer exclusively to factors within the brain, whereas representational properties supervene on the organism’s relations to its external world and/or its past. This raises what we dub the compatibility challenge: can explanations in cognitive neuroscience be simultaneously mechanistic and representational? The compatibility challenge has not been sufficiently examined philosophically, although it is related to a problem familiar from the philosophy of mind of the 1980s, the classical challenge. The project can be understood as a necessary and long overdue revision and reassessment of the classical challenge in light of recent developments in philosophy of science, philosophy of cognition, and cognitive neuroscience. We will approach the compatibility challenge by working in close collaboration with empirical researchers and by applying a novel method called “adversarial collaboration” to examine two sets of working hypotheses. The first set is:

A. Cognitive neuroscience can do without representational explanations and rely solely on mechanistic explanation.
B. The prominence of computational explanation in cognitive neuroscience explains why scientists still use representational vocabulary while at the same time showing that computational-mechanistic explanations are sufficient.

The second set of working hypotheses is:

C. Computational explanation provides the first step towards an improved account of representational content and representational explanation.
D. It is possible to develop an account of representations in terms of function-informational properties of computational vehicles.
E. Wide explananda alone do not yet render representational explanation compatible with mechanistic explanation.
F. The mechanistic framework can be extended so that it allows function-informational properties of computational vehicles to figure in mechanistic explanation.

The project will provide new insights into the role representations can play in mechanistic explanations of mental phenomena. Owing to its methodology, it is explicitly open-ended. The results will contribute to the general understanding of our mind and its scientific explanation, and will contribute significantly to a fundamental reorientation of the debate between representationalists and anti-representationalists.
DFG Programme Research Grants
International Connection Israel
International Co-Applicant Privatdozent Dr. Nir Fresco
 
 
