Project Details
UDNN: Scientific Understanding and Deep Neural Networks
Applicant
Professor Dr. Florian J. Boge
Subject Area
Theoretical Philosophy
Term
since 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 508844757
"Deep Neural Networks" (DNNs) are a special technique in the field of Machine Learning that has been recognized for its amazing successes in recent years. Especially in science, the use of such DNNs has led to real breakthroughs, such as recently in the prediction of three-dimensional protein structures based on amino acid sequences, which was previously virtually impossible. At the same time, however, such DNNs correspond to complicated, adaptable mathematical functions, and neither the functions themselves, nor the successful methods of their adaptation, are usually fully understood at present. Thus, for example, a DNN can successfully predict what a protein structure will look like without it being sufficiently clear how it does so. Worse, it is often unclear what (hidden) information is extracted from the data in this process; and in some cases, it can be shown that this is highly relevant information for a true understanding of, for example, protein folding mechanisms. However, a central goal of science, if not the central goal, is precisely understanding: What processes or mechanisms cause the protein to look the way it does? How is the folding of a protein related to its environmental conditions? Thus, for all its success, the intransparency described above also leads to a real obstacle to scientific progress. The present project aims to philosophically analyze the scientifically relevant parts of the research on "eXplainable Artificial Intelligence" (XAI) that deals with making such algorithms transparent; i.e., to examine "explanations" proposed therein in terms of philosophical notions of understanding as well as the possibility of new ways of explaining and/or understanding. Furthermore, the possibility of understanding without explanation with the help of DNNs, which has basically already been claimed by some philosophers, will be investigated. 
In addition, the limits of (scientific) XAI and of DNN-heavy research will be investigated with respect to the goals of science: Is the presence of partially unexplained DNN successes likely to shift the goals of science from understanding to pure prediction? Or will scientists continue to strive for understanding (and if so, how)?
DFG Programme
Independent Junior Research Groups