Project Details

The epistemology of machine learning: From bias to knowledge

Subject Area Theoretical Philosophy
Term since 2023
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 511917847
 
The contemporary surge of methods in artificial intelligence calls for philosophical analysis of the epistemological issues involved. A particularly pressing topic for such analysis is bias in machine learning. Algorithmic bias enters at various stages in the use of machine learning methods. Remarkably, the stage where the actual learning takes place, where a machine learning algorithm generalizes from the training data, is relatively neglected in current work. The inevitable inductive bias that arises here is the domain of machine learning theory, but this mathematical approach does not yet provide a clear conceptual picture of the notion of inductive bias. What is missing is a unified epistemological account of inductive bias that subsumes the various technical analyses from learning theory.

An account of inductive bias in terms of conditions for epistemic success is a natural starting point for a more general account of how we gain knowledge through machine learning methods. Existing sketches of such an epistemology tend to single out the empirical nature of the use of machine learning methods, while much work in formal philosophy takes the other extreme in its focus on the ideal rationality of learning agents. What is still missing is an account that does justice to both the practice and the theory of machine learning.

The aim of this project is to fill these two gaps: to develop a philosophical explication of the concept of inductive bias, and to incorporate this explication into a novel epistemology of machine learning. This work is reinforced by two extensive case studies that apply the explication to central contemporary debates around machine learning methods. The project thus consists of four work packages. First, we motivate and develop an explication of inductive bias. The leading idea is a general epistemological characterization in terms of the conditions and nature of a method's successful learning, which for each specific method can be made precise with machine learning theory. Second, we employ our explication to advance the debate about the explanation of the biases and empirical success of deep neural networks. We further evaluate our explication and identify possible conceptual limitations. Third, we employ our explication scheme to advance the debate on algorithmic fairness. We delineate the relations between epistemic and non-epistemic factors in inductive bias, and the interactions between inductive bias and algorithmic bias in general. Finally, we develop a pragmatist epistemology of machine learning methods. The leading idea is that a focus on the mediating role of inductive bias naturally aligns with a view of the nature of inquiry that is core to the Peircean pragmatist tradition in the philosophy of science.
DFG Programme Independent Junior Research Groups
 
 
