Project Details

Artificial intelligence: Who bears responsibility for what damage caused by an action?

Applicant Dr. Eva Buddeberg
Subject Area Practical Philosophy
Principles of Law and Jurisprudence
Term since 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 512921364
 
The rapid development and deployment of artificial intelligence (AI), for example in medicine, and its social consequences are of central importance to current debates in politics and science. From a jurisprudential perspective, dealing with violations of legal rights and with damage caused by or related to AI-based systems is particularly challenging: current approaches to this problem are largely limited to applying existing liability rules. This, however, leaves insufficiently clarified how, given the limited explainability of AI-based decisions, the causation of damage is to be established and who is liable for damage caused by AI-based systems. The specific characteristics of AI-based applications pose massive challenges for liability law de lege lata with regard to the foreseeability and controllability of risks, including the formulation of appropriate duties of care.

In philosophy, it is generally assumed that AI-based systems themselves cannot (yet) bear moral responsibility, as they lack certain prerequisites for doing so, such as freedom, higher-order intentionality and the capacity to act according to reasons. This does not, however, deny that they can exert influence in morally relevant ways. The philosophical debate therefore also concerns the question, rich in action-theoretical and moral-philosophical presuppositions, of who can be attributed responsibility for what when such systems are used. Three aspects of responsibility are particularly emphasised: 1. the attribution of authorship of actions and the author's liability for them; 2. the attribution of a duty of care for certain tasks or areas of responsibility; and 3. the obligation to justify one's own behaviour and actions with good reasons.

The main goal of the working group is a well-founded and interdisciplinarily sustainable localisation of areas of responsibility in the use of AI. To this end, the legal debate on liability is to be underpinned by moral-philosophical and action-theoretical considerations in order to address previously unanswered questions, such as the delimitation and interlocking of duties of care; at the same time, the moral-philosophical and legal arguments can reinforce each other. Criteria will be developed that make it possible to define duties of care for the application of AI and to derive from them a liability concept that also covers future, unforeseeable developments connected with AI. Such a standard could lead to greater legal certainty in dealing with AI and to greater social acceptance of its application.
DFG Programme Scientific Networks
 
 
