Project Details

Egocentric biases meet biased algorithms

Subject Area Social Psychology, Industrial and Organisational Psychology; Communication Sciences
Term since 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 512365147
 
Group discrimination by algorithms often receives media attention when scandalous cases are revealed, such as Amazon's hiring AI that disadvantaged women. What gets far less attention is that the people who judge the fairness of algorithms and evaluate the acceptance of (biased) algorithmic decision-making might be biased, too. We know from social-psychological research that, in situations in which different fairness rules could be applied, people tend to favor the rule under which they are better off (i.e., higher outcome favorability), a so-called egocentric bias. Existing studies on algorithmic fairness have largely ignored such motivational biases. One might argue that the problem of group discrimination will no longer be relevant in a few years because the machine learning community is aware of it and has been working on optimizing different fairness parameters. However, these solutions always come with trade-offs, because optimizing one fairness parameter goes hand in hand with decreases in others. Motivational biases (preferring the option that results in higher outcome favorability) might also influence these trade-offs but have not yet been considered in this even more recent field.

The overarching aim of this proposal is to fill this gap, and we will do so in several steps. In the first step, we aim to empirically demonstrate that the issue of group discrimination by algorithms is either not known or not very salient among participants in typical algorithm acceptance studies; consequently, algorithmic discrimination should only be recognized when awareness of this issue is heightened. The second goal is to demonstrate that egocentric biases play a role when group discrimination is made salient. More specifically, people from privileged groups (vs. people from disadvantaged groups) should evaluate a biased algorithm favoring their own group as fairer and find it more permissible that the algorithm makes this decision. The third goal of the project is to show that such biases also influence the preference for certain trade-offs that must be made when developing technical solutions to group discrimination by algorithms.

We will first develop a multi-dimensional fairness measure and explore the level of awareness of algorithmic group discrimination in a representative sample. Next, we plan a series of preregistered and well-powered experiments to examine the interplay between algorithmic and human biases. To demonstrate the robustness of the findings, we will show the effects across various domains (sexism, racism, discrimination based on an arbitrary criterion). Since the topic of group discrimination is highly societally relevant, we will also conduct experiments with representative samples and with samples with a high proportion of migrants.
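The trade-off between fairness parameters mentioned above can be made concrete with group-fairness metrics from the machine-learning literature. The following is a minimal sketch, not part of the proposal; all data, group labels, and function names are hypothetical. It computes two common metrics, demographic parity and equal opportunity, on toy predictions and shows that a classifier can equalize selection rates across two groups while its true-positive rates still differ, so satisfying one criterion does not imply satisfying the other.

    # Hypothetical illustration: two group-fairness metrics on toy data.
    def demographic_parity_diff(y_pred, group):
        """Absolute difference in selection rates P(pred = 1) between groups A and B."""
        def rate(g):
            preds = [p for p, grp in zip(y_pred, group) if grp == g]
            return sum(preds) / len(preds)
        return abs(rate("A") - rate("B"))

    def equal_opportunity_diff(y_true, y_pred, group):
        """Absolute difference in true-positive rates between groups A and B."""
        def tpr(g):
            pos = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
            return sum(pos) / len(pos)
        return abs(tpr("A") - tpr("B"))

    # Toy data: group A has a higher base rate of true positives than group B.
    y_true = [1, 1, 1, 0, 1, 0, 0, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
    y_pred = [1, 1, 0, 0, 1, 1, 0, 0]   # identical selection rate (0.5) in both groups

    print(demographic_parity_diff(y_pred, group))        # 0.0  -> demographic parity holds
    print(equal_opportunity_diff(y_true, y_pred, group)) # ~0.33 -> true-positive rates differ

Because the two groups in this toy data have different base rates, equalizing selection rates forces unequal true-positive rates; choosing which of these disparities to minimize is exactly the kind of trade-off whose evaluation the project examines for egocentric biases.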
DFG Programme Research Grants
 
 
