
Norms and Normative Expectations in Algorithmic Decision-Support Systems

Subject Area: Empirical Social Research
Term: since 2024
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 537661468
 
The spread of algorithms, big data, and artificial intelligence (AI) into many areas of daily life has sparked debates about the role of norms in algorithmic systems. Chief among these are concerns about algorithmic systems violating norms of fairness, although questions about the authority of decision-makers and the use of performance data have also arisen. Since technological systems are embedded in sociocultural contexts, their effects can only be fully grasped when those contexts are taken into account. This is especially true for the workplace, which has become an increasingly important area of application for algorithmic decision-support systems (ADSS). ADSS are used, for example, in personnel selection (hiring), wage setting, career development (promotion), and termination. The social acceptance of such systems depends on the social norms that prevail in an organisation: algorithmic systems need to conform to existing social norms, as individuals may react to norm violations by voicing dissent, retaliating, or exiting the organisation.

In this project, we focus on the role of norms in and towards ADSS in employer-employee relationships. More specifically, we will use surveys to study both employees' and employers' normative expectations regarding the use of ADSS in the workplace, which norms these systems need to take into account, and how employees with and without supervisory functions react to the use of norm-violating ADSS. In addition to the role of norms within such systems, we will examine normative structures within organisations by investigating how they moderate the acceptance of ADSS across organisations. Survey experiments will allow us to understand how individuals react to norm violations and what role technical literacy plays in the acceptance of AI-guided decision-making.

With this project, we respond to increasing calls to understand the impact of algorithms and artificial intelligence on society and to integrate research across scientific disciplines, such as computer science and the social sciences. To date, research in this area has been dominated by work from computer science, where researchers are experienced with the technical aspects of algorithms but often lack a background in theorising, measuring, and explaining the social norms and normative structures of the contexts in which algorithmic systems are deployed. Moreover, ambiguity in terminology (e.g., fairness norms) and a reliance on post hoc explanatory studies of perceptions and norms in algorithmic systems that are only weakly theory-driven have produced a plethora of findings that do not generalise beyond a study's specific context. Our framework for conceptualising and measuring perceptions of algorithmic decision-support systems and their normative foundations will help academics, the public, and those developing and using such systems understand their impact and acceptance.
DFG Programme: Research Grants
 
 
