Project Details
Neural correlates of trust in human-AI interaction
Subject Area
Human Factors, Ergonomics, Human-Machine Systems
General, Cognitive and Mathematical Psychology
Biological Psychology and Cognitive Neuroscience
Term
since 2025
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 568432038
Cognitive offloading refers to the use of external tools, such as automation, to enhance human cognition by reducing cognitive load. Recently, technologies powered by artificial intelligence (AI) have evolved from static automation into dynamic collaborators capable of handling complex tasks and adapting to the user. In human-automation interaction (HAI), AI can improve efficiency, expand skill sets, and free up mental resources, helping to overcome two crucial bottlenecks of human cognition: visual selective attention (selecting relevant visual information) and visual working memory (maintaining selected information). However, AI's effectiveness depends on well-calibrated trust: humans must rely neither too heavily nor too little on AI.

Measuring trust in automation and AI is challenging. Previous research has used self-report measures (e.g., questionnaires) and behavioral measures (e.g., reaction times) to examine trust. However, self-reports are subjective, disrupt the task flow, cannot track trust dynamics with high temporal resolution, and often reveal the study's purpose. Behavioral trust measures are more objective, but they cannot capture attitudinal trust or underlying cognitive states and are often affected by external factors such as workload or perceived risk.

In this research proposal, we suggest a novel approach using event-related potentials (ERPs) from the electroencephalogram (EEG) as neural markers of trust. We will focus on well-established ERP components that index visual attention (N2pc) and visual working memory (CDA) to assess the degree of cognitive offloading to an AI. EEG measures offer important advantages over self-report and behavioral measures: they are objective and implicit, can dynamically track offloading due to their high temporal resolution, and avoid task disruptions. Our approach is to have humans collaborate with a simple algorithm framed as AI in visual search and change detection tasks.
We will use the N2pc and CDA to measure how much of the task is offloaded to the AI and, hence, how much humans trust the AI. In the first work package, we will demonstrate that the N2pc and CDA are suitable for measuring offloading in visual search and change detection tasks. In the second work package, we will apply the N2pc and CDA as tools to examine how perceived own performance, perceived AI performance, risk, and transparency shape trust. In the third work package, we will use the N2pc and CDA to explore trust dissolution and restoration after trust violations and to identify the most effective repair strategies. In the fourth work package, we will examine the utility of the N2pc and CDA in the context of adaptive AI that dynamically adjusts to user behavior and provides targeted assistance for cognitive bottlenecks. In summary, this research will provide a novel testbed for neural trust measures, allowing HAI researchers to measure cognitive offloading and trust in AI more objectively, reliably, and validly.
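Both the N2pc and the CDA are lateralized ERP components, conventionally quantified as the difference between activity at posterior electrodes contralateral versus ipsilateral to the attended or memorized item. As a minimal sketch of how such a difference wave might be computed (the electrode pairing, array shapes, and function name are illustrative assumptions, not part of the proposal):

```python
import numpy as np

def contra_ipsi_difference(epochs_left_elec, epochs_right_elec, target_side):
    """Average contralateral-minus-ipsilateral difference wave.

    epochs_left_elec / epochs_right_elec: arrays of shape (n_trials, n_samples)
    holding the signal at one left and one right posterior electrode
    (e.g., the PO7/PO8 pair often used for the N2pc and CDA).
    target_side: sequence of 'L'/'R' labels giving the target hemifield
    per trial.
    """
    is_right = (np.asarray(target_side) == "R")[:, None]
    # For right-hemifield targets the LEFT electrode is contralateral,
    # and vice versa for left-hemifield targets.
    contra = np.where(is_right, epochs_left_elec, epochs_right_elec)
    ipsi = np.where(is_right, epochs_right_elec, epochs_left_elec)
    # Subtract per trial, then average across trials to get the ERP
    # difference wave (one value per time sample).
    return (contra - ipsi).mean(axis=0)
```

In this framing, a reduced N2pc or CDA amplitude when the AI is available would indicate that attention or working-memory demands have been offloaded to it.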
DFG Programme
Research Grants
