Project Details

Feasibility, acceptance, and data quality of new multimodal surveys (FACES)

Subject Area Empirical Social Research; Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2024
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 539621548
 
The project aims to open up a new multimodal data space for survey research, both from an analytical survey-research perspective and from a computer science perspective. This data space will use and further develop recent innovations in VR and AI to replace face-to-face interviews, thereby addressing the increasing costs and decreasing response rates of interviewer-based survey modes. To this end, we will develop a multi-interface system for interviewer-based online surveys based on VR and XR (Mixed Reality) and test and evaluate it through a series of experiments and in-depth comparisons with video-based methods. The system will cover a wide range of variability in terms of avatars and situational parameters of interviews, interfaces, and AI technologies for the automatic processing of speech and behavioral data. To test this approach, we will systematically compare avatar-based, human-controlled live interviews with video-based live interviews. In the former, the behavioral degrees of freedom of the interactants will be significantly extended by the choice of avatars. Because of the large number of possible feature combinations, we will proceed in two ways: (1) the effects of avatar and situational features will be investigated in experiments in order to make pre-selections; (2) promising feature combinations will be identified that can be used in the intended interview context and tested under real conditions (real interviews).

We will address three research questions:
RQ1: What are the (dis)advantages of avatar-based interviews compared to video-based interviews in terms of acceptance, feasibility, and data quality?
RQ2: Which interviewer effects are reduced by which combinations of features, and how do these features interact (e.g., when do they strengthen or weaken each other)?
RQ3: How can the answers to these questions be integrated into an avatar theory that makes the training of automated, fully immersive virtual interviewers transparent?

To answer RQ1-3, the project goes far beyond existing studies by considering four scenarios. In scenarios 2 and 3, the interviewee's multimodal behavior is still active but cannot be recorded as in scenario 1. Scenarios 1-3 are compared with scenario 4 to investigate the effects of avatar-based interviews. The scenarios differ in the interviewee's degree of immersion and thus allow comparative views of the impact of other-avatars in (partially) virtualized interviews, while scenario 1 additionally allows the study of the Proteus effect. Scenarios 1-2 will be investigated by means of specialized experiments; scenarios 3-4 will additionally be investigated under real interview conditions with former NEPS SC3 participants. In this way, the project will be the first to investigate self- and other-avatar effects in a coherent way, both in terms of avatar features and in comparison with classic interviews. As the project will develop an open-source system, it will pave the way for future avatar-based live surveys.
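To make the pre-selection step (1) concrete: the number of experimental cells grows multiplicatively with each avatar or situational feature, which is why screening experiments must precede real interviews. The following Python sketch illustrates this combinatorics only; the feature names and levels (avatar_realism, setting, and so on) are invented for illustration and are not the project's actual operationalization.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative, hypothetical feature sets -- not the project's
# actual avatar or situational parameters.
AVATAR_FEATURES = {
    "avatar_realism": ["cartoon", "stylized", "photorealistic"],
    "avatar_gender": ["female", "male", "matched_to_interviewer"],
}

SITUATIONAL_FEATURES = {
    "setting": ["neutral_room", "office", "home_like"],
    "interviewee_immersion": ["none", "partial", "full"],
}

@dataclass(frozen=True)
class Condition:
    """One cell of the full factorial design: a fixed combination
    of avatar and situational feature levels."""
    features: tuple  # ((feature_name, level), ...)

def enumerate_conditions(*feature_sets):
    """Enumerate every combination of feature levels across the
    given feature dictionaries (full factorial design)."""
    merged = {k: v for fs in feature_sets for k, v in fs.items()}
    names = sorted(merged)
    for levels in product(*(merged[n] for n in names)):
        yield Condition(features=tuple(zip(names, levels)))

conditions = list(enumerate_conditions(AVATAR_FEATURES, SITUATIONAL_FEATURES))
print(f"{len(conditions)} candidate conditions before pre-selection")
# Even this toy space yields 3 * 3 * 3 * 3 = 81 cells. Step (1)
# screens such cells experimentally; only promising combinations
# proceed to step (2), testing under real interview conditions.
```

Run as-is, the sketch reports 81 candidate conditions, which shows why the project cannot test all combinations in real interviews and instead uses experiments to narrow the design space first.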
DFG Programme Infrastructure Priority Programmes
 
 
