Project Details
Interaction across Multiverses in Desired Realities
Applicant
Dr. Andreas Fender, Ph.D.
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
since 2026
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 563933827
Ambiguity and uncertainty of input have been a long-standing challenge in Human-Computer Interaction. With mid-air interaction in particular, a standard input modality in Virtual Reality applications, input can be highly uncertain for several reasons. As opposed to conventional input, mid-air input relies on potentially error-prone tracking of hands or hand-held controllers. Even with hypothetically perfect tracking, uncertainties originating from the user (e.g., imprecise motions or natural jitter of the hands) make the input imprecise. Previous research aimed to resolve such uncertainties using probabilistic models, typically within a specific application context (e.g., text input with auto-complete). Users, however, always see a single outcome or need to disambiguate explicitly (e.g., by selecting a word from a list of auto-complete suggestions). We propose a new multiverse paradigm in which uncertainty is a fundamental part of the application. As an instance of this paradigm, we describe an architecture that revolves around "Possibilities" as a generalization of the entities found in conventional single-universe applications. Our architecture allows applications to split into a visible multiverse whenever the input is uncertain. Users see all possibilities at once and simply interact with the desired one. Possibilities that are ignored naturally vanish over time. This approach retains the user's agency without resorting to explicit selection of the desired possibility, and it avoids the cost of error correction. With our multiverse architecture, uncertainties can be resolved across application contexts: an uncertain menu selection can be resolved later through interaction in the virtual world. For instance, a user selects a tool in a mid-air menu, but the button press is uncertain, so the selection could have meant either a brush tool or a spray tool. Through follow-up interaction, i.e., making brush movements versus spray movements around a virtual object, the earlier menu selection is disambiguated. Our possibility-based paradigm handles such cross-context uncertainties implicitly, without sacrificing encapsulation and other aspects of scalable development of interactive Virtual Reality applications. The paradigm opens up many challenges and opportunities from both a technical and a human-factors perspective. In this project, we implement our multiverse architecture, develop approaches for multiverse physics and rendering, and investigate the advantages and limitations of multiverse interaction through user studies.
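To make the possibility concept more concrete, the following minimal sketch shows one way such an architecture could be organized. It is an illustrative assumption, not the project's actual implementation; all names (Possibility, Multiverse, split, observe_interaction) and the weighting scheme are hypothetical:

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Possibility:
    """One branch of the multiverse: an entity state plus a likelihood weight."""
    label: str      # e.g., "brush" or "spray"
    weight: float   # current weight of this branch


@dataclass
class Multiverse:
    """Holds all currently visible branches of the application."""
    branches: list[Possibility] = field(default_factory=list)

    def split(self, candidates: dict[str, float]) -> None:
        """On uncertain input, split into one visible branch per candidate."""
        self.branches = [Possibility(label, weight) for label, weight in candidates.items()]

    def observe_interaction(self, evidence: dict[str, float],
                            decay: float = 0.8, threshold: float = 0.05) -> None:
        """Reweight branches from follow-up interaction; ignored branches decay and vanish."""
        for branch in self.branches:
            branch.weight *= decay * (1.0 + evidence.get(branch.label, 0.0))
        self.branches = [b for b in self.branches if b.weight > threshold]


if __name__ == "__main__":
    multiverse = Multiverse()
    # Uncertain mid-air button press: the selection could mean a brush or a spray tool.
    multiverse.split({"brush": 0.55, "spray": 0.45})
    # Follow-up motion resembles brushing, so only the brush branch is reinforced.
    for _ in range(10):
        multiverse.observe_interaction({"brush": 0.5})
    print([(b.label, round(b.weight, 3)) for b in multiverse.branches])

In this sketch, an uncertain menu selection creates one weighted branch per plausible tool; follow-up brush-like motions reinforce the brush branch, while the ignored spray branch decays until it is removed, mirroring how ignored possibilities vanish over time.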
DFG Programme
Research Grants
Co-Investigator
Professor Dr. Dieter Schmalstieg
