Project Details
Progressive, Spatio-Temporal Consistent Inline Activity Reconstruction and Recognition
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
since 2025
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 561591579
The physical activities we perform play a key role in how we structure our lives. The type of activity, and how it is performed, can reveal a person's intention, routine, fitness, and state of mind. Consequently, interest in machine recognition of human activities is growing significantly across research fields ranging from cognitive science to healthcare.

While current approaches to spatio-temporal activity recognition mainly focus on direct analysis of sensor data, e.g. from IMUs, and on abstract representations of the objects in the user's environment, this project proposes to harness consistent semantic 3D scene reconstruction for interactive spatio-temporal activity analysis and abstraction. The project captures human activities using data from body-worn RGB-D cameras, eye-tracking glasses, and wearable IMUs. Its objective is to investigate a new paradigm that moves from post-capture activity recognition to interactive activity reconstruction and recognition by progressively reconstructing the geometry of the user's surroundings and of any manipulated objects in it, as well as the trajectories of the user's hand and of those objects. Moreover, the proposed approach comprises novel methods for annotation, ideally online, and for progressive, visually assisted analysis of activities and activity sequences. This interactive feedback opens a new paradigm in which users, as domain experts, can perform inline monitoring of previously unseen activities and activity sequences.

The proposed concept rests on three main hypotheses: (1) Semantic activity fusion, i.e. transferring and enhancing methods from online semantic scene reconstruction, allows for holistic modeling and analysis of the monitored activity processes. (2) Short-term analysis and abstraction of activities alleviates the training-data problems of machine learning for activity recognition. (3) Integrating visual feedback enables online fusion of user annotations, leading to more accurate spatial activity models and to a more lightweight process for monitoring human activities.

Within the project, two prototypes will be developed to validate the wearable methodology, yielding a holistic capture of user activities, in particular for replicable and safety-critical manual processes.
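To make the idea of semantic activity fusion more concrete, the following minimal Python sketch illustrates how progressively fused trajectories and online user annotations could be organized. All names, data structures, and parameters here are hypothetical illustrations and are not taken from the project itself.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Frame:
    """One time step of fused wearable sensing (hypothetical schema)."""
    timestamp: float
    hand_position: tuple   # hand pose, e.g. from body-worn RGB-D tracking
    object_id: str         # manipulated object from semantic segmentation
    object_position: tuple

@dataclass
class ActivityModel:
    """Progressive spatio-temporal model of an activity sequence."""
    trajectories: dict = field(default_factory=dict)  # name -> [(t, position)]
    annotations: dict = field(default_factory=dict)   # t -> user-given label

    def fuse_frame(self, frame: Frame, annotation: Optional[str] = None) -> None:
        # Progressively extend the hand and object trajectories ...
        self.trajectories.setdefault("hand", []).append(
            (frame.timestamp, frame.hand_position))
        self.trajectories.setdefault(frame.object_id, []).append(
            (frame.timestamp, frame.object_position))
        # ... and fuse an online user annotation when one is given.
        if annotation is not None:
            self.annotations[frame.timestamp] = annotation

    def abstract(self, window: float) -> list:
        """Short-term abstraction: cut the hand trajectory into time windows."""
        hand = self.trajectories.get("hand", [])
        if not hand:
            return []
        t0 = hand[0][0]
        segments: dict = {}
        for t, position in hand:
            segments.setdefault(int((t - t0) // window), []).append(position)
        return [segments[k] for k in sorted(segments)]

# Example: two frames of a grasping motion, annotated inline by the user.
model = ActivityModel()
model.fuse_frame(Frame(0.0, (0.10, 0.20, 0.90), "cup", (0.30, 0.20, 0.80)))
model.fuse_frame(Frame(0.5, (0.12, 0.28, 0.88), "cup", (0.30, 0.28, 0.80)),
                 annotation="grasp cup")
print(model.abstract(window=1.0))

In the envisioned prototypes, the per-frame input would of course come from the fused RGB-D, eye-tracking, and IMU streams rather than from hand-written tuples; the sketch only fixes the shape of the fused model, not any concrete method of the project.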
DFG Programme
Research Grants
