Project Details

mSimCam – A Multimodal Simulation Environment for Camera-Based Sensing of Cardiorespiratory Activity

Subject Area Biomedical Systems Technology
Term since 2024
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 534067722
 
Multimodal camera-based sensing is gaining popularity in medical contexts, as it allows non-contact monitoring of cardiac and respiratory activity. However, because most sensing is performed on facial videos, privacy concerns hinder the sharing of data between research groups, which makes independent validation difficult. This is particularly problematic as more and more approaches rely on data-driven (deep) machine learning. Moreover, the publicly available data lacks diversity, both in subjects (skin tone, BMI, age, gender, health status, etc.) and in measurement scenarios (lighting, movement artifacts, backgrounds, camera position, etc.).

In several scientific and commercial domains, simulation has proven its worth, from sub-atomic particles and protein structures to processing plants. In computer vision, simulated 3D environments have proven useful for autonomous driving. To the best of our knowledge, no framework currently allows the simulation of vital sign estimation from multimodal cameras. Still, a large body of related work suggests that such an approach is both feasible and valuable for real-life applications.

The objective of the project is to generate digital twins for the synthesis of realistic, diverse, and privacy-safe multimodal data for camera-based sensing of cardiorespiratory activity. To this end, we propose a high-fidelity, open-source, multimodal, multi-level simulation environment. Its core is a 3D rendering pipeline that synthesizes realistic camera data (visual imaging, depth imaging, thermography), driven by a modular multi-level simulation of cardiorespiratory activity.

The project is guided by the following research hypotheses:
1) Multi-level synthesis of cardiorespiratory signals can drive a 3D rendering pipeline that synthesizes camera data (sketched below).
2) The synthesized camera data is realistic and useful: it allows the training and optimization of machine-learning approaches that extract cardiorespiratory signals from camera data. When trained with the added synthetic data, these approaches show superior performance when evaluated on real camera data, especially in cases that currently suffer from a lack of training data, such as extreme heart and respiratory rates or non-white skin tones.
3) The simulation environment yields insights into real-world measurement setups, as it allows the simulation and systematic analysis of various influences (sensor parameters, SNR, lighting, background, occlusion, movement, pathological states, …); see the second sketch below.

In summary, the proposed project sets out to advance the field of unobtrusive vital sign estimation and will provide the scientific community with a framework that helps make results more reproducible and counteract biases.
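To make hypothesis 1 concrete, the following deliberately minimal Python sketch illustrates the principle of driving frame synthesis with a simulated cardiorespiratory signal. All names and parameter values are hypothetical and greatly simplified stand-ins for the planned multi-level simulation and 3D rendering pipeline: a toy cardiac waveform with respiratory amplitude modulation shifts the brightness of a flat "skin patch", mimicking the subtle photoplethysmographic intensity variations that camera-based methods exploit, and the driving heart rate is then recovered from the synthetic frames.

    import numpy as np

    def synth_cardioresp_signal(duration_s, fps, heart_rate_bpm=72.0, resp_rate_bpm=15.0):
        # Toy multi-level model: a cardiac oscillation whose amplitude is weakly
        # modulated by a slower respiratory oscillation. Physiology-based
        # simulators are far richer; this only illustrates the coupling idea.
        t = np.arange(0.0, duration_s, 1.0 / fps)
        cardiac = 0.5 * (1.0 + np.sin(2.0 * np.pi * heart_rate_bpm / 60.0 * t))
        resp = 0.5 * (1.0 + np.sin(2.0 * np.pi * resp_rate_bpm / 60.0 * t))
        return cardiac * (0.9 + 0.1 * resp), t

    def render_frames(signal, height=64, width=64, base_gray=0.6, gain=0.005):
        # Stand-in for a 3D rendering pipeline: every frame is a flat "skin
        # patch" whose brightness varies subtly with the blood-volume signal,
        # mimicking the photoplethysmographic effect used in camera-based sensing.
        frames = np.empty((len(signal), height, width), dtype=np.float32)
        for i, s in enumerate(signal):
            frames[i] = base_gray + gain * s
        return frames

    if __name__ == "__main__":
        fps = 30
        sig, t = synth_cardioresp_signal(duration_s=10.0, fps=fps)
        frames = render_frames(sig)
        # Recover the driving signal from the synthetic video: spatial mean per frame.
        trace = frames.mean(axis=(1, 2))
        spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
        # The dominant spectral peak matches the simulated heart rate (72 bpm).
        print(f"estimated heart rate: {freqs[spectrum.argmax()] * 60.0:.1f} bpm")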
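In the same toy setting, hypothesis 3 can be sketched as a systematic parameter sweep: vary one simulated influence (here, additive pixel noise as an assumed stand-in for sensor SNR) and record the resulting heart-rate estimation error. The estimator and noise model are illustrative only, not part of the proposed environment; the snippet reuses the two functions defined above.

    import numpy as np
    # Assumes synth_cardioresp_signal and render_frames from the sketch above.

    def estimate_hr_bpm(frames, fps=30):
        # Naive estimator: spatial mean per frame, then the dominant FFT peak.
        trace = frames.mean(axis=(1, 2))
        spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
        return freqs[spectrum.argmax()] * 60.0

    # Sweep one assumed influence factor and measure how the estimate degrades.
    sig, _ = synth_cardioresp_signal(duration_s=10.0, fps=30)
    clean = render_frames(sig)
    rng = np.random.default_rng(0)
    for noise_std in (0.0, 0.01, 0.1, 0.5):
        noisy = clean + rng.normal(0.0, noise_std, clean.shape)
        err = abs(estimate_hr_bpm(noisy) - 72.0)
        print(f"noise std {noise_std:4.2f} -> heart-rate error {err:5.1f} bpm")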
DFG Programme Research Grants
 
 
