Project Details

„Eye Hear U“ – Multimodal acquisition, simulation and audiovisual enhancement for the individual training of basic functional laparoscopic skills

Subject Areas: Human Factors, Ergonomics, Human-Machine Systems
General and Domain-Specific Teaching and Learning
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Data Management, Data-Intensive Systems, Computer Science Methods in Business Informatics
Term: since 2021
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 454898091
 
Minimally invasive endoscopic surgery is a well-established surgical practice. However, its inherent physical and operative restrictions, e.g. decoupled hand-eye coordination, a limited field of view and operating space, and decreased depth perception, are demanding for both surgeon and equipment. Faced with an already complex intraoperative environment, surgeons experience an even higher cognitive workload and must attain exceptional spatial awareness and instrumentation skill from training and live operations. Since spatial cognition and orientation abilities vary between individuals, the quality of laparoscopic training with physical and virtual simulators depends directly on the predisposition of novice surgeons. Training effectiveness and a potential skill transfer to the operating room are therefore generally not predictable. A missing component is the continuous tracking of training results, which limits evaluation methods to skill progression rather than outcome.

The purpose of this research proposal is therefore the development of a novel training assistance system that acquires a continuous multimodal representation of an individual laparoscopic training process, calculates the current and overall training progression and, in response, provides aural and visual feedback cues for an improved perception and internalization of the trained laparoscopic instrument movements. A laparoscopic physical simulator extended with multiple sensor components will be used to generate a knowledge base of basic bimanual laparoscopic skills. The assessment of training progression and quality, currently based on subjective skill questionnaires, will be extended with objective, machine-readable metrics as an unbiased description of laparoscopic expertise. The calculated metrics will then be used to feed back aural and visual cues that raise awareness of incorrect motions and potentially improve comprehension of the learning process.

Clinical novices will perform supervised training sessions on laparoscopic simulators. After initial priming with coordinative and dexterity tasks, each learner will complete a training session on tying laparoscopic knots as the exemplary bimanual skill. Trainees will be divided randomly into subject and control groups, and training results will be anonymized and evaluated by independent clinical experts. Subsequently, the training results will be added to the knowledge base and integrated into a demonstrator of a novel laparoscopic assistance system for the individual training of basic bimanual laparoscopic skills based on audiovisual feedback. The preliminary system functionality will then be validated and evaluated by clinical experts.
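As a purely illustrative sketch (not part of the proposal), the following Python snippet shows how objective, machine-readable motion metrics such as path length, motion smoothness and bimanual coordination, which are commonly used in laparoscopic skill assessment, could be derived from tracked instrument tip trajectories. All function names and the assumption of synchronously sampled (N, 3) tip positions at a fixed interval dt are hypothetical.

    # Illustrative sketch: objective motion metrics from tracked instrument tips.
    # Assumptions (not from the proposal): each instrument tip is sampled
    # synchronously at a fixed interval dt as an (N, 3) array of positions.
    import numpy as np

    def path_length(tip: np.ndarray) -> float:
        """Total distance travelled by the instrument tip."""
        return float(np.linalg.norm(np.diff(tip, axis=0), axis=1).sum())

    def smoothness_jerk(tip: np.ndarray, dt: float) -> float:
        """Duration- and amplitude-normalized integrated squared jerk.
        Dimensionless; lower values indicate smoother motion."""
        vel = np.gradient(tip, dt, axis=0)
        acc = np.gradient(vel, dt, axis=0)
        jerk = np.gradient(acc, dt, axis=0)
        duration = dt * (len(tip) - 1)
        integral = np.sum(np.sum(jerk ** 2, axis=1)) * dt  # rectangle-rule integral
        return float(integral * duration ** 5 / max(path_length(tip), 1e-9) ** 2)

    def bimanual_coordination(left: np.ndarray, right: np.ndarray, dt: float) -> float:
        """Correlation of left/right tip speeds as a simple coordination index."""
        speed_l = np.linalg.norm(np.gradient(left, dt, axis=0), axis=1)
        speed_r = np.linalg.norm(np.gradient(right, dt, axis=0), axis=1)
        return float(np.corrcoef(speed_l, speed_r)[0, 1])

Per-trial metrics of this kind could populate the knowledge base and, once reference values are learned from expert data, could drive the aural and visual feedback cues described above.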
DFG Programme: Research Grants
Major Instrumentation: 3D stereo endoscopy system
Instrumentation Group: 3920 Endoscopes (Medicine)
 
 
