Project Details
Image-guided non-invasive tracking for radiotherapy using machine learning
Applicant
Professor Dr. Mattias Heinrich
Subject Area
Medical Physics, Biomedical Technology
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2015 to 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 286491894
Physiological motion in tumour patients is a prevailing source of uncertainty that leads to inaccurate delivery of radiation in both conventional radiotherapy and therapeutic ultrasound. Intra-operative guidance by magnetic resonance imaging or ultrasound, complemented by sophisticated image analysis, will play a vital role in providing reliable, accurate and real-time information on tumour motion.

The aim of this project is to advance the current state of the art in intra-operative motion estimation by employing recent approaches from machine learning together with training data obtained from highly accurate keypoint registration. Instead of relying on classical statistical motion models, we plan to learn a cascade of nonlinear regression functions between image features and previously seen motion. Patient-specific image sequences acquired before the start of radiation delivery will be used to generate training data with accurate but more time-consuming image registration. This will yield correspondences for relevant keypoints located on organ surfaces, vessels and the tumour itself. Building on recent advances in computer vision and machine learning, these correspondences are used to train a nonlinear model that links motion to robust image features learned for each modality with deep convolutional networks. By combining shape augmentation with population data and an online learning procedure, we will be able to limit the required number of frames in the training sequence.

Incorporating the prior knowledge of pre-acquired images into the learned model will enable us to accurately track not only the visible tumour and important vessel structures, but also the shape and position of organs at risk, which will help to avoid irradiating healthy tissue and thereby reduce the side effects of radiotherapy. The learned model may furthermore help to estimate the position of anatomical structures that are temporarily out of view or occluded. Owing to the high computational efficiency of the planned regression model, processing times of a few milliseconds per image frame are expected.
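To illustrate the core idea of regressing motion from image features, the following is a minimal sketch, not the project's actual implementation: a small convolutional regressor (PyTorch) that maps an image patch around a keypoint to a 2-D displacement, trained on correspondences such as those produced by an offline keypoint registration step. All shapes, names and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchMotionRegressor(nn.Module):
    """Predicts a (dx, dy) displacement from a 32x32 grey-scale patch (assumed sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16 -> 8
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                     # displacement in pixels
        )

    def forward(self, patch):
        return self.head(self.features(patch))

# Hypothetical training data: patches extracted around keypoints in the
# pre-treatment sequence, with displacements taken from an accurate
# (but slow) registration of that sequence. Random tensors stand in here.
patches = torch.randn(256, 1, 32, 32)
displacements = torch.randn(256, 2)

model = PatchMotionRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(patches), displacements)
    loss.backward()
    optimiser.step()

# At treatment time, a single forward pass per frame yields the motion
# estimate, which is what makes millisecond-scale processing plausible.
with torch.no_grad():
    predicted = model(patches[:1])
```

In this sketch the expensive registration runs only once on the pre-treatment sequence, while the lightweight regressor carries the full intra-operative workload, mirroring the offline-training/online-inference split described above.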
DFG Programme
Research Grants