Project Details

On-the-fly data synthesis for deep learning-based analysis of 3D+t microscopy experiments

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing; Developmental Biology
Term since 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 447699143
 
Multidimensional fluorescence microscopy allows capturing 3D videos (3D+t) of entire model organisms at high spatial and temporal resolution. Automated image analysis can be used to detect and segment fluorescently labeled structures like cell nuclei and plasma membranes and to follow their temporal dynamics in terabyte-scale 3D+t image data sets. However, no algorithms are yet available that automatically provide error-free segmentation and tracking results. Moreover, deep learning-based methods, although potentially well suited for such analysis tasks, are not yet sufficiently applicable to these large-scale image analysis problems due to a significant lack of suitable training data, limited GPU memory, and the extreme time required for manual annotation.

The aim of the proposed project is to develop new methods for generating synthetic training data that can be used for 3D segmentation and tracking tasks in developmental biology. The generated data will enable the training of data-hungry supervised deep learning models and extend existing image analysis competitions by providing benchmark data sets for large-scale 3D+t problems. As a first step, we will create realistic ground truth images of frequently used model organisms, including simulations of embryo surface shapes and realistic spatiotemporal distributions of artificial cells within the embryos. The simulated ground truth will encompass segmentation masks of cell nuclei and plasma membranes, realistic frame-to-frame displacements of cells, and cell cycle stages, as well as the complete cell lineage to cope with cell division events.

Based on generative adversarial networks, we will develop image generators that produce realistic-looking, time-resolved 3D microscopy images from the label image domain using unpaired image-to-image translation and instance-level conditioning. This involves the development of new topology-preserving losses and new ways of supplying the models with auxiliary information about the global image context, such that the processing of small local patches will still adhere to the global patterns of the specimen.

All developed building blocks will be implemented in a generic software framework suitable for on-the-fly data generation with parametrizable difficulty levels. This will enable smooth curriculum learning without having to annotate a single raw image manually. For instance, training of a supervised segmentation model could start with relatively crisp images, and the difficulty could then be raised successively by decreasing the signal-to-noise ratio or by varying object appearance, object movements, and cell cycle stages. With an effectively unlimited amount of training data that can be fed to the model and a new level of realism, we envision that the developed methods will largely resolve the lack of available training data and yield well-generalizing deep neural networks for 3D+t microscopy-based experiments.
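To illustrate the ground-truth simulation step, the following is a minimal sketch, assuming a simple ellipsoidal embryo shell with spherical nuclei; the function name, volume shape, cell count, and radii are hypothetical illustration values, not the project's actual pipeline.

    # Minimal sketch: place cells near the surface of an ellipsoidal "embryo"
    # and rasterize them as spherical nuclei into a 3D instance label volume.
    # All parameters below are hypothetical illustration values.
    import numpy as np

    def synthesize_label_volume(shape=(64, 128, 128), n_cells=60,
                                semi_axes=(25, 55, 55), nucleus_radius=3,
                                rng=None):
        """Return a 3D uint16 volume where voxel value k marks cell instance k."""
        rng = rng or np.random.default_rng(0)
        labels = np.zeros(shape, dtype=np.uint16)
        center = np.array(shape) / 2.0
        zz, yy, xx = np.indices(shape)

        # Sample cell centers uniformly on (and slightly below) the ellipsoid surface.
        phi = rng.uniform(0, 2 * np.pi, n_cells)
        theta = np.arccos(rng.uniform(-1, 1, n_cells))
        radial = rng.uniform(0.9, 1.0, n_cells)  # stay close to the shell
        cz = center[0] + radial * semi_axes[0] * np.cos(theta)
        cy = center[1] + radial * semi_axes[1] * np.sin(theta) * np.sin(phi)
        cx = center[2] + radial * semi_axes[2] * np.sin(theta) * np.cos(phi)

        # Rasterize each nucleus as a sphere; later cells overwrite earlier ones.
        for k in range(n_cells):
            mask = ((zz - cz[k]) ** 2 + (yy - cy[k]) ** 2 +
                    (xx - cx[k]) ** 2) <= nucleus_radius ** 2
            labels[mask] = k + 1
        return labels

    labels_t0 = synthesize_label_volume()
    print(labels_t0.max(), "cell instances rasterized")

A full implementation along these lines would additionally evolve the centers over time to obtain frame-to-frame displacements, cell cycle stages, and the lineage tree mentioned above.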
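The unpaired label-to-image translation could, in its simplest form, follow a CycleGAN-style scheme (Zhu et al., 2017). The sketch below shows only one loss computation step with tiny stand-in networks; the architectures and the cycle weight are placeholders, and the topology-preserving losses and instance-level conditioning described above would enter as additional terms.

    # Minimal CycleGAN-style sketch in PyTorch: adversarial + cycle-consistency
    # losses for translating label patches into microscopy-like patches.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def tiny_net(in_ch, out_ch):
        # Stand-in for a real 3D generator/discriminator backbone.
        return nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
                             nn.Conv3d(8, out_ch, 3, padding=1))

    G = tiny_net(1, 1)       # labels -> synthetic microscopy image
    G_back = tiny_net(1, 1)  # microscopy image -> labels (backward generator)
    D = tiny_net(1, 1)       # discriminator on the image domain

    labels = torch.rand(2, 1, 16, 32, 32)  # simulated label patches
    real = torch.rand(2, 1, 16, 32, 32)    # unpaired real microscopy patches

    fake = G(labels)
    logits_fake = D(fake)
    adv = F.binary_cross_entropy_with_logits(logits_fake,
                                             torch.ones_like(logits_fake))
    cycle = F.l1_loss(G_back(fake), labels)  # labels must survive the round trip
    loss_G = adv + 10.0 * cycle              # 10.0 is a common cycle weight, hypothetical here
    # A topology-preserving loss would be added to loss_G at this point.

    logits_real = D(real)
    loss_D = (F.binary_cross_entropy_with_logits(logits_real,
                                                 torch.ones_like(logits_real)) +
              F.binary_cross_entropy_with_logits(D(fake.detach()),
                                                 torch.zeros_like(logits_real)))
    # Optimizer steps for G/G_back and D are omitted for brevity.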
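The curriculum-learning idea can be expressed as a single difficulty scalar that controls the degradation applied on the fly. A minimal sketch, assuming a linear schedule and Gaussian blur plus noise as the only degradations (all parameter values are hypothetical):

    # Minimal sketch: on-the-fly degradation with a difficulty in [0, 1] that
    # maps to a decreasing signal-to-noise ratio. Parameters are illustrative.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def degrade(image, difficulty, rng):
        """Blur the image and add Gaussian noise; higher difficulty = lower SNR."""
        sigma_blur = 0.5 + 2.0 * difficulty
        noise_std = 0.02 + 0.3 * difficulty
        out = gaussian_filter(image.astype(np.float32), sigma=sigma_blur)
        out += rng.normal(0.0, noise_std, size=out.shape)
        return np.clip(out, 0.0, 1.0)

    rng = np.random.default_rng(1)
    crisp = np.zeros((32, 64, 64), dtype=np.float32)
    crisp[14:18, 28:36, 28:36] = 1.0  # toy bright object standing in for a rendered cell

    for epoch in range(5):
        difficulty = epoch / 4.0          # simple linear curriculum schedule
        sample = degrade(crisp, difficulty, rng)
        # feed (sample, crisp > 0) to the segmentation model here

In the full framework, the same scalar would also control object appearance, movements, and cell cycle stages, so that training can progress from easy to hard without any manual annotation.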
DFG Programme Research Grants
 
 
