Project Details

Context-aware Cell Tracking in 3D+t Microscopy Image Data

Subject Area: Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing; Developmental Biology; Cell Biology
Term: since 2024
Project identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 534768108
 
Current fluorescence microscopes allow studying early embryonic development in 3D and over time (3D+t). To decipher large-scale tissue reorganization at the cellular level, automatic segmentation and tracking methods are essential for coping with this potentially terabyte-scale 3D+t image data. Fundamental problems for both segmentation and tracking are data-intrinsic events such as cell movement, cell division, and cell death, as well as inhomogeneous expression of fluorescent dyes, imaging artifacts, and algorithmic flaws. Reconstructing the lineage from the fertilized egg to maturely developed tissues and organs, however, is indispensable for answering fundamental questions in developmental biology and related fields.

The aim of the proposed project is the development of a learning-based, content- and context-aware cell tracking pipeline for 2D+t and 3D+t microscopy images. Due to the severe lack of annotated training data, many recent deep learning approaches cannot yet be applied in bioimage analysis, and we will therefore start the project with a comprehensive data synthesis approach. As a first step, we plan to automatically identify the cell cycle stage of cells contained in time-resolved single-cell image snippets. The annotated cell snippets will allow us to create semi-synthetic 3D+t image data that can be custom-tailored to the downstream learning tasks, including a perfect ground truth and controlled variation of image quality, object density, and the appearance, shape, and dynamics of the objects.

We will then use the synthetic ground truth images to develop an embedding-based iterative cell tracking algorithm with a context-aware postprocessing module. In particular, we plan to investigate how graph neural networks and transformer architectures can be exploited to resolve ambiguous link decisions using increased spatiotemporal context. In the case of graph neural networks, we will experiment with local spatiotemporal graphs that connect neighboring nodes in the spatial and the temporal domain. The learned representations from our unsupervised stage identification will serve as powerful feature descriptors of the cells' appearance, and the cell cycle stage of each cell can be exploited to impose constraints on likely stage transitions during edge classification (a minimal sketch of such stage-constrained linking follows below). In a similar fashion, we will investigate to what extent the transformer architecture is suitable for cell tracking; here, we will combine spatiotemporal positional encodings with the learned representations and the cell cycle stage information to learn to predict the correct object associations. As transformers are capable of identifying even global dependencies, we envision that this bird's-eye view will allow disambiguating persistent tracking conflicts. Ultimately, we will apply the methods to large 2D+t live-cell imaging experiments from high-content screens and to 3D+t image data of developing zebrafish embryos.
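To illustrate the idea of stage-constrained, embedding-based linking, the following minimal Python sketch matches detections between two consecutive frames by scoring candidate links with appearance-embedding similarity gated by a cell-cycle stage transition prior. It is a toy stand-in under assumed conventions, not the project's actual pipeline: the function link_frames, the three-stage labeling, and the STAGE_PRIOR matrix are hypothetical placeholders.

    # Hypothetical sketch of frame-to-frame cell linking with appearance
    # embeddings and a cell-cycle stage prior. All names and data are
    # illustrative, not the project's pipeline.
    import numpy as np
    from scipy.spatial import cKDTree

    # Assumed prior on stage transitions (rows: stage at t, cols: stage at
    # t+1); a real prior would be estimated from annotated sequences.
    STAGE_PRIOR = np.array([
        [1.0, 0.5, 0.0],   # interphase -> interphase/mitotic
        [0.0, 1.0, 0.5],   # mitotic    -> mitotic/post-division
        [0.5, 0.0, 1.0],   # post-division -> interphase/post-division
    ])

    def link_frames(pos_t, emb_t, stage_t, pos_t1, emb_t1, stage_t1,
                    k=3, max_dist=15.0):
        """Greedily link detections in frame t to frame t+1.

        pos_*   : (N, 3) centroid coordinates
        emb_*   : (N, D) L2-normalized appearance embeddings
        stage_* : (N,)   integer cell-cycle stage labels
        Returns a list of (i, j) index pairs (track links).
        """
        tree = cKDTree(pos_t1)
        candidates = []
        for i, p in enumerate(pos_t):
            dists, idxs = tree.query(p, k=min(k, len(pos_t1)))
            for d, j in zip(np.atleast_1d(dists), np.atleast_1d(idxs)):
                if d > max_dist:
                    continue
                appearance = float(emb_t[i] @ emb_t1[j])      # cosine similarity
                prior = STAGE_PRIOR[stage_t[i], stage_t1[j]]  # stage gate
                if prior > 0.0:
                    candidates.append((appearance * prior, i, j))
        # Greedy one-to-one assignment by descending score.
        links, used_i, used_j = [], set(), set()
        for score, i, j in sorted(candidates, reverse=True):
            if i not in used_i and j not in used_j:
                links.append((i, j))
                used_i.add(i)
                used_j.add(j)
        return links

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.uniform(0, 100, size=(5, 3))
        emb = rng.normal(size=(5, 16))
        emb /= np.linalg.norm(emb, axis=1, keepdims=True)
        stages = rng.integers(0, 3, size=5)
        # Frame t+1: the same cells, slightly shifted.
        print(link_frames(pos, emb, stages, pos + 1.0, emb, stages))

Note that the greedy one-to-one matching above cannot represent cell divisions or resolve crowded scenes; these are exactly the ambiguous cases where the graph neural network and transformer modules described in the abstract are intended to contribute additional spatiotemporal context.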
DFG Programme: Research Grants
 
 
