Project Details

Variational Gradient Flows on Probability Spaces and Generative Models for Bayesian Inverse Problems in Image Processing

Subject Area Mathematics
Term since 2023
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 530824055
 
Wasserstein gradient flows have received much attention for many years from both theoretical and application perspectives, and have recently become popular in deep learning approaches. In particular, several functionals besides the Kullback-Leibler divergence, such as the maximum mean discrepancy (MMD), have been used, and geometries different from those of Wasserstein spaces, such as the Stein metric, have been exploited.

In this project, we are interested in neural gradient flows for Bayesian inverse problems. We intend to model and analyze appropriate functionals on the space of probability measures whose minimizers are the posterior distributions of given Bayesian inverse problems. We will then derive neural network-based models for generating these posterior distributions based on Wasserstein gradient flows. We are particularly interested in the case of maximum mean discrepancy functionals with non-smooth Riesz kernels. Here, Wasserstein gradient flows show a rich structure, since the intrinsic dimension of the support of the measures along the flow may vary. Further, we intend to train invertible residual flows using Wasserstein steepest descent flows; in this context, we want to analyze the reversibility of ODEs in Wasserstein spaces.

From the algorithmic point of view, we aim to derive proximal splitting algorithms for minimization problems on Wasserstein spaces using forward and backward schemes. As a special case, this includes a convergence analysis for the forward Euler scheme of gradient flows, which can serve as a starting point for proving the existence of Wasserstein steepest descent flows for certain functionals.

In terms of applications, we first focus on problems in image processing. For certain applications, such as in materials science, only small data sets are available, and we will propose new variational models whose regularization terms use only patch-based or more general feature-based information.
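To illustrate the kind of flow described above, the following minimal sketch (a simplification of our own, not the project's actual method; all function names are assumptions) discretizes the MMD particle flow with the non-smooth Riesz kernel K(x, y) = -||x - y|| (the negative distance kernel, r = 1) using an explicit forward Euler scheme:

```python
import numpy as np

def mmd_grad(X, Y):
    """Gradient of the discrete MMD functional with the Riesz kernel
    K(x, y) = -||x - y|| with respect to the flow particles X,
    where Y holds the (fixed) target particles."""
    m, n = len(X), len(Y)

    def pair_grads(A, B):
        # sum_b grad_a ( -||a - b|| ) = sum_b -(a - b) / ||a - b||
        D = A[:, None, :] - B[None, :, :]               # (|A|, |B|, d)
        norms = np.linalg.norm(D, axis=-1, keepdims=True)
        norms[norms == 0.0] = 1.0                       # a = b contributes zero
        return -(D / norms).sum(axis=1)

    # interaction term (repulsion among X) and attraction term (X towards Y)
    return pair_grads(X, X) / m**2 - pair_grads(X, Y) / (m * n)

def mmd_flow(X0, Y, step=0.1, n_steps=1000):
    """Explicit (forward) Euler discretisation of the particle flow;
    the factor len(X) accounts for the particle weights 1/m."""
    X = X0.copy()
    for _ in range(n_steps):
        X = X - step * len(X) * mmd_grad(X, Y)
    return X
```

Starting from particles X0 drawn from a reference distribution, `mmd_flow(X0, Y)` moves them towards the empirical target measure supported on Y; the repulsion term spreads the particles over the target's support rather than collapsing them onto single points.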
Finally, we intend to continue our studies on learning charts of manifolds with discrepancy flows. This will provide us with an explicit parameterization of the learned charts.
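The invertible residual flows mentioned above build on residual blocks x ↦ x + f(x), which are invertible whenever f is a contraction. A minimal sketch, assuming a one-layer tanh branch with a naive spectral rescaling (the names and the 0.9 Lipschitz bound are our own assumptions, not the project's implementation):

```python
import numpy as np

def f(x, W, b):
    """Residual branch: a one-layer tanh network whose weight matrix is
    rescaled so that Lip(f) <= 0.9 < 1 (naive spectral normalisation)."""
    W_hat = 0.9 * W / np.linalg.norm(W, 2)   # divide by the spectral norm of W
    return np.tanh(x @ W_hat.T + b)

def residual_block(x, W, b):
    """Forward map x -> x + f(x); invertible because f is a contraction."""
    return x + f(x, W, b)

def invert(y, W, b, n_iter=200):
    """Invert the block by the Banach fixed-point iteration
    x_{k+1} = y - f(x_k), which converges since Lip(f) < 1."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - f(x, W, b)
    return x
```

Stacking such blocks yields an invertible residual network whose inverse is computed block by block with the same fixed-point iteration.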
DFG Programme WBP Fellowship
International Connection France
 
 
