
Unification of Smooth and Non-smooth Non-convex Optimization Algorithms

Applicant Professor Dr. Peter Ochs
Subject Area Image and Language Processing, Computer Graphics and Visualization, Human Computer Interaction, Ubiquitous and Wearable Computing
Mathematics
Funding Funded from 2018 to 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 397569761
 
Year of creation 2022

Summary of project results

The goal of the project was the development of an abstract algorithmic framework that unifies several algorithms and convergence guarantees which are important for applications in image processing, computer vision, and machine learning. High expectations were placed on the framework: it should yield highly efficient, easily implementable, and flexible algorithms that can be accelerated, with state-of-the-art convergence guarantees in the general setting of non-smooth non-convex optimization.

The key idea is the observation that simple algorithmic schemes such as Gradient Descent, Newton's Method, or the Conditional Gradient method essentially rely on the sequential minimization of simple surrogates of the objective function. In the smooth setting, these surrogates are mostly constructed from a Taylor expansion together with a distance term that controls the progress of the algorithm and thereby ensures convergence guarantees. In the non-smooth setting, similar Taylor-like model functions can be constructed and used efficiently for the updates within algorithms; however, these models are no longer unique. We contributed to the understanding of the actual flexibility of such a framework in the non-smooth setting. Moreover, significant additional generality could be added by formulating an appropriate class of distance functions: Bregman distances. We directly incorporated the most recent algorithmic results for Bregman distances into our framework, which led to direct (non-alternating) solution strategies for matrix factorization and deep linear neural network problems and went beyond the initial goals.

Also regarding the incorporation of acceleration into the abstract model function framework, we contributed beyond our initial expectations by developing a state-of-the-art convex-concave inertial backtracking strategy that, in the non-convex setting, is able to adapt to the local convexity of the objective function. This development met the high expectations for a practical algorithmic framework in the non-smooth non-convex setting, which we could eventually complement with the best possible convergence guarantees in this challenging setup by proving that the iterates generated by the algorithm converge to a stationary point. In summary, we set aside a small part of our initial plan in favor of achievements for the overall framework that go beyond the expectations of the project.
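
To make the surrogate idea concrete, below is a minimal illustrative sketch (not code from the project): one iteration minimizes a Taylor-like model of the objective plus a distance term, specialized here to the Euclidean distance (the simplest Bregman distance) and to an l1-regularized least-squares instance, for which the model minimization has a closed-form proximal gradient solution. The problem data, the helper names soft_threshold and model_minimization_step, and the step-size rule are assumptions made purely for illustration; the project's framework replaces the Euclidean term with general Bregman distances and adds inertia and backtracking.

# Illustrative sketch: sequential minimization of a Taylor-like model function
# plus a distance term. With f smooth and g = lam*||.||_1, minimizing
#   f(x_k) + <grad f(x_k), y - x_k> + g(y) + (1/(2*tau)) * ||y - x_k||^2
# over y has the closed-form proximal gradient (ISTA-type) solution below.
import numpy as np

def soft_threshold(v, thresh):
    # Closed-form minimizer of the l1 term plus a quadratic distance term.
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def model_minimization_step(x, grad_f, tau, lam):
    # One model-function update with the Euclidean (Bregman) distance.
    return soft_threshold(x - tau * grad_f(x), tau * lam)

# Hypothetical problem instance: f(x) = 0.5*||Ax - b||^2 (smooth part),
# g(x) = lam*||x||_1 (non-smooth part).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L for the smooth part

x = np.zeros(20)
for _ in range(500):
    x = model_minimization_step(x, grad_f, tau, lam)

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(f"objective after 500 model-minimization steps: {obj:.4f}")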

Project-related publications (selection)

  • Adaptive FISTA for Non-convex Optimization. SIAM Journal on Optimization, 29(4):2482-2503, 2019
    P. Ochs, T. Pock
    (See online at https://doi.org/10.1137/17M1156678)
  • Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms. Conference on Neural Information Processing Systems (NeurIPS), 2019
    M.C. Mukkamala, P. Ochs
  • Model Function Based Conditional Gradient Method with Armijo-like Line Search. In K. Chaudhuri, R. Salakhutdinov (Eds.): International Conference on Machine Learning (ICML). Proceedings of Machine Learning Research, Vol. 97, 4891-4900, PMLR, 2019
    Y. Malitsky, P. Ochs
    (See online at https://doi.org/10.48550/arXiv.1901.08087)
  • Non-smooth Non-convex Bregman Minimization: Unification and New Algorithms. Journal of Optimization Theory and Applications, 181(1):244-278, 2019
    P. Ochs, J. Fadili, T. Brox
    (See online at https://doi.org/10.1007/s10957-018-01452-0)
  • Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization. SIAM Journal on Mathematics of Data Science, 2(3):658-682, 2020
    M.C. Mukkamala, P. Ochs, T. Pock, S. Sabach
    (See online at https://doi.org/10.1137/19M1298007)
  • Bregman Proximal Framework for Deep Linear Neural Networks. International Conference on Scale Space and Variational Methods in Computer Vision (SSVM). Lecture Notes in Computer Science, Springer, 2021
    M.C. Mukkamala, F. Westerkamp, E. Laude, D. Cremers, P. Ochs
    (See online at https://doi.org/10.48550/arXiv.1910.03638)
  • Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms. Journal of Global Optimization, 83:753-781, 2022
    M.C. Mukkamala, J. Fadili, P. Ochs
    (See online at https://doi.org/10.1007/s10898-021-01114-y)
 
 
