In September 2020, Peter Ochs was appointed Professor of 'Mathematical Methods in Computer Science' at the Department of Mathematics of the University of Tübingen, where he heads the 'Mathematical Optimization Group'. He is co-opted by the Department of Computer Science and is associate faculty of the IMPRS-IS. We support Peter Ochs with Cluster funding for a research project.
The research focus of Peter Ochs is Mathematical Optimization for problems in Computer Vision and Machine Learning. The challenge is the design of efficient algorithms with theoretical guarantees for state-of-the-art application models. This is particularly important in Machine Learning and Computer Vision, where the high dimensionality of the problems prohibits the use of black-box algorithms. His Mathematical Optimization Group works on problems ranging from abstract theory to practical applications. The group investigates abstract classes of non-convex optimization problems, with an emphasis on the non-smooth problems that arise naturally in these applications. Non-smoothness poses fascinating challenges for theoretical analysis and practical implementation alike.
Learning Optimal Algorithms for Parametric Optimization Problems
Cluster-funded research project
- Team member: Michael Sucker (PhD Student)
- Project duration: Apr 2022 - Mar 2025
Aim of this project
Parameter estimation problems are particularly challenging in Machine Learning and Computer Vision, due to the high dimensionality of the data and parameter spaces and the natural need to model applications as non-smooth optimization problems. As a consequence, the feedback needed to adjust parameters is only obtained after a demanding computation, and the complexity of this adjustment requires iterative solvers. Similar optimization problems must therefore be solved many times, sometimes thousands or millions of times. The goal of this project is to reduce this computational cost by exploiting the similarity of the problems, viewed as a family of parameterized optimization problems. We develop optimal algorithms with theoretical guarantees for specifically defined classes of such parameterized optimization problems in a Bayesian learning framework, including non-smooth optimization problems.
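The core idea — tuning an algorithm once on a sample of related problem instances and then reusing it on new instances from the same family — can be illustrated with a toy sketch. This is an assumption-laden illustration, not the project's actual method: the problem family (random least-squares instances), the algorithm (gradient descent), and the grid search standing in for the Bayesian learning framework are all simplifications chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_problem():
    # Hypothetical parametric family: random least-squares instances
    # min_x 0.5 * ||A x - b||^2, standing in for a real application class.
    A = rng.normal(size=(20, 10))
    b = rng.normal(size=20)
    return A, b

def run_gd(A, b, step, iters=30):
    # Plain gradient descent with a fixed step size; returns the
    # final objective value reached on this instance.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * A.T @ (A @ x - b)
    return 0.5 * np.sum((A @ x - b) ** 2)

# "Learn" a shared step size on a training sample of instances by
# grid search -- a crude stand-in for learning an optimal algorithm.
train = [sample_problem() for _ in range(50)]
grid = np.linspace(0.001, 0.05, 40)
avg_loss = [np.mean([run_gd(A, b, s) for A, b in train]) for s in grid]
best_step = grid[int(np.argmin(avg_loss))]

# The learned step size is reused on fresh instances from the same
# family, with no per-instance tuning.
test = [sample_problem() for _ in range(10)]
test_loss = np.mean([run_gd(A, b, best_step) for A, b in test])
print(best_step, test_loss)
```

The point of the sketch is the amortization: the tuning cost is paid once over the training sample, while every future instance benefits. The project replaces this naive grid search with a Bayesian learning framework that yields theoretical guarantees, and handles non-smooth problem classes that plain gradient descent cannot.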