Bachelor of Science (B.Sc.) & Master of Science (M.Sc.):
Available Thesis Topics
Both Bachelor and Master theses are extensive pieces of work that require a full-time commitment over several months. They are thus also deeply personal decisions for each student. This page lists a few topics we are currently seeking to address in the research group. If you find one of them appealing, please contact the person listed for that project. If you have an idea of your own, one that you are passionate about and that fits into the remit of the Chair for the Methods of Machine Learning, please feel invited to pitch your idea. To do so, first contact Philipp Hennig to make an appointment for a first meeting.
We want to extend the functionality of our backpropagation library BackPACK, presented at ICLR 2020.
It is a high-quality software library that uses automatic differentiation to compute additional, novel numerical quantities that aim to improve the training of deep neural networks. Applicants should be interested in automatic differentiation and be experienced in PyTorch and Python. Students will learn about the operations of deep neural networks and their autodifferentiation internals, as well as the quantities extracted by BackPACK. The projects thus offer an opportunity to gain expert knowledge of the algorithmic side of deep learning. Both are challenging projects, requiring familiarity with the manipulation of tensors (indices!) and with multivariate calculus (automatic differentiation). A significant amount of time will be spent on software engineering, as the work will be fully integrated into BackPACK and hopefully released in a future version. Results will be presented in the form of runtime benchmarks similar to those in the original work. Students are encouraged to investigate further applications.
 F. Dangel, F. Kunstner & P. Hennig: BackPACK: Packing more into Backprop (2020)
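To give a flavour of the quantities involved, the following NumPy toy example (the linear-regression setup and all names are illustrative assumptions, not BackPACK's API) computes per-sample gradients and their element-wise variance, two first-order quantities of the kind BackPACK extracts during a single backward pass:

```python
import numpy as np

# Toy linear model: per-sample loss_n(w) = 0.5 * (x_n @ w - y_n)^2.
rng = np.random.default_rng(0)
N, D = 8, 3
X = rng.normal(size=(N, D))
y = rng.normal(size=N)
w = rng.normal(size=D)

residuals = X @ w - y                      # shape (N,)
grad_individual = residuals[:, None] * X   # shape (N, D): one gradient per sample
grad_mean = grad_individual.mean(axis=0)   # the usual mini-batch gradient
grad_var = grad_individual.var(axis=0)     # element-wise gradient variance

# Sanity check against the analytic gradient of the mean loss.
assert np.allclose(grad_mean, X.T @ residuals / N)
```

Standard backpropagation only yields `grad_mean`; recovering quantities like `grad_individual` and `grad_var` efficiently, without a Python loop over the batch, is exactly the kind of question these projects touch.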
Currently no projects available. Please feel free to reach out to Felix Dangel with your own ideas.
Linear systems A x = b are the bedrock of virtually all numerical computation. Machine learning poses specific challenges for the solution of such systems due to their scale, characteristic structure, and stochasticity. Datasets are often so large that data-subsampling approaches must be employed, inducing noise on A. In fact, usually only noise-corrupted matrix-vector products are available. Typical examples are large-scale empirical risk minimization problems. Classic linear solvers such as conjugate gradients (CG) typically fail to solve such systems accurately, since they assume matrix-vector products that are exact up to machine precision.
Probabilistic linear solvers aim to address these challenges by treating the solution of the linear system itself as an inference task. This allows the incorporation of prior (generative) knowledge about the system, e.g. about its eigenspectrum, and enables the solution of noisy systems.
 P. Hennig: Probabilistic Interpretation of Linear Solvers, SIAM Journal on Optimization 25, 234–260 (2015)
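For reference, a minimal CG solver that accesses A only through matrix-vector products can be sketched as follows (illustrative NumPy code; the test problem and noise level are assumptions, and the sketch is not taken from the paper above):

```python
import numpy as np

def cg(A_mv, b, tol=1e-10, maxiter=None):
    """Conjugate gradients, touching A only through products A_mv(v)."""
    x = np.zeros_like(b)
    r = b - A_mv(x)            # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(maxiter or len(b)):
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20 * np.eye(20)        # symmetric positive definite
b = rng.normal(size=20)

x_cg = cg(lambda v: A @ v, b)        # exact products: matches a direct solve
x_noisy = cg(lambda v: A @ v + 0.1 * rng.normal(size=20), b)
# with noise-corrupted products, CG typically stalls far above machine precision
```

The noisy variant is the regime probabilistic linear solvers target: instead of stalling, they return a posterior over x that reflects the remaining uncertainty.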
Currently no projects available. Feel free to reach out to Jonathan Wenger with your own ideas.
Bayesian quadrature (BQ) treats numerical integration as an inference problem by constructing posterior measures over integrals given observations, i.e. evaluations of the integrand. Besides providing sound uncertainty estimates, the probabilistic approach permits the inclusion of prior knowledge about properties of the function to be integrated and leverages active learning schemes for node selection as well as transfer learning schemes, e.g. when multiple similar integrals have to be jointly estimated.
Supervisor: Maren Mahsereci
Bayesian quadrature uses a Gaussian process as a surrogate model, which is then integrated with respect to an integration measure. This procedure is analytic only for a few combinations of covariance functions and integration measures. Hence, the re-weighting trick is often used, which is similar to using a proposal distribution in importance sampling. The student will analyse empirically how much influence the re-weighting trick has on the solution of the Bayesian quadrature routine and compare it to the influence of the proposal distribution in importance sampling.
 C. Rasmussen & Z. Ghahramani: Bayesian Monte Carlo (NeurIPS 2003)
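For one of the analytic combinations, an RBF covariance paired with a Gaussian integration measure, the closed-form kernel embedding makes plain Bayesian quadrature a few lines of NumPy (lengthscale, node placement, and integrand below are illustrative assumptions):

```python
import numpy as np

ell, sigma = 1.0, 1.0                       # kernel lengthscale, measure std

def k(a, b):
    """RBF covariance matrix between node sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell**2))

def kernel_mean(x):
    """z_i = integral of k(x', x_i) against N(x' | 0, sigma^2), closed form."""
    s2 = ell**2 + sigma**2
    return ell / np.sqrt(s2) * np.exp(-(x**2) / (2 * s2))

X = np.linspace(-3.0, 3.0, 10)              # quadrature nodes
f = X**2                                    # integrand evaluations
K = k(X, X) + 1e-9 * np.eye(len(X))         # jitter for numerical stability
weights = np.linalg.solve(K, kernel_mean(X))

estimate = weights @ f                      # posterior mean of the integral
# the true value of the integral of x^2 against N(x | 0, 1) is 1
```

When the kernel/measure pair has no closed-form embedding, one re-weights the integrand against a tractable measure instead; comparing the influence of that choice with the proposal choice in importance sampling is the core of the project.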
Ordinary differential equations (ODEs) are central to mathematical models of physical phenomena. For example, the spread of a disease in a population can be predicted by approximating the solution of an ODE. Classical numerical analysis has developed a rich body of methods for this task.
By taking a probabilistic perspective, it is possible to derive an algorithm that returns a probability distribution over the ODE solution. The variance of this posterior distribution not only conveys the numerical accuracy of the approximation, but can also be leveraged inside a larger chain of computation, which has proven useful, for instance, in parameter inference problems involving ODEs.
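The idea can be sketched as a small Kalman filter over the ODE solution (an EK0-style filter under a once-integrated Wiener-process prior; the test equation, step size, and prior scale below are illustrative assumptions):

```python
import numpy as np

def ek0_solve(f, y0, t_end, h, q=1.0):
    """Filtering-based probabilistic solver for y' = f(y), state (y, y')."""
    A = np.array([[1.0, h], [0.0, 1.0]])             # prior transition
    Q = q * np.array([[h**3 / 3, h**2 / 2],
                      [h**2 / 2, h]])                # process noise
    H = np.array([0.0, 1.0])                         # "observe" that y' equals f(y)
    m = np.array([y0, f(y0)])
    P = np.zeros((2, 2))
    for _ in range(round(t_end / h)):
        m, P = A @ m, A @ P @ A.T + Q                # predict
        S = H @ P @ H                                # innovation variance
        K = P @ H / S                                # Kalman gain
        m = m + K * (f(m[0]) - m[1])                 # update mean
        P = P - np.outer(K, H @ P)                   # update covariance
    return m, P

m, P = ek0_solve(lambda y: -y, y0=1.0, t_end=1.0, h=0.01)
# m[0] approximates exp(-1); P[0, 0] is the solver's own error estimate
```

The returned mean approximates the solution, while the posterior variance `P[0, 0]` is the solver's own quantification of its numerical error, which is what downstream computations can exploit.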
Supervisor: Nicholas Krämer
Straight lines on manifolds, known as geodesics, are essential to computational geometry and therefore play a central role in modern data analysis. Geodesics can usually not be computed analytically. Instead, one has to numerically solve boundary value problems (BVPs), i.e. ordinary differential equations subject to an initial condition and a terminal condition. In the proposed project, the student investigates the potential of recent probabilistic BVP solvers for the computation of geodesics [2,3]. The goal is to determine how much one gains in terms of (i) computational speed and (ii) quantification of (numerical) uncertainty by using probabilistic BVP solvers.
 [1] N. Krämer & P. Hennig: Linear-time probabilistic solutions of boundary value problems (arXiv:2106.07761)
 [2] G. Arvanitidis, S. Hauberg, P. Hennig & M. Schober: Fast and robust shortest paths on manifolds learned from data (AISTATS 2019)
 [3] P. Hennig & S. Hauberg: Probabilistic solution to differential equations and their application to Riemannian statistics (AISTATS 2014)
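As a point of comparison, the classical way to attack such a BVP is single shooting: guess the unknown initial slope, integrate the resulting initial value problem, and adjust the guess until the terminal condition holds. A sketch on a toy problem (the equation and solver settings are illustrative assumptions, not geodesic equations):

```python
import numpy as np

def rk4_ivp(s, n=1000):
    """Integrate y'' = -y on [0, pi/2] with y(0)=0, y'(0)=s; return y(pi/2)."""
    h = (np.pi / 2) / n
    state = np.array([0.0, s])              # (y, y')

    def f(u):
        return np.array([u[1], -u[0]])      # first-order form of y'' = -y

    for _ in range(n):                      # classical Runge-Kutta 4 steps
        k1 = f(state)
        k2 = f(state + h / 2 * k1)
        k3 = f(state + h / 2 * k2)
        k4 = f(state + h * k3)
        state = state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[0]

# Bisect on the initial slope s until the terminal condition y(pi/2) = 1 holds.
lo, hi = 0.0, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if rk4_ivp(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)   # the exact solution sin(t) has slope y'(0) = 1
```

A probabilistic BVP solver replaces this guess-and-correct loop with inference over the whole path, returning an uncertainty estimate alongside the solution.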
Bayesian inference is a principled way to enable deep networks to quantify their predictive uncertainty. The key idea is to put a prior over their weights and apply Bayes' rule given a dataset. The resulting posterior can then be marginalized to obtain the predictive distribution. However, exact Bayesian inference in deep networks is intractable, so one must resort to approximate methods such as Laplace approximations, variational Bayes, and Markov chain Monte Carlo. One focus is to design cost-effective, scalable, yet highly performant approximate Bayesian neural networks, both in terms of predictive accuracy and uncertainty quantification.
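The simplest member of that family, a Laplace approximation, can be sketched on a one-parameter logistic-regression model (a stand-in assumption, not a deep network): find the MAP weight, then use the curvature of the negative log-posterior at the MAP as a Gaussian precision:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
w_true = 2.0
y = (rng.random(50) < 1 / (1 + np.exp(-w_true * x))).astype(float)

prior_prec = 1.0                 # N(0, 1) prior on the weight
w = 0.0
for _ in range(100):             # Newton's method for the MAP estimate
    p = 1 / (1 + np.exp(-w * x))
    grad = x @ (p - y) + prior_prec * w        # gradient of neg. log-posterior
    hess = x**2 @ (p * (1 - p)) + prior_prec   # its curvature (always > 0)
    w -= grad / hess

posterior_var = 1 / hess         # Laplace: posterior is approx. N(w_MAP, 1/hess)
```

In a deep network, `hess` becomes an intractable matrix over millions of weights; how to approximate it cheaply without sacrificing predictive accuracy or uncertainty quality is the research question above.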
Currently no projects available. Feel free to reach out with your own ideas.