Bachelor of Science (B.Sc.) & Master of Science (M.Sc.):

Available Thesis Topics

Both Bachelor and Master theses are extensive pieces of work that require a full-time commitment over several months. They are thus also deeply personal decisions for each student. This page lists a few topics we are currently seeking to address in the research group. If you find one of them appealing, please contact the person listed for each project. If you have an idea of your own — one that you are passionate about, and which fits into the remit of the Chair for the Methods of Machine Learning — please feel invited to pitch your idea. To do so, first contact Philipp Hennig to make an appointment for a first meeting.

BO Arena: an open-source benchmarking platform for Bayesian optimisation (BSc/MSc Thesis/Project)

Supervisor: Thomas Christie & Colin Doumont

Bayesian optimisation (BO) has evolved from toy problems to critical real-world applications. Tech giants like Meta (e.g. the Adaptive Experimentation team) and startups like Secondmind and BigHat Biosciences now rely on it for tasks ranging from electric motor design to molecular search.

Despite this progress, it remains unclear which algorithms actually perform best. Papers often use inconsistent benchmarks or weak baselines, making comparisons difficult. Moreover, researchers waste significant time re-implementing and re-running the same baselines for their papers.

To solve this, we are building BO Arena: an online benchmarking platform similar to LMArena or TabArena. The platform will serve three key roles:
1. A Live Leaderboard: Giving practitioners a clear view of the best methods for specific problems.
2. A Results Repository: Allowing researchers to download existing baseline results instead of re-running them.
3. A Problem Repository: Hosting a variety of challenging and realistic benchmark problems for researchers to quickly and easily test their algorithms on.

The long-term goal is to grow BO Arena into a widely used, well-maintained, and evolving open-source project for the BO community. Multiple students will work on this project, and researchers in industry have expressed potential interest in collaborating. Using the platform, we plan to perform large-scale analyses of various algorithms, culminating in a publication, an open-source codebase and a live website.
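For context, the loop that every benchmarked BO method implements is small, and the platform's job is essentially to run it fairly across problems. The sketch below is purely illustrative (a NumPy GP surrogate with a fixed RBF kernel and a GP-LCB acquisition; none of the names or constants are part of the planned platform) and minimises a toy 1-D function:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard zero-mean GP regression equations.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)  # k(x, x) = 1 for RBF
    return mu, np.sqrt(var)

def objective(x):
    # Toy black-box function to minimise; its minimum is near x ≈ -0.43.
    return np.sin(3 * x) + x**2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=3)      # initial design
y = objective(X)
grid = np.linspace(-1, 2, 500)      # candidate pool

for _ in range(20):
    # Fit the surrogate, then pick the point minimising the lower
    # confidence bound mu - 2 * sigma (GP-LCB acquisition).
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmin(mu - 2.0 * sigma)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print("best x:", X[y.argmin()], "best value:", y.min())
```

A real benchmark entry would of course use a full BO stack (e.g. BoTorch) with learned hyperparameters; the point is only the fit/acquire/evaluate structure that the leaderboard would compare across methods and problems.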

Prerequisites:
- Strong coding skills (including experience working with larger codebases).
- Familiarity with PyTorch.

Nice to have:
- Familiarity with Bayesian optimisation, Gaussian processes, etc. (or a strong interest in learning these).
- Experience running many jobs in parallel on a cluster.
- Knowledge of web development (for the online leaderboard).

Physics-Informed Gaussian Operators (Project or MSc)

Supervisor: Tim Weiland 

The year is 2025. Big tech companies churn out humongous deep learning models on what feels like a weekly basis. In physical applications, people are slowly buying into the idea that all they need is yet another stack of layers in their neural networks. In these dark times, one small and humble hero resists: the Gaussian process. Gaussian processes are naturally able to enforce physical conservation laws, and as such are often marketed as a great fit for physical applications. Yet at the same time, it is undeniable that the "amortization aspect" of neural operators (i.e. the property that they "learn" from related simulations) is incredibly powerful. The goal of this project is to combine both of these properties in one model, through a deep, physics-informed Gaussian operator. Interested? Reach out to Tim for the details.
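To make the claim that GPs can enforce such constraints concrete: a GP stays a GP under linear operators such as differentiation, so a constraint like f'(0) = 0 can be imposed exactly by conditioning on it as a noiseless observation. A minimal NumPy sketch (kernel, lengthscale and data are all illustrative choices, not part of the project):

```python
import numpy as np

l = 0.5  # kernel lengthscale (illustrative choice)

def k(a, b):
    # RBF kernel k(x, x') = exp(-(x - x')^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / l) ** 2)

def dk_dx2(a, b):
    # Cross-covariance between f(a) and f'(b): differentiate k in its
    # second argument (a GP remains a GP under linear operators).
    d = a[:, None] - b[None, :]
    return (d / l**2) * np.exp(-0.5 * (d / l) ** 2)

# Training data, plus the site x0 where we enforce the constraint f'(x0) = 0.
X = np.array([-1.0, 0.6, 1.2])
y = np.sin(X)
x0 = np.array([0.0])

# Joint Gram matrix over the augmented observation vector [f(X), f'(x0)].
Kff = k(X, X)
Kfd = dk_dx2(X, x0)
Kdd = np.array([[1.0 / l**2]])     # var(f'(x0)) for the RBF kernel
K = np.block([[Kff, Kfd], [Kfd.T, Kdd]]) + 1e-10 * np.eye(4)
t = np.concatenate([y, [0.0]])     # observe f(X) = y and f'(x0) = 0

def posterior_mean(xs):
    Ks = np.hstack([k(xs, X), dk_dx2(xs, x0)])
    return Ks @ np.linalg.solve(K, t)

# The posterior now satisfies the constraint: its mean is flat at x0.
h = 1e-3
slope = (posterior_mean(np.array([h])) - posterior_mean(np.array([-h]))) / (2 * h)
print("numerical slope at 0:", slope[0])
```

The same mechanism, with conservation laws in place of the toy derivative constraint, is what makes GPs attractive for physical modelling; the amortization part is what the project adds on top.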

Prerequisites:
- Solid prior knowledge of probabilistic machine learning, particularly GPs
- Prior experience with PyTorch / JAX / …; pick your poison

Interactive ODE Filter Visualisation (BSc)

Supervisor: Paul Fischer 

Ordinary differential equations (ODEs) are fundamental to modeling dynamical systems. However, most cannot be solved analytically and thus require numerical approximation. ODE filters are a recent class of probabilistic numerical methods that reinterpret the problem of solving ODE initial value problems as a Bayesian inference task, which can then be solved using filtering and smoothing algorithms. Unlike classical solvers, ODE filters quantify uncertainty, making them particularly appealing for real-world applications. However, because they are so recent, there are not yet many accessible resources or intuitive visual explanations available. In this project, you will address this gap by developing an interactive ODE filter visualization similar to the following visualizations of Gaussian processes: https://smlbook.org/GP/ and https://www.infinitecuriosity.org/vizgp/. By doing so, you will contribute to this emerging research field, gain experience in data visualization, and learn about probabilistic machine learning.
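As a minimal illustration of what such a visualization would animate (not a reference implementation; the vector field, step size and prior order are all illustrative choices), the sketch below solves the logistic ODE with a simple ODE filter: an extended Kalman filter over a once-integrated Wiener process prior, where the "measurement" at each step is the ODE residual x' - f(x), which is defined to be zero:

```python
import numpy as np

def f(x):
    # Logistic growth vector field x' = x (1 - x).
    return x * (1.0 - x)

def df(x):
    # Its derivative, used for the EKF linearisation.
    return 1.0 - 2.0 * x

n_steps = 1000
h = 10.0 / n_steps
A = np.array([[1.0, h], [0.0, 1.0]])                   # IWP(1) transition
Q = np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])    # process noise

x0 = 0.1
m = np.array([x0, f(x0)])       # state (x, x'), initialised consistently
P = 1e-10 * np.eye(2)

ts, means, stds = [0.0], [m[0]], [0.0]
for i in range(n_steps):
    # Predict under the integrated-Wiener-process prior.
    m, P = A @ m, A @ P @ A.T + Q
    # "Observe" the ODE: the residual x' - f(x) should be zero.
    r = m[1] - f(m[0])
    H = np.array([-df(m[0]), 1.0])
    S = H @ P @ H
    K = P @ H / S
    m, P = m - K * r, P - np.outer(K, H @ P)
    ts.append((i + 1) * h)
    means.append(m[0])
    stds.append(np.sqrt(max(P[0, 0], 0.0)))

truth = 1.0 / (1.0 + (1.0 / x0 - 1.0) * np.exp(-np.array(ts)))
err = np.max(np.abs(np.array(means) - truth))
print("max abs error:", err)
```

The `means` and `stds` arrays are exactly what an interactive visualization would plot as a posterior mean with an uncertainty band, updating live as the filter steps through the time grid.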

Requirements:

  • Interest in probabilistic machine learning
  • Interest in data visualization

Optimal Structural Design with Probabilistic PDE Solvers (Project / MSc)

Supervisor: Bernardo Fichera

Airplane design requires selecting the appropriate structural configuration and materials. Here, appropriate means achieving the desired performance across the intended flight regime. From an aerodynamic perspective, performance is strongly influenced by the structural deformations of the aircraft in flight. The central design challenge, then, is: what structural configuration (e.g., wing length) and material properties (e.g., Young’s modulus) will produce the desired deformed shape during flight? Addressing this challenge amounts to solving the inverse aeroelastic problem -- that is, determining the structural and material design from the coupled interaction between aerodynamics and structural deformation. Both the structural and aerodynamic aspects of this problem involve solving PDEs. We aim to leverage probabilistic PDE solvers to tackle such inverse problems.

This approach offers two main advantages:

  • Natural incorporation of uncertainty quantification into the inverse solution.
  • A framework that recasts the inverse problem as a form of Bayesian optimization.
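As a toy analogue of this setup (far simpler than the coupled aeroelastic problem; the forward model and all numbers are purely illustrative), the Bayesian structure of such an inverse problem can be seen by inferring a material parameter from a noisy deformation measurement through a PDE-derived forward model, here the closed-form Euler–Bernoulli tip deflection of a cantilever:

```python
import numpy as np

# Forward model: Euler-Bernoulli tip deflection of a cantilever beam,
# delta = F L^3 / (3 E I). All quantities below are illustrative.
F, L, I = 100.0, 2.0, 8e-8        # load [N], length [m], area moment [m^4]

def forward(E):
    return F * L**3 / (3.0 * E * I)

# Synthetic "measurement": an aluminium-like truth plus observation noise.
rng = np.random.default_rng(1)
E_true = 70e9                      # Young's modulus [Pa]
sigma = 1e-3                       # measurement noise std [m]
delta_obs = forward(E_true) + sigma * rng.normal()

# Bayesian inversion on a grid: flat prior over a plausible range of E,
# Gaussian likelihood from the noise model.
E_grid = np.linspace(30e9, 150e9, 2000)
log_lik = -0.5 * ((delta_obs - forward(E_grid)) / sigma) ** 2
post = np.exp(log_lik - log_lik.max())
post /= post.sum() * (E_grid[1] - E_grid[0])   # normalise to a density over E

E_map = E_grid[post.argmax()]
print("MAP estimate of E [GPa]:", E_map / 1e9)
```

In the thesis, the closed-form forward map is replaced by a probabilistic PDE solver, so that solver error enters the posterior alongside measurement noise, which is precisely the uncertainty-quantification advantage listed above.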
     

Applications of Practical Hessian Approximations in JAX (M.Sc. Project)

Supervisor: Joanna Sliwa

A Hessian matrix captures the second-order partial derivatives of a model’s loss function with respect to its parameters. While it provides valuable information for optimization, calibration, and uncertainty estimation, computing the exact Hessian is infeasible for modern deep networks. This project focuses on evaluating scalable Hessian approximations in JAX. Using the laplax library, we will conduct exploratory studies in some applications (depending on interest) such as continual and transfer learning, curvature-aware model merging and decomposition, and LoRA fine-tuning for large language models. The objective is to both showcase and extend the practical use cases of Hessian-based methods by contributing implementations of common applications directly into the laplax library.
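The key trick behind all such scalable approximations is that Hessian-vector products are cheap even when the full Hessian is not. laplax obtains them via JAX autodiff; the dependency-free sketch below (a hypothetical logistic-regression example, not laplax code) uses central differences of the gradient instead, to illustrate the same matrix-free idea:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss, labels y in {-1, +1}.
    return -(X.T @ (y * sigmoid(-y * (X @ w)))) / len(y)

def hvp_fd(w, v, X, y, eps=1e-5):
    # Matrix-free Hessian-vector product via central differences of the
    # gradient: two gradient evaluations, no d x d matrix ever formed.
    return (grad(w + eps * v, X, y) - grad(w - eps * v, X, y)) / (2 * eps)

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(rng.normal(size=n))
w = rng.normal(size=d)
v = rng.normal(size=d)

# Exact Hessian of the logistic loss, for reference: (1/n) X^T D X,
# with D_ii = s_i (1 - s_i) and s = sigmoid(X w). Only feasible here
# because d is tiny.
s = sigmoid(X @ w)
H = (X.T * (s * (1 - s))) @ X / n

print(np.max(np.abs(hvp_fd(w, v, X, y) - H @ v)))
```

In JAX the finite-difference stand-in would be replaced by an exact `jvp`-of-`grad` composition; the project then builds curvature approximations and their applications on top of such products.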

Prerequisites:
> Prior experience with JAX
> Familiarity with neural network training and optimization
> Some exposure to second-order methods is beneficial but not strictly required

Towards Full Deep Learning Hessians (B.Sc./ M.Sc.)

Supervisor: Andres Fernandez

The deep learning Hessian is a valuable and interesting linear operator, with applications spanning optimization, pruning, uncertainty quantification, and loss landscape analysis. Unfortunately, it is intractable for most neural networks due to its gargantuan size, and practitioners must resort to tractable approximations that are not necessarily accurate (last-layer, diagonal, GGN, Kronecker). However, research suggests that deep learning Hessians may have low-rank structure. The aim of this project is to leverage recent advances in sketched methods from randomized linear algebra to obtain accurate low-rank approximations of Hessians at scale. This will involve working with deep learning setups and performing large-scale experiments on the cluster, comparing different Hessian approximations.
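A minimal sketch of the sketching idea (pure NumPy, with a synthetic symmetric matrix standing in for a real Hessian that would only be accessed through Hessian-vector products):

```python
import numpy as np

def sketched_lowrank(matvec, dim, rank, oversample=10, rng=None):
    # Randomized range finder: touch the operator only through
    # matrix-vector products, as one must with a deep learning Hessian.
    rng = rng if rng is not None else np.random.default_rng(0)
    Omega = rng.normal(size=(dim, rank + oversample))
    Y = matvec(Omega)                 # sketch: Y = H @ Omega
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the range of Y
    B = matvec(Q).T                   # B = (H Q)^T = Q^T H for symmetric H
    return Q, B                       # low-rank approximation: H ≈ Q @ B

# Stand-in for a Hessian: a symmetric PSD matrix with exact low rank.
rng = np.random.default_rng(1)
dim, true_rank = 300, 20
G = rng.normal(size=(dim, true_rank))
H = G @ G.T

Q, B = sketched_lowrank(lambda V: H @ V, dim, rank=true_rank, rng=rng)
err = np.linalg.norm(H - Q @ B) / np.linalg.norm(H)
print("relative error:", err)
```

For a genuinely low-rank operator the range finder recovers it to numerical precision with only rank + oversample matrix-vector products; the thesis investigates how well this carries over to real Hessians, whose spectra decay but are not exactly low-rank.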

Prerequisites: 

> Python and affinity for scalable and maintainable code
> Rudiments of linear algebra
> Rudiments of deep learning
> (Ideally) Rudiments of optimization (convex and non-convex)