Supervisor: Thomas Christie
A central promise of probabilistic approaches to regression is the ability to make well-informed decisions based on uncertainty. At the core of such decision-making tasks typically lies a surrogate model for the function(s) of interest, which yields a distribution over function values at each point in the domain. In recent years, a plethora of model classes and approximations within these classes has been proposed, with examples including Gaussian processes (GPs), Bayesian neural networks (BNNs) and neural processes.
This has led to an increasingly complicated landscape, which is difficult for researchers within machine learning to assess, let alone for the practitioners we hope will use our models! The goal of this project is to provide a central benchmark for evaluating probabilistic models in the context of regression. We would initially focus on obtaining a standardised setup for the UCI benchmark, with a leaderboard, and then add some challenging, (ideally) real-world Bayesian optimisation (BO) problems. We will then use the results to answer some questions of interest. A rough sketch of the kind of evaluation a standardised setup would involve is given below. Contact Thomas for more details.
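As an illustration only, and not part of any existing benchmark codebase, the sketch below fits an exact GP surrogate in plain PyTorch on a toy random split and reports test RMSE and negative log predictive density (NLPD), two metrics commonly reported on the UCI regression benchmark. All function names, hyperparameter values, and the toy data are illustrative assumptions.

```python
# Minimal sketch of a UCI-style evaluation of a probabilistic regression model.
# The surrogate here is an exact GP with an RBF kernel and fixed hyperparameters;
# a real benchmark would tune these and loop over datasets and splits.
import math
import torch


def rbf_kernel(x1, x2, lengthscale=1.0, outputscale=1.0):
    # Squared-exponential kernel k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2)).
    sqdist = torch.cdist(x1, x2).pow(2)
    return outputscale * torch.exp(-0.5 * sqdist / lengthscale**2)


def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Exact GP posterior predictive mean and variance under a zero prior mean.
    K = rbf_kernel(x_train, x_train) + noise * torch.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y_train.unsqueeze(-1), L)
    mean = (K_s.T @ alpha).squeeze(-1)
    v = torch.linalg.solve_triangular(L, K_s, upper=False)
    var = K_ss.diag() - v.pow(2).sum(0) + noise  # predictive variance incl. observation noise
    return mean, var


def evaluate(mean, var, y_test):
    # Metrics commonly reported on UCI regression: RMSE and Gaussian NLPD.
    rmse = (mean - y_test).pow(2).mean().sqrt()
    nlpd = 0.5 * (torch.log(2 * math.pi * var) + (y_test - mean).pow(2) / var).mean()
    return rmse.item(), nlpd.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-in for a UCI dataset: noisy sine, random 80/20 train/test split.
    x = torch.linspace(-3, 3, 100).unsqueeze(-1)
    y = torch.sin(x).squeeze(-1) + 0.1 * torch.randn(100)
    perm = torch.randperm(100)
    x, y = x[perm], y[perm]
    x_tr, y_tr, x_te, y_te = x[:80], y[:80], x[80:], y[80:]
    mean, var = gp_posterior(x_tr, y_tr, x_te)
    print("RMSE, NLPD:", evaluate(mean, var, y_te))
```

The benchmark itself would wrap a loop like this around many datasets, splits, and model classes (GPs, BNNs, neural processes), with results feeding a leaderboard; the BO extension would additionally track regret over sequentially chosen query points.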
Prerequisites:
- Familiarity with PyTorch and, more generally, experience working with large codebases.
- Nice to have: familiarity with at least one of GPs, BNNs, or BO.