Supervisors: Thomas Christie & Colin Doumont
Bayesian optimisation (BO) has moved beyond toy problems into critical real-world applications. Tech giants like Meta (e.g. its Adaptive Experimentation team) and startups like Secondmind and BigHat Biosciences now rely on it for tasks ranging from electric motor design to molecular search.
Despite this progress, it remains unclear which algorithms actually perform best. Papers often use inconsistent benchmarks or weak baselines, making fair comparisons difficult. Moreover, researchers waste significant time re-implementing and re-running the same baselines for each new paper.
To address these problems, we are building BO Arena: an online benchmarking platform similar to LMArena or TabArena. The platform will serve three key roles:
1. A Live Leaderboard: Giving practitioners a clear view of the best methods for specific problems.
2. A Results Repository: Allowing researchers to download existing baseline results instead of re-running them.
3. A Problem Repository: Hosting a variety of challenging and realistic benchmark problems that researchers can quickly and easily test their algorithms on (a minimal sketch of such a baseline run follows this list).
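To make the idea concrete, here is a minimal sketch of the kind of baseline run the platform would standardise and host results for: a plain BO loop written with BoTorch (the PyTorch-based BO library). This is an illustration, not BO Arena code; the toy 1-D objective stands in for a hosted benchmark problem.

```python
import torch
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood


# Stand-in for a hosted benchmark problem: a 1-D function to maximise.
def objective(x: torch.Tensor) -> torch.Tensor:
    return -((x - 0.3) ** 2)


bounds = torch.tensor([[0.0], [1.0]], dtype=torch.double)
train_x = torch.rand(5, 1, dtype=torch.double)  # initial random design
train_y = objective(train_x)

for _ in range(10):
    # Fit a GP surrogate to the observations gathered so far.
    gp = SingleTaskGP(train_x, train_y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

    # Maximise the acquisition function to choose the next query point.
    acqf = LogExpectedImprovement(gp, best_f=train_y.max())
    candidate, _ = optimize_acqf(
        acqf, bounds=bounds, q=1, num_restarts=5, raw_samples=64
    )

    # Evaluate the objective and append the new observation.
    train_x = torch.cat([train_x, candidate])
    train_y = torch.cat([train_y, objective(candidate)])

print(f"Best value found: {train_y.max().item():.4f}")
```

Every paper re-implements some variant of this loop and re-runs it on its own choice of problems; hosting the problems and the resulting curves in one place is what would remove that duplicated effort.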
The long-term goal is to grow BO Arena into a widely used, well-maintained, and evolving open-source project for the BO community. Multiple students will work on the project, and industry researchers have expressed preliminary interest in collaborating. Using the platform, we plan to perform large-scale analyses of a range of algorithms, culminating in a publication, an open-source codebase, and a live website.
Prerequisites:
- Strong coding skills (including experience working with larger codebases).
- Familiarity with PyTorch.
Nice to have:
- Familiarity with Bayesian optimisation, Gaussian processes, etc. (or a strong interest in learning these).
- Experience running many jobs in parallel on a cluster.
- Knowledge of web development (for the online leaderboard).