Meet the Fellows

Fellows 2024

Fafamé Edwige Akpoly
Benin

Supervisor: Kira Rehfeld


Stephen Kiilu
Kenya

Supervisor: Carsten Eickhoff


Immaculate Wanjiru Kimani
Kenya

Supervisor: Dominik Papies


Pauline Ornela Megne Choudja
Cameroon

Supervisor: Thomas Küstner


Berthine Nyunga Mpinda
Democratic Republic of the Congo

Supervisor: Hendrik Lensch

Fellows 2023

Amel Abdulraheem
Sudan

Supervisor: Christian Baumgartner

Albert Agisha
Democratic Republic of the Congo

Supervisor: Ulrike von Luxburg

Machine learning and statistics for climate networks
In this project, we investigate novel methods to assess the uncertainty in climate network construction and estimation procedures. The goal is to create uncertainty estimates not by network subsampling, but by bootstrapping the underlying time series directly. Different approaches will be compared at both the theoretical and the implementation level and connected to background knowledge in climate science.
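
As a rough illustration of the idea (not the project's actual code), the sketch below builds a simple correlation-threshold climate network and estimates how stable each edge is under a moving-block bootstrap of the underlying time series. The threshold, block length, and function names are illustrative assumptions.

```python
import numpy as np

def block_bootstrap(series, block_len, rng):
    """Resample a (T, N) multivariate time series with a moving-block bootstrap,
    keeping all N grid points aligned so spatial dependence is preserved."""
    T = series.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
    return series[idx]

def correlation_network(series, threshold):
    """Build an adjacency matrix by thresholding absolute Pearson correlations."""
    corr = np.corrcoef(series.T)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def edge_probabilities(series, threshold=0.5, block_len=30, n_boot=200, seed=0):
    """For each potential edge, estimate the fraction of bootstrap replicates
    in which that edge appears: a simple uncertainty measure for the network."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((series.shape[1], series.shape[1]))
    for _ in range(n_boot):
        boot = block_bootstrap(series, block_len, rng)
        counts += correlation_network(boot, threshold)
    return counts / n_boot
```

Resampling whole blocks, rather than individual time steps, preserves short-range temporal autocorrelation that plain i.i.d. resampling would destroy.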

Ifeoma Veronica Nwabufo
Nigeria

Supervisor: Philipp Berens

Unsupervised learning in medical data science
The project will look at unsupervised learning methods recently developed in the lab to embed medical images via contrastive learning. It will explore the resulting embeddings, study which data transformations are important, and examine whether the embeddings can be related to disease or demographic factors.
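
As a hedged illustration of the last step only, the snippet below fits a linear probe on frozen embeddings to test whether a disease or demographic factor is linearly decodable from them. This is a generic scikit-learn sketch, not the lab's pipeline, and the random arrays merely stand in for real image embeddings and labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_embedding(embeddings, labels, n_folds=5):
    """Fit a linear probe on frozen contrastive embeddings and report
    cross-validated accuracy for one factor (e.g. a diagnosis or age group).
    High probe accuracy suggests the factor is linearly decodable."""
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, embeddings, labels, cv=n_folds)
    return scores.mean(), scores.std()

# Illustrative usage with random data standing in for real image embeddings.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 128))      # 500 images, 128-dimensional embedding
y = rng.integers(0, 2, size=500)     # binary factor (e.g. disease present/absent)
mean_acc, std_acc = probe_embedding(Z, y)
print(f"linear probe accuracy: {mean_acc:.2f} +/- {std_acc:.2f}")
```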

Mendrika Rakotomanga
Madagascar

Supervisor: Bernhard Schölkopf

Bayesian inference of gravitational waves with machine learning
This project is concerned with the evaluation and development of machine learning methods for gravitational wave inference. One focus is the analysis of current methods with regard to possible biases and their impact on large-scale analyses. In this context, the project also aims to develop methods to mitigate possible inaccuracies.
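
One standard, generic way to detect such biases is simulation-based calibration: if a posterior estimate is unbiased, the rank of the true parameter among posterior samples is uniformly distributed across simulated data sets. The sketch below runs this check on a toy Gaussian model with a known posterior; it is not the project's inference code, and all function names are illustrative.

```python
import numpy as np

def posterior_rank_check(prior_sampler, simulator, posterior_sampler,
                         n_trials=200, n_post=100, seed=0):
    """Simulation-based calibration: draw a parameter from the prior, simulate
    data, draw samples from the posterior approximation under test, and record
    the rank of the true parameter among them.  Uniform ranks indicate an
    unbiased posterior; systematic skew indicates bias."""
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(n_trials):
        theta_true = prior_sampler(rng)
        data = simulator(theta_true, rng)
        samples = posterior_sampler(data, n_post, rng)
        ranks.append(int(np.sum(samples < theta_true)))
    return np.array(ranks)

# Toy model: theta ~ N(0, 1), data = theta + N(0, 0.5^2); the exact posterior
# after one observation d is N(0.8 * d, 0.2), so the rank histogram is flat.
prior = lambda rng: rng.normal(0.0, 1.0)
simulate = lambda theta, rng: theta + rng.normal(0.0, 0.5)
posterior = lambda d, n, rng: rng.normal(0.8 * d, np.sqrt(0.2), size=n)
print(np.histogram(posterior_rank_check(prior, simulate, posterior), bins=10)[0])
```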

Fenosoa Randrianjatovo
Madagascar

Supervisor: Claire Vernade

On randomised strategies in (non-stationary) reinforcement learning
Reinforcement learning (RL) algorithms have so far been developed mainly for stationary environments and have difficulty adapting when the system dynamics or the reward function changes. Posterior sampling and Thompson sampling were identified early on as efficient strategies in RL, in part because of their built-in randomisation. While some algorithms such as UCRL have recently been adapted to non-stationary environments, no randomised strategy has been proposed yet. In this project, we aim to propose a randomised, and more practical, algorithm that builds on posterior sampling and is capable of achieving sublinear regret. We will start by studying a (possibly context-dependent) bandit problem and then extend our findings to more complex RL models.
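
The following is a minimal sketch of one such randomised strategy, not the algorithm proposed in this project: Thompson sampling on a two-armed Bernoulli bandit with exponentially discounted Beta posteriors, so that old evidence is gradually forgotten when the reward function drifts. The discount factor, switch point, and arm means are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drifting_bandit(t, arm, switch=500):
    """Two-armed Bernoulli bandit whose best arm switches halfway through the run."""
    means = (0.8, 0.2) if t < switch else (0.2, 0.8)
    return int(rng.random() < means[arm])

def discounted_thompson_sampling(reward_fn, n_arms=2, horizon=1000, gamma=0.95):
    """Thompson sampling with exponentially discounted Beta posteriors: a simple
    randomised strategy that keeps exploring when the reward function drifts."""
    alpha = np.ones(n_arms)
    beta = np.ones(n_arms)
    rewards = []
    for t in range(horizon):
        theta = rng.beta(alpha, beta)   # one plausible mean per arm from the posterior
        arm = int(np.argmax(theta))     # act greedily on the sampled means
        r = reward_fn(t, arm)
        # Shrink old counts towards the uniform prior so outdated evidence fades.
        alpha = gamma * alpha + (1 - gamma)
        beta = gamma * beta + (1 - gamma)
        alpha[arm] += r
        beta[arm] += 1 - r
        rewards.append(r)
    return np.array(rewards)

print("total reward:", int(discounted_thompson_sampling(drifting_bandit).sum()))
```

Discounting the posterior counts towards the prior keeps the effective sample size bounded, which is what lets the sampler re-explore after the environment changes.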

Fellows 2022

Bolaji Bamiro


We are mourning the loss of Bolaji, who tragically died in a car accident on September 19, 2023. Our thoughts are with Bolaji's family and friends. We fondly remember our time with her.

Faisal Mohamed

Tatenda Emma Matika

Tshenolo Thato Daumas

Wafaa Mohammed