Project PANAMA, generously funded by the European Research Council (Starting Grant 757275), is a primary research focus of our research group.
A learning machine is a computer program that fits and refines a class of hypotheses to data. What actually happens on the computer during this process is the solution of various numerical problems: finding the lowest value on a high-dimensional surface (optimization) to identify "best fits"; computing the volume under such a surface (integration) to assess uncertainty and confidence; or simulating dynamical descriptions of the world to predict how the environment of the learning agent may look in the near future, allowing it to respond to these expected changes. Applied mathematics has developed algorithmic tools for these tasks over decades. But contemporary AI and machine learning have traits that break with the assumptions underlying these classic methods, making them unreliable and inefficient, or requiring tedious and resource-intensive tuning. In particular, the prominent role of (often large-scale) data in AI makes many computations extremely imprecise and unreliable. This is a principal reason why machine learning currently requires exorbitant computational, energy, and labour resources.
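To make the data-induced imprecision concrete, here is a minimal toy sketch (all names, numbers, and the model are our own illustrative choices, not part of the project): mini-batch subsampling, ubiquitous in large-scale learning, turns an exact loss computation into a noisy estimate that fluctuates from batch to batch.

```python
import numpy as np

rng = np.random.default_rng(0)
# A large synthetic dataset drawn from N(2, 1)
data = rng.normal(loc=2.0, scale=1.0, size=100_000)

def batch_loss(theta, batch):
    # Squared-error loss of a one-parameter model on a batch of data
    return np.mean((batch - theta) ** 2)

# The "true" full-data loss at theta = 1.0 ...
full = batch_loss(1.0, data)
# ... versus what an algorithm actually sees: noisy mini-batch estimates
estimates = [batch_loss(1.0, rng.choice(data, size=64)) for _ in range(5)]
```

Each entry of `estimates` is a cheap but randomly perturbed stand-in for `full`; classic numerical methods assume the exact value, which is what makes them brittle in this setting.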
Project PANAMA aims to develop new numerical algorithms that specifically address these challenges. The central mathematical idea underlying the project is that the computations underlying AI are themselves smaller, more elementary kinds of inference and learning problems. They can thus be phrased in the statistical language of machine learning, and the imprecisions and stochasticities caused by data sampling can be described with the mechanisms of probability theory. Based on this mathematical foundation, known as probabilistic numerical computation, project PANAMA develops new functionality and new computational tools. Among other things, these methods are hoped to require fewer parameters in need of tuning (because the algorithm can infer its free parameters by itself), and to produce meaningful uncertainty estimates alongside their principal output, which can be used to assess questions of reliability and safety.
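As a toy illustration of this idea (a textbook probabilistic-numerics example, not a PANAMA deliverable; all names and parameter choices are our own), the sketch below treats numerical integration itself as inference: a Gaussian-process prior with an RBF kernel is placed on the integrand, and the resulting Bayesian-quadrature rule returns both an estimate of the integral against a standard normal measure and a model-based variance quantifying the remaining numerical uncertainty.

```python
import numpy as np

def bayesian_quadrature(f, nodes, ell=1.0, jitter=1e-8):
    """Estimate Z = ∫ f(x) N(x; 0, 1) dx under a GP prior f ~ GP(0, k),
    with RBF kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2))."""
    X = np.asarray(nodes, dtype=float)
    fX = np.array([f(x) for x in X])
    # Kernel Gram matrix on the evaluation nodes (jitter for stability)
    D = X[:, None] - X[None, :]
    K = np.exp(-D**2 / (2 * ell**2)) + jitter * np.eye(len(X))
    # Closed-form kernel integrals against the standard normal measure:
    # z_i = ∫ k(x, x_i) N(x; 0, 1) dx
    z = np.sqrt(ell**2 / (ell**2 + 1)) * np.exp(-X**2 / (2 * (ell**2 + 1)))
    # zz = ∫∫ k(x, x') N(x; 0, 1) N(x'; 0, 1) dx dx'
    zz = np.sqrt(ell**2 / (ell**2 + 2))
    w = np.linalg.solve(K, z)              # quadrature weights
    mean = w @ fX                          # posterior mean: the estimate of Z
    var = max(zz - z @ w, 0.0)             # posterior variance: numerical uncertainty
    return mean, var

# True value of ∫ exp(-x^2) N(x; 0, 1) dx is 1/sqrt(3) ≈ 0.5774
mean, var = bayesian_quadrature(lambda x: np.exp(-x**2), np.linspace(-3, 3, 15))
```

The variance shrinks as nodes are added, so the rule "knows" how uncertain its own answer is; this is the kind of self-assessment the project aims to build into the computational layer of AI.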
The envisaged end result of the project is a number of new software tools that address key tasks of machine learning and artificial intelligence. By automating the algorithmic processes underlying AI and adding uncertainty as a key component of the computation itself, these tools will make AI easier and safer to use for a larger group of users.
The scientific results of this ongoing project appear throughout our publications. Many of the software packages developed by the group are made possible by the financial support of this project.