attempto online
02.02.2024
Comprehensive resource management and data economy for artificial intelligence
AI researcher Philipp Hennig receives ERC Consolidator Grant
Over the next five years, Tübingen-based AI researcher Philipp Hennig will develop methods to manage computing time and data volumes more efficiently in machine learning. This project has earned him a Consolidator Grant from the European Research Council (ERC). The project, "Advanced Numerical Uncertainty for Bayesian Inference in Science" (ANUBIS), will receive around two million euros in funding over this period. Philipp Hennig is Professor of Machine Learning Methods in the Department of Informatics at the University of Tübingen and a member of the board of the Tübingen AI Center and of the Cluster of Excellence "Machine Learning: New Perspectives for Science".
ANUBIS objectives
The ANUBIS project aims to manage the computing resources of scientific AI applications comprehensively and consistently. In climate models and in geological and neuroscientific simulations, many inference problems arise in which available data are used to indirectly deduce, or "infer", the quantity that is actually being sought. "To solve such inference problems, we need large amounts of data and large computing resources," Hennig explains. "A single partial data set from climate research, for example, can be as large as 100 terabytes (i.e. 100,000 gigabytes)."
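In its simplest form, such an inference problem can be pictured as recovering a hidden parameter from data that observe it only indirectly. The following Python sketch illustrates that general idea only, not the project's methods; the forward model, noise level and grid are hypothetical choices made purely for the example.

```python
# Minimal sketch of an inference problem (illustration only, not an ANUBIS
# example): observations are generated indirectly through a forward model,
# and we "infer" the underlying parameter from them on a coarse grid.

import random

def forward(theta: float, x: float) -> float:
    """Hypothetical forward model mapping the hidden parameter to an observable."""
    return theta * x ** 2

random.seed(1)
true_theta = 0.7
xs = [0.1 * i for i in range(1, 21)]
observations = [forward(true_theta, x) + random.gauss(0.0, 0.05) for x in xs]

# Posterior over theta on a grid, assuming Gaussian observation noise and a
# flat prior (both assumptions are made here purely for illustration).
grid = [i / 100 for i in range(0, 201)]
log_post = [
    sum(-(y - forward(theta, x)) ** 2 / (2 * 0.05 ** 2) for x, y in zip(xs, observations))
    for theta in grid
]

best = grid[log_post.index(max(log_post))]
print(f"true theta = {true_theta}, inferred theta = {best:.2f}")
```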
Data processing challenges
In the project, the act of computing itself is regarded as a source of information. This information relates to questions that cannot be answered definitively and perfectly (such as "What will the weather be like in 10 years' time?"). Such questions usually consist of an infinite number of sub-questions that the calculation does not fully answer. While the computer works through these sub-questions, it should therefore simultaneously keep a record of which parts of the question have already been answered, and how well. This is not trivial, because the record-keeping is itself a calculation. For the approach to remain feasible, the record-keeping must therefore be cheaper than the calculation it describes.
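As a rough illustration of this idea (not of the ANUBIS methodology itself), the sketch below runs an expensive computation sample by sample while keeping a far cheaper record of the current answer and of how well the question has been answered so far. The stand-in model and the use of a simple running mean and variance are assumptions made only for the example.

```python
# Illustrative sketch: a Monte Carlo estimator that keeps a cheap running
# record of its own uncertainty. The "calculation" is the expensive model
# evaluation; the "record-keeping" is a running mean and variance
# (Welford's algorithm), costing only a few arithmetic operations per step.

import math
import random

def expensive_model(x: float) -> float:
    """Stand-in for a costly simulation run (hypothetical placeholder)."""
    return math.sin(3.0 * x) ** 2 + 0.1 * x

def estimate_with_uncertainty(n_samples: int, seed: int = 0):
    rng = random.Random(seed)
    mean, m2 = 0.0, 0.0  # cheap record: running mean and sum of squared deviations
    for k in range(1, n_samples + 1):
        y = expensive_model(rng.uniform(0.0, 1.0))  # the actual (expensive) computation
        delta = y - mean
        mean += delta / k
        m2 += delta * (y - mean)
    std_error = math.sqrt(m2 / (n_samples - 1) / n_samples)
    return mean, std_error  # the estimate plus a measure of how well it is known

if __name__ == "__main__":
    est, err = estimate_with_uncertainty(10_000)
    print(f"estimate = {est:.4f} +/- {err:.4f}")
```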
Innovative approaches
To achieve this, modern machine learning methods will need to be extended so that they harmonize with the new concept. The advantages of the extended methodology are not only more economical algorithms but also new functionality. "It will then be easier for researchers from the geosciences, climate sciences and neurosciences, for example, to flexibly incorporate very different types of data, such as specific measurements, simulation data and expert knowledge, into their code," says Hennig. "The uncertainty of the information from the different data sources must also be quantified. Neither was possible with previous methods."
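The sketch below gives a rough impression of what combining differently uncertain sources can look like in the simplest Gaussian case. It illustrates the general principle only, not the project's actual methods, and the source names and numbers are invented for the example.

```python
# Illustrative sketch only (not the project's code): fusing a measurement,
# a simulation output, and an expert prior, each modelled as a Gaussian
# estimate of the same quantity, by inverse-variance (precision) weighting.

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    value: float  # reported estimate of the quantity of interest
    std: float    # quantified uncertainty (standard deviation)

def fuse(sources):
    """Precision-weighted combination of independent Gaussian estimates."""
    precisions = [1.0 / s.std ** 2 for s in sources]
    total_precision = sum(precisions)
    fused_value = sum(p * s.value for p, s in zip(precisions, sources)) / total_precision
    fused_std = (1.0 / total_precision) ** 0.5
    return fused_value, fused_std

# Hypothetical numbers purely for illustration.
sources = [
    Source("measurement", value=2.1, std=0.3),
    Source("simulation", value=1.8, std=0.5),
    Source("expert knowledge", value=2.5, std=1.0),
]

value, std = fuse(sources)
print(f"fused estimate: {value:.2f} +/- {std:.2f}")
```

In this toy setting, sources that report a smaller uncertainty pull the fused estimate more strongly towards their own value, which is one simple way the quantified uncertainties of different data sources can enter a combined result.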
Claudia Brusdeylins, Tübingen AI Center / University of Tübingen