Learn to Approximate. Approximate to Learn.

ProbNum is a Python toolkit for solving numerical problems in linear algebra, optimization, quadrature and differential equations. ProbNum solvers not only estimate the solution of the numerical problem, but also its uncertainty (numerical error), which arises from finite computational resources, discretization and stochastic input. This numerical uncertainty can be used in downstream decisions.
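To illustrate the idea of returning a solution together with its numerical uncertainty, here is a minimal, self-contained sketch using Monte Carlo quadrature (not ProbNum's actual API; the function name `mc_integrate` is purely illustrative). The standard error plays the role of the uncertainty that remains after spending a finite budget of function evaluations.

```python
import numpy as np

def mc_integrate(f, n_samples=10_000, rng=None):
    """Monte Carlo estimate of the integral of f over [0, 1],
    returned together with an uncertainty estimate."""
    rng = np.random.default_rng(rng)
    samples = f(rng.uniform(size=n_samples))
    estimate = samples.mean()
    # The standard error quantifies the numerical error that remains
    # after a finite number of function evaluations -- the kind of
    # uncertainty a probabilistic numerical method tracks explicitly.
    std_error = samples.std(ddof=1) / np.sqrt(n_samples)
    return estimate, std_error

# Integral of x^2 over [0, 1] is 1/3; the estimate comes with an error bar.
estimate, err = mc_integrate(np.square, n_samples=100_000, rng=0)
print(estimate, err)
```

A downstream decision could, for example, trigger more function evaluations whenever `err` exceeds a tolerance.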

We are looking for motivated students with programming experience in Python and an avid interest in scientific software development, machine learning or numerical methods. Your tasks may include implementing new algorithms, refactoring existing code, writing tests or benchmarking methods. If you are interested, please contact Jonathan Wenger.

Working hours: 32-40 h/month

Timeframe: at least 3 months, ideally 1 semester

Benchmarking Linear Solvers for Machine Learning

In order to compare linear solvers on the specific linear systems that arise in machine learning, a benchmark suite of linear problems needs to be established. Among others, this suite should include noisy systems (stochastic quadratic programs, empirical risk minimization), large-scale systems (N > 10,000), structured systems (sparsity, block diagonality) and systems with generative prior information (kernel Gram matrices). The student's task will be to establish such a benchmark based on current research topics in ML and to evaluate existing linear solvers on it.
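A minimal sketch of what one entry of such a benchmark might look like: generate a kernel Gram matrix system (one of the problem classes named above), then time and check two solvers on it. The helper names `gram_matrix` and `benchmark` are illustrative, not part of any existing suite; the jitter term stands in for observation noise and keeps the system well-conditioned.

```python
import time
import numpy as np
from scipy.linalg import solve
from scipy.sparse.linalg import cg

def gram_matrix(x, lengthscale=1.0):
    """RBF kernel Gram matrix for 1-D inputs x."""
    sqdists = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * sqdists / lengthscale**2)

def benchmark(solver, A, b):
    """Time a solver on Ax = b and record its residual norm."""
    start = time.perf_counter()
    x = solver(A, b)
    elapsed = time.perf_counter() - start
    return elapsed, np.linalg.norm(A @ x - b)

rng = np.random.default_rng(0)
x_data = rng.uniform(size=500)
# Jitter on the diagonal models noise and controls the conditioning.
A = gram_matrix(x_data) + 1e-2 * np.eye(500)
b = rng.standard_normal(500)

t_direct, r_direct = benchmark(solve, A, b)
t_cg, r_cg = benchmark(lambda A, b: cg(A, b, atol=1e-8)[0], A, b)
print(f"direct: {t_direct:.4f}s (residual {r_direct:.2e})")
print(f"cg:     {t_cg:.4f}s (residual {r_cg:.2e})")
```

A full benchmark would sweep problem sizes, conditioning and noise levels, and record accuracy alongside wall-clock time and memory.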

Automatic Differentiation Backend

Many methods in ProbNum, such as numerical integration and the solution of differential equations, require derivatives of a given objective function. Computing the derivative(s) of complicated functions by hand quickly becomes tedious. Many machine learning frameworks (such as PyTorch, TensorFlow or JAX) therefore use automatic differentiation. In order to interface with these frameworks and to be able to differentiate through ProbNum's numerical methods, a backend for an arbitrary automatic differentiation framework should be prototyped and eventually implemented.
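To make "differentiating through a numerical method" concrete, here is a pure-Python forward-mode sketch using dual numbers, rather than any of the frameworks above: a composite trapezoidal rule is run unchanged on dual numbers, and the exact derivative of the quadrature result with respect to a parameter falls out. The `Dual` class and `sin` wrapper are illustrative, not ProbNum or framework APIs.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Dual number a + b*eps with eps^2 = 0, the forward-mode AD primitive."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule carried along with the value.
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def sin(x):
    """sin that propagates derivatives through Dual numbers."""
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)
    return math.sin(x)

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule; works on floats and Dual numbers alike."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total = total + f(a + i * h)
    return h * total

# d/dtheta of the integral of sin(theta * x) over [0, 1], computed by
# running the quadrature routine itself on dual numbers.
theta = Dual(2.0, 1.0)  # seed derivative 1 with respect to theta
result = trapezoid(lambda x: sin(theta * x), 0.0, 1.0)
print(result.val, result.der)
```

An AD backend for ProbNum would supply exactly this capability, but via an existing framework's tracing machinery instead of hand-written dual numbers.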

Efficient Kernel Operations

Many machine learning models, such as Gaussian processes, internally invert a kernel matrix. This matrix is defined by evaluating the kernel pairwise on all N data points, resulting in an NxN matrix. Inverting an NxN matrix naively has cubic cost O(N^3), which quickly becomes prohibitive. There are fast numerical methods that rely only on matrix-vector multiplications with the kernel matrix to compute its inverse. As it turns out, a matrix-vector product with a kernel matrix can be computed very efficiently using frameworks such as KeOps. In this project, the possible speed-up from integrating KeOps into ProbNum will be evaluated.