Supervisor: Tobias Weber
Deep Learning holds great promise for scientific applications, particularly for solving time-dependent partial differential equations and propagating their solutions forward in time. However, it has yet to achieve broad real-world applicability, primarily because of shortcomings in accuracy and out-of-distribution robustness compared to traditional numerical methods. While neural networks can offer a significant speed advantage, they often fall short in precision, limiting their utility. A promising application, however, arises in scenarios where computational speed is prioritized over absolute accuracy.
In this Bachelor thesis, you will investigate a [recent approach](https://arxiv.org/abs/2405.01355) that leverages a neural network to provide coarse predictions, which in turn guide the fine-scale computations of a numerical method. A first step will be to design a simple toy experiment that abstracts away from the specific application domain of the referenced paper. This experiment will serve as a platform to explore and evaluate various training setups for the proposed task.
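To make the coarse-prediction / fine-correction idea concrete, the following is a minimal sketch in JAX, assuming a 1D heat equation as the toy problem. All names and the problem choice are illustrative, not taken from the referenced paper: a small linear layer (untrained here) stands in for the coarse neural surrogate, and an explicit finite-difference step plays the role of the fine numerical method that corrects the surrogate's guess.

```python
import jax
import jax.numpy as jnp

# Toy setup (assumption, not from the paper): 1D heat equation u_t = u_xx
# on a periodic grid, solved with explicit finite differences.
N = 128                  # fine grid points on the periodic domain [0, 1)
dx = 1.0 / N
dt = 0.4 * dx**2         # stable explicit step (requires dt <= 0.5 * dx^2)

def fine_step(u):
    """One explicit finite-difference step of the heat equation."""
    lap = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * lap

def coarse_predict(params, u):
    """Placeholder surrogate: a single linear layer mapping the current
    state to a cheap guess of a later state (untrained in this sketch)."""
    w, b = params
    return u @ w + b

def hybrid_rollout(params, u, n_corrector=5):
    """Coarse prediction followed by a few fine corrector steps."""
    u = coarse_predict(params, u)
    for _ in range(n_corrector):
        u = fine_step(u)
    return u

# Initialize the surrogate near the identity so the sketch runs stably.
key = jax.random.PRNGKey(0)
params = (jnp.eye(N) + 1e-3 * jax.random.normal(key, (N, N)), jnp.zeros(N))

x = jnp.linspace(0.0, 1.0, N, endpoint=False)
u0 = jnp.sin(2.0 * jnp.pi * x)
u1 = jax.jit(hybrid_rollout, static_argnums=2)(params, u0)
print(float(jnp.max(jnp.abs(u1))))  # diffusion damps the profile
```

In a training setup one would fit the surrogate (e.g. by minimizing the mismatch against a full fine-solver rollout via `jax.grad`), which is exactly the kind of design choice the toy experiment is meant to explore.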
Prerequisites:
- Basic understanding of partial differential equations and scientific computing
- Proficiency in JAX