Supervisors: Marius Zeinhofer (ETH Zürich), Felix Dangel (Vector Institute), Lukas Tatzel
The loss landscape of physics-informed neural networks (PINNs) is notoriously hard to navigate for first-order optimizers such as SGD and Adam. Second-order methods can significantly outperform them on small- and medium-sized problems; in particular, the Hessian-free optimizer is a strong baseline that requires almost no tuning. However, as the problem size grows, the Hessian-free optimizer completes only a few hundred steps within a given compute budget, diminishing its benefits. The goal of this project is to accelerate the Hessian-free optimizer along three axes: (i) numerical tricks to speed up matrix-vector products and enable efficient preconditioning, (ii) revisiting the recommendations of the seminal work (Martens, 2010), and (iii) correcting for a recently discovered bias in mini-batch quadratics.
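To make the computational bottleneck concrete: each Hessian-free step solves a quadratic subproblem with conjugate gradients (CG), touching the curvature matrix only through matrix-vector products. The sketch below, a simplified illustration rather than part of the project code, uses an explicit toy Hessian in place of the Hessian-vector products one would obtain via automatic differentiation in PyTorch.

```python
import numpy as np

def cg_solve(hvp, g, max_iter=50, tol=1e-10):
    """Conjugate gradients for H p = -g, where H is accessed only
    through the matrix-vector product `hvp` -- the inner loop that
    dominates the cost of a Hessian-free optimization step."""
    p = np.zeros_like(g)
    r = -g - hvp(p)          # residual of H p = -g at p = 0
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hd = hvp(d)
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < tol:     # residual small enough: stop early
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

# Toy quadratic model: gradient g and a fixed SPD "Hessian" H.
# In a real PINN setting, hvp would be an autodiff Hessian- or
# Gauss-Newton-vector product, never a materialized matrix.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])
step = cg_solve(lambda v: H @ v, g)
```

Because every CG iteration costs one matrix-vector product, the speed-ups and preconditioning targeted by aspect (i) directly reduce the per-step cost of this inner loop.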
Prerequisites:
- Experience with PyTorch and numerical optimization
- Interest in PDEs and automatic differentiation