Galerkin Meets Laplace: Fast Uncertainty Estimation in Neural PDEs
Abstract
The solution of partial differential equations (PDEs) by deep neural networks trained to satisfy the differential operator has become increasingly popular. While these approaches can lead to very accurate approximations, they tend to be overconfident and fail to capture the uncertainty around the approximation. In this work, we propose a Bayesian treatment of the deep Galerkin method (Sirignano & Spiliopoulos, 2018), a popular neural approach for solving parametric PDEs. In particular, we reinterpret the deep Galerkin method as the maximum a posteriori estimator corresponding to a likelihood term over a fictitious dataset, thus leading to a natural definition of a posterior. We then propose to model this posterior via the Laplace approximation, a fast approximation that allows us to capture meaningful uncertainty in out-of-domain interpolation of the PDE solution and in low-data regimes with little overhead, as shown in our preliminary experiments.
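To make the construction concrete, below is a minimal, hypothetical sketch (not the authors' code) of the two ingredients the abstract describes: a deep-Galerkin-style network trained on PDE residuals at collocation points, followed by a Laplace approximation around the resulting MAP estimate. As simplifying assumptions, it uses a toy 1D Poisson problem u''(x) = f(x) with zero boundary conditions rather than a parametric PDE, treats each collocation residual as one point of a fictitious Gaussian regression dataset with target zero, and builds only a diagonal Gauss-Newton Laplace approximation; the noise variance `sigma2`, prior precision `prior_prec`, and helper names `residual`/`predict` are illustrative choices, not from the paper.

```python
# Hypothetical sketch: deep-Galerkin training + diagonal Laplace approximation
# for the 1D Poisson problem u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
import torch

torch.manual_seed(0)

def f(x):
    # Source term chosen so the exact solution is u(x) = sin(pi x).
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def residual(x):
    # PDE residual N[u_theta](x) - f(x), computed with nested autograd.
    x = x.clone().detach().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - f(x)

# --- 1) MAP training: the usual deep Galerkin objective --------------------
x_col = torch.rand(64, 1)                 # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])       # boundary points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss = residual(x_col).pow(2).mean() + net(x_bc).pow(2).mean()
    loss.backward()
    opt.step()

# --- 2) Diagonal Laplace approximation at the MAP ---------------------------
# Each collocation residual is read as one point of a fictitious Gaussian
# regression dataset with target 0; the diagonal Gauss-Newton term is the
# sum of squared per-point residual gradients, scaled by the noise variance.
params = list(net.parameters())
sigma2, prior_prec = 1e-2, 1.0            # assumed noise variance / prior precision
h_diag = [torch.zeros_like(p) for p in params]
for xi in x_col:
    gi = torch.autograd.grad(residual(xi.view(1, 1)).sum(), params)
    for h, g in zip(h_diag, gi):
        h += g.pow(2) / sigma2
post_var = [1.0 / (h + prior_prec) for h in h_diag]   # diagonal posterior cov

# --- 3) Linearized predictive uncertainty over the solution -----------------
def predict(x):
    # var[u(x)] = J(x) Sigma J(x)^T with diagonal Sigma and J = du/dtheta.
    u = net(x).sum()
    grads = torch.autograd.grad(u, params)
    var = sum((g.pow(2) * v).sum() for g, v in zip(grads, post_var))
    return u.item(), var.sqrt().item()

mean, std = predict(torch.tensor([[0.5]]))
print(f"u(0.5) ~ {mean:.3f} +/- {std:.3f}")  # exact value is sin(pi/2) = 1
```

The diagonal structure is the cheapest Laplace variant; a full or Kronecker-factored covariance would follow the same recipe with a different Hessian estimate, at higher cost.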
Cite
Text

Beltran et al. "Galerkin Meets Laplace: Fast Uncertainty Estimation in Neural PDEs." ICLR 2024 Workshops: AI4DiffEqtnsInSci, 2024.

Markdown

[Beltran et al. "Galerkin Meets Laplace: Fast Uncertainty Estimation in Neural PDEs." ICLR 2024 Workshops: AI4DiffEqtnsInSci, 2024.](https://mlanthology.org/iclrw/2024/beltran2024iclrw-galerkin/)

BibTeX
@inproceedings{beltran2024iclrw-galerkin,
title = {{Galerkin Meets Laplace: Fast Uncertainty Estimation in Neural PDEs}},
author = {Beltran, Christian Jimenez and Vergari, Antonio and Teckentrup, Aretha L. and Zygalakis, Konstantinos C.},
booktitle = {ICLR 2024 Workshops: AI4DiffEqtnsInSci},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/beltran2024iclrw-galerkin/}
}