Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs
Abstract
Neural operators are deep architectures that learn the (typically nonlinear) solution operator of partial differential equations (PDEs). The current state of the art for these models does not provide explicit uncertainty quantification. This is arguably even more of a problem for this kind of task than elsewhere in machine learning, because the dynamical systems typically described by PDEs often exhibit subtle, multiscale structure that makes errors hard for humans to spot. In this work, we first provide a mathematically detailed Bayesian formulation of the "shallow" (linear) version of neural operators in the formalism of Gaussian processes. We then extend this analytic treatment to general deep neural operators (specifically, graph neural operators) using approximate methods from Bayesian deep learning, enabling them to incorporate uncertainty quantification. As a result, our approach can identify cases where the neural operator fails to predict well and provide structured uncertainty estimates for them.
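The "shallow" (linear) setting mentioned above, where a Bayesian treatment yields a Gaussian process posterior in closed form, can be illustrated with a toy example. The sketch below is a minimal stand-in, not the paper's architecture or PDE benchmarks: it fits a Bayesian linear model over fixed random Fourier features (a hypothetical feature map) and shows the characteristic behavior of such uncertainty estimates, namely predictive variance growing away from the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression stand-in for operator learning (hypothetical data):
# noisy samples of u(x) = sin(x) on the interval [-3, 3].
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(40)

# Fixed random Fourier features; only the last (linear) layer is Bayesian.
# This mirrors the linear case where the posterior is a Gaussian process.
D = 50
W = rng.standard_normal((1, D))
b = rng.uniform(0.0, 2.0 * np.pi, D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

sigma2, alpha = 0.05**2, 1.0              # noise variance, prior precision
Phi = phi(X)
# Gaussian posterior over the weights (Bayesian linear regression):
A = Phi.T @ Phi / sigma2 + alpha * np.eye(D)      # posterior precision
mean_w = np.linalg.solve(A, Phi.T @ y / sigma2)   # posterior mean

def predict(x):
    """Posterior predictive mean and variance at inputs x (shape (n, 1))."""
    p = phi(x)
    mean = p @ mean_w
    var = sigma2 + np.einsum("nd,dn->n", p, np.linalg.solve(A, p.T))
    return mean, var

# Structured uncertainty: variance is small inside the training interval
# and grows far outside it, flagging inputs where predictions are unreliable.
_, var_in = predict(np.array([[0.0]]))
_, var_out = predict(np.array([[8.0]]))
```

For deep neural operators no such closed form exists, which is where the approximate Bayesian deep learning methods discussed in the paper come in; this sketch only conveys the kind of input-dependent variance one hopes to obtain.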
Cite
Text
Magnani et al. "Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs." Transactions on Machine Learning Research, 2025.

Markdown

[Magnani et al. "Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/magnani2025tmlr-approximate/)

BibTeX
@article{magnani2025tmlr-approximate,
title = {{Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs}},
author = {Magnani, Emilia and Krämer, Nicholas and Eschenhagen, Runa and Rosasco, Lorenzo and Hennig, Philipp},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/magnani2025tmlr-approximate/}
}