Generic Bounds on the Approximation Error for Physics-Informed (and) Operator Learning
Abstract
We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and operator learning architectures such as DeepONets and FNOs, as well as for physics-informed operator learning. These bounds guarantee that PINNs and (physics-informed) DeepONets or FNOs will efficiently approximate the underlying solution or solution operator of generic partial differential equations (PDEs). Our framework utilizes existing neural network approximation results to obtain bounds on more involved learning architectures for PDEs. We illustrate the general framework by deriving the first rigorous bounds on the approximation error of physics-informed operator learning and by showing that PINNs (and physics-informed DeepONets and FNOs) mitigate the curse of dimensionality in approximating nonlinear parabolic PDEs.
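To make the object of the paper's bounds concrete, here is a minimal sketch (not from the paper; the network architecture, the `net` and `pde_residual` names, and the choice of the 1D heat equation u_t = u_xx are illustrative assumptions) of the PDE residual that a PINN minimizes. The approximation-error bounds of the kind derived in the paper control how small a network of a given size can make this residual, together with boundary and initial terms.

```python
import torch

# Illustrative PINN for the 1D heat equation u_t = u_xx (hypothetical
# example, not the paper's code). The network maps (t, x) -> u(t, x).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(t, x):
    """PDE residual r = u_t - u_xx evaluated at collocation points (t, x)."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = net(torch.stack([t, x], dim=-1)).squeeze(-1)
    # First- and second-order derivatives via automatic differentiation.
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx

# Monte Carlo estimate of the interior residual loss on random collocation
# points; training would minimize this plus boundary/initial-condition terms.
t = torch.rand(256)
x = torch.rand(256)
loss = pde_residual(t, x).pow(2).mean()
print(loss.item())
```

Physics-informed DeepONets and FNOs replace the pointwise network with an operator-learning architecture but are trained on the same kind of residual, which is why a single generic framework can cover all these cases.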
Cite
Text
De Ryck and Mishra. "Generic Bounds on the Approximation Error for Physics-Informed (and) Operator Learning." Neural Information Processing Systems, 2022.
Markdown
[De Ryck and Mishra. "Generic Bounds on the Approximation Error for Physics-Informed (and) Operator Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ryck2022neurips-generic/)
BibTeX
@inproceedings{ryck2022neurips-generic,
  title = {{Generic Bounds on the Approximation Error for Physics-Informed (and) Operator Learning}},
  author = {De Ryck, Tim and Mishra, Siddhartha},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/ryck2022neurips-generic/}
}