On Universal Approximation and Error Bounds for Fourier Neural Operators
Abstract
Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional function spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to any desired accuracy. Moreover, we suggest a mechanism by which FNOs can efficiently approximate operators associated with PDEs. Explicit error bounds are derived showing that the size of the FNO approximating the operators associated with a Darcy-type elliptic PDE and with the incompressible Navier-Stokes equations of fluid dynamics grows only sub-linearly, up to logarithmic factors, in the reciprocal of the error. Thus, FNOs are shown to efficiently approximate operators arising in a large class of PDEs.
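To make the architecture being analyzed concrete, below is a minimal sketch of the building block the paper studies: a single spectral convolution (Fourier layer), which applies an FFT, multiplies a learned complex weight on a truncated set of low Fourier modes, and inverts the transform. This is an illustrative PyTorch sketch, not the authors' implementation; the class name SpectralConv1d, the mode-truncation parameter modes, and the initialization scale are assumptions.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Illustrative 1D Fourier layer: FFT -> keep `modes` low frequencies ->
    learned complex multiplication -> inverse FFT. Names and init are
    hypothetical, chosen only to mirror the layer described in the paper."""

    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of retained Fourier modes (assumes modes <= n//2 + 1)
        scale = 1.0 / (in_channels * out_channels)
        # Learned complex multipliers, one per (input channel, output channel, mode).
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, n_grid) real-valued samples on a uniform grid
        x_ft = torch.fft.rfft(x)  # (batch, in_channels, n_grid//2 + 1)
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # Multiply only the retained low-frequency modes; higher modes stay zero.
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])
```

A full FNO stacks such layers, each combined with a pointwise linear term and a nonlinearity; the error bounds in the paper quantify how many layers, channels, and modes suffice for a given accuracy.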
Cite

Text
Kovachki et al. "On Universal Approximation and Error Bounds for Fourier Neural Operators." Journal of Machine Learning Research, 2021.

Markdown
[Kovachki et al. "On Universal Approximation and Error Bounds for Fourier Neural Operators." Journal of Machine Learning Research, 2021.](https://mlanthology.org/jmlr/2021/kovachki2021jmlr-universal/)

BibTeX
@article{kovachki2021jmlr-universal,
title = {{On Universal Approximation and Error Bounds for Fourier Neural Operators}},
author = {Kovachki, Nikola and Lanthaler, Samuel and Mishra, Siddhartha},
journal = {Journal of Machine Learning Research},
year = {2021},
pages = {1--76},
volume = {22},
url = {https://mlanthology.org/jmlr/2021/kovachki2021jmlr-universal/}
}