Size Lowerbounds for Deep Operator Networks

Abstract

Deep Operator Networks (DeepONets) are an increasingly popular paradigm for regression in infinite dimensions, and hence for solving families of PDEs in one shot. In this work, we aim to establish a first-of-its-kind data-dependent lower bound on the size of DeepONets required for them to be able to reduce empirical error on noisy data. In particular, we show that for low training error to be obtained on $n$ data points, it is necessary that the common output dimension of the branch and the trunk nets scale as $\Omega\left(\sqrt[4]{n}\right)$. This inspires our experiments with DeepONets solving the advection-diffusion-reaction PDE, where we demonstrate the possibility that, at a fixed model size, to leverage an increase in this common output dimension and obtain a monotonic lowering of training error, the size of the training data might necessarily need to scale at least quadratically with it.

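The quantity the lower bound constrains is the common output dimension $q$ shared by the branch and the trunk nets, since a DeepONet's prediction is the inner product of their outputs. The following minimal NumPy sketch shows where this shared dimension enters the forward pass; the layer widths, activations, and names such as deeponet and q are illustrative assumptions on our part, not the paper's experimental setup.

import numpy as np

def mlp(params, x):
    # Evaluate a small fully connected net with tanh hidden activations.
    *hidden, last = params
    for W, b in hidden:
        x = np.tanh(x @ W + b)
    W, b = last
    return x @ W + b

def init_mlp(sizes, rng):
    # Random Gaussian initialisation for an MLP with the given layer sizes.
    return [(rng.normal(size=(m, k)) / np.sqrt(m), np.zeros(k))
            for m, k in zip(sizes[:-1], sizes[1:])]

def deeponet(branch_params, trunk_params, u_sensors, y):
    # G(u)(y) ~ <branch(u), trunk(y)>: both nets share the output dimension q.
    b = mlp(branch_params, u_sensors)   # shape (q,)
    t = mlp(trunk_params, y)            # shape (q,)
    return b @ t                        # scalar prediction at query point y

rng = np.random.default_rng(0)
m, d, q = 100, 1, 16     # sensor count, query dimension, common output dimension q
branch = init_mlp([m, 64, q], rng)
trunk  = init_mlp([d, 64, q], rng)

u = np.sin(np.linspace(0.0, 2.0 * np.pi, m))  # input function sampled at m sensors
y = np.array([0.3])                           # query location
print(deeponet(branch, trunk, u, y))

# Rough reading of the lower bound: with n = 10_000 training points,
# q should grow at least on the order of n**0.25 = 10.
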
Cite

Text

Mukherjee and Roy. "Size Lowerbounds for Deep Operator Networks." Transactions on Machine Learning Research, 2024.

Markdown

[Mukherjee and Roy. "Size Lowerbounds for Deep Operator Networks." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/mukherjee2024tmlr-size/)

BibTeX

@article{mukherjee2024tmlr-size,
  title     = {{Size Lowerbounds for Deep Operator Networks}},
  author    = {Mukherjee, Anirbit and Roy, Amartya},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/mukherjee2024tmlr-size/}
}