Limitations on Approximation by Deep and Shallow Neural Networks

Abstract

We prove Carl’s type inequalities for the error of approximation of compact sets K by deep and shallow neural networks. This, in turn, gives lower bounds on how well the functions in K can be approximated when the approximants are required to be outputs of such networks. Our results are obtained as a byproduct of the study of the recently introduced Lipschitz widths.
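
For orientation, it may help to recall the classical Carl's inequality that results of this type extend. The statement below is one standard formulation for a compact set K in a Banach space X, written in terms of the Kolmogorov widths d_k(K)_X and the entropy numbers ε_k(K)_X; it is offered only as background and is not the inequality proved in the paper, whose bounds concern approximation by outputs of neural networks.

% One standard form of the classical Carl's inequality (background only):
% for every \alpha > 0 there is a constant C_\alpha such that, for all n \ge 1,
\[
  \max_{1 \le k \le n} k^{\alpha}\, \varepsilon_k(K)_X
  \;\le\; C_\alpha \, \max_{1 \le k \le n} k^{\alpha}\, d_{k-1}(K)_X .
\]
% In particular, a decay rate d_n(K)_X = O(n^{-\alpha}) for the widths forces the
% same decay rate for the entropy numbers; conversely, slow decay of the entropy
% numbers of K yields lower bounds on the widths, and hence on any method of
% approximation they control.

Read this way, the abstract's claim is that analogous inequalities hold when the widths are replaced by the error of approximating K by outputs of deep or shallow networks with a prescribed number of parameters, so that known entropy bounds for K translate into lower bounds on the achievable approximation error.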

Cite

Text

Petrova and Wojtaszczyk. "Limitations on Approximation by Deep and Shallow Neural Networks." Journal of Machine Learning Research, 24:1-38, 2023.

Markdown

[Petrova and Wojtaszczyk. "Limitations on Approximation by Deep and Shallow Neural Networks." Journal of Machine Learning Research, 24:1-38, 2023.](https://mlanthology.org/jmlr/2023/petrova2023jmlr-limitations/)

BibTeX

@article{petrova2023jmlr-limitations,
  title     = {{Limitations on Approximation by Deep and Shallow Neural Networks}},
  author    = {Petrova, Guergana and Wojtaszczyk, Przemyslaw},
  journal   = {Journal of Machine Learning Research},
  year      = {2023},
  pages     = {1--38},
  volume    = {24},
  url       = {https://mlanthology.org/jmlr/2023/petrova2023jmlr-limitations/}
}