Provable Memorization via Deep Neural Networks Using Sub-Linear Parameters
Abstract
It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs. By exploiting depth, we show that $O(N^{2/3})$ parameters suffice to memorize $N$ pairs, under a mild condition on the separation of input points. In particular, deeper networks (even with width 3) are shown to memorize more pairs than shallow networks, which also agrees with the recent line of work on the benefits of depth for function approximation. We also provide empirical results that support our theoretical findings.
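A minimal sketch, not the authors' construction, of the kind of experiment the abstract alludes to: training a deep, narrow (width-3) ReLU network to fit $N$ random input-label pairs and checking whether it memorizes them. All hyperparameters here (depth, learning rate, step count) are illustrative assumptions.

```python
# Illustrative only: does a deep, width-3 ReLU net memorize N random pairs?
# Depth, learning rate, and step count are assumptions for this demo,
# not values taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
N, d = 64, 2                            # N input points in R^d
X = torch.randn(N, d)                   # random (hence separated a.s.) inputs
y = torch.randint(0, 2, (N,)).float()   # arbitrary binary labels

# Deep, narrow network: many hidden layers, each of width 3.
depth = 20
layers = [nn.Linear(d, 3), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(3, 3), nn.ReLU()]
layers += [nn.Linear(3, 1)]
net = nn.Sequential(*layers)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(net(X).squeeze(-1), y)
    loss.backward()
    opt.step()

# Memorization here means 100% training accuracy on the N pairs.
with torch.no_grad():
    acc = ((net(X).squeeze(-1) > 0).float() == y).float().mean().item()
print(f"training accuracy: {acc:.2%}")
```

Note that the paper's result is an existence guarantee about parameter counts; gradient descent on such a narrow network is not guaranteed to find the memorizing weights, so this sketch only probes the phenomenon empirically.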
Cite
Text
Park et al. "Provable Memorization via Deep Neural Networks Using Sub-Linear Parameters." Conference on Learning Theory, 2021.
Markdown
[Park et al. "Provable Memorization via Deep Neural Networks Using Sub-Linear Parameters." Conference on Learning Theory, 2021.](https://mlanthology.org/colt/2021/park2021colt-provable/)
BibTeX
@inproceedings{park2021colt-provable,
title = {{Provable Memorization via Deep Neural Networks Using Sub-Linear Parameters}},
author = {Park, Sejun and Lee, Jaeho and Yun, Chulhee and Shin, Jinwoo},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {3627--3661},
volume = {134},
url = {https://mlanthology.org/colt/2021/park2021colt-provable/}
}