An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks
Abstract
It is well known that modern deep neural networks are powerful enough to memorize datasets even when the labels have been randomized. Recently, Vershynin (2020) settled a long-standing question by Baum (1988), proving that deep threshold networks can memorize $n$ points in $d$ dimensions using $\widetilde{\mathcal{O}}(e^{1/\delta^2}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(e^{1/\delta^2}(d+\sqrt{n})+n)$ weights, where $\delta$ is the minimum distance between the points. In this work, we improve the dependence on $\delta$ from exponential to almost linear, proving that $\widetilde{\mathcal{O}}(\frac{1}{\delta}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(\frac{d}{\delta}+n)$ weights are sufficient. Our construction uses Gaussian random weights only in the first layer, while all subsequent layers use binary or integer weights. We also prove new lower bounds by connecting memorization in neural networks to the purely geometric problem of separating $n$ points on a sphere using hyperplanes.
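To give a rough sense of the gap between the two neuron bounds, the following is a minimal sketch (not from the paper) that tabulates the $\delta$-dependent leading terms, $e^{1/\delta^2}$ for the earlier bound versus $1/\delta$ for the new one, at a few separation values; the logarithmic factors hidden by $\widetilde{\mathcal{O}}$ and the shared $\sqrt{n}$ term are ignored.

```python
import math

# Compare the delta-dependent leading terms of the two neuron bounds.
# Log factors hidden by the tilde-O notation and the common sqrt(n) term
# are omitted; this is only meant to illustrate the scale of the improvement.
for delta in [0.5, 0.25, 0.1]:
    old_term = math.exp(1.0 / delta**2)   # e^{1/delta^2}, Vershynin (2020)
    new_term = 1.0 / delta                # 1/delta, this work
    print(f"delta={delta:5.2f}  e^(1/delta^2)={old_term:12.4g}  1/delta={new_term:6.2f}")
```

Already at $\delta = 0.1$ the exponential term exceeds $10^{43}$, while the new bound's leading term is $10$, which is why the improvement is described as exponential.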
Cite
Rajput et al. "An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks." Neural Information Processing Systems, 2021. https://mlanthology.org/neurips/2021/rajput2021neurips-exponential/

BibTeX
@inproceedings{rajput2021neurips-exponential,
title = {{An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks}},
author = {Rajput, Shashank and Sreenivasan, Kartik and Papailiopoulos, Dimitris and Karbasi, Amin},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/rajput2021neurips-exponential/}
}