Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation
Abstract
The study of universal approximation properties (UAP) for neural networks (NN) has a long history. When the network width is unlimited, a single hidden layer suffices for UAP. In contrast, when the depth is unlimited, the width for UAP must be at least the critical width $w^*_{\min}=\max(d_x,d_y)$, where $d_x$ and $d_y$ are the dimensions of the input and output, respectively. Recently, Cai (2022) showed that a leaky-ReLU NN with this critical width can achieve UAP for $L^p$ functions on a compact domain $\mathcal{K}$, i.e., the UAP for $L^p(\mathcal{K},\mathbb{R}^{d_y})$. This paper examines the uniform UAP for the function class $C(\mathcal{K},\mathbb{R}^{d_y})$ and gives the exact minimum width of the leaky-ReLU NN as $w_{\min}=\max(d_x+1,d_y)+1_{d_y=d_x+1}$, which reflects the effect of the output dimension. To obtain this result, we propose a novel lift-flow-discretization approach, which shows that the uniform UAP is deeply connected to topological theory.
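The two width formulas stated in the abstract can be written out directly. The sketch below (function names are illustrative, not from the paper) computes the $L^p$ critical width $w^*_{\min}$ and the uniform minimum width $w_{\min}$, where $1_{d_y=d_x+1}$ is the indicator that adds 1 exactly when $d_y = d_x + 1$:

```python
def lp_min_width(d_x: int, d_y: int) -> int:
    """Critical width w*_min = max(d_x, d_y) for L^p UAP (Cai, 2022)."""
    return max(d_x, d_y)

def uniform_min_width(d_x: int, d_y: int) -> int:
    """Minimum width w_min = max(d_x + 1, d_y) + 1_{d_y = d_x + 1}
    for uniform UAP on C(K, R^{d_y}), as given in this paper."""
    return max(d_x + 1, d_y) + (1 if d_y == d_x + 1 else 0)

# Uniform approximation always needs a strictly wider network than L^p:
print(lp_min_width(2, 1))       # -> 2
print(uniform_min_width(2, 1))  # -> 3
print(uniform_min_width(2, 3))  # -> 4 (d_y = d_x + 1 triggers the extra +1)
```

Note the extra unit of width in the boundary case $d_y = d_x + 1$: with $d_x = 2$, $d_y = 3$, the formula gives $\max(3,3) + 1 = 4$.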
Cite
Li et al. "Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation." International Conference on Machine Learning, 2023.
BibTeX
@inproceedings{li2023icml-minimum,
title = {{Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation}},
author = {Li, Li'Ang and Duan, Yifei and Ji, Guanghua and Cai, Yongqiang},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {19460--19470},
volume = {202},
url = {https://mlanthology.org/icml/2023/li2023icml-minimum/}
}