Nonparametric Neural Networks
Abstract
Automatically determining the optimal size of a neural network for a given task without prior information currently requires an expensive global search and training many networks from scratch. In this paper, we address the problem of automatically finding a good network size during a single training cycle. We introduce *nonparametric neural networks*, a non-probabilistic framework for conducting optimization over all possible network sizes and prove its soundness when network growth is limited via an L_p penalty. We train networks under this framework by continuously adding new units while eliminating redundant units via an L_2 penalty. We employ a novel optimization algorithm, which we term *adaptive radial-angular gradient descent* or *AdaRad*, and obtain promising results.
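To make the grow-and-prune idea from the abstract concrete, below is a minimal illustrative sketch, not the authors' implementation: a toy single-hidden-layer network that periodically adds a random unit and drops units whose fan-in weights are driven near zero by an L_2 penalty. The plain-SGD update, the tanh activation, the pruning threshold, and the growth schedule are all assumptions made for brevity; the paper itself uses the AdaRad optimizer and an L_p fan-in/fan-out regularizer.

```python
# Hypothetical sketch of grow-and-prune training (not the paper's code):
# grow the hidden layer with new random units and prune units whose
# fan-in L2 norm an L2 penalty has shrunk toward zero.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-2, 2].
X = rng.uniform(-2, 2, size=(256, 1))
Y = np.sin(X)

d_in, d_out = 1, 1
W1 = rng.normal(0, 0.5, (d_in, 1))   # start with a single hidden unit
W2 = rng.normal(0, 0.5, (1, d_out))

lr, lam = 0.05, 1e-3      # step size and L2 penalty strength (assumed values)
prune_eps = 1e-2          # prune units whose fan-in norm falls below this
grow_every = 200          # add one new unit every 200 steps (assumed schedule)

for step in range(5000):
    # Forward pass with tanh hidden units.
    H = np.tanh(X @ W1)
    err = H @ W2 - Y

    # Gradients of mean squared error plus L2 penalty on all weights.
    gW2 = H.T @ err / len(X) + lam * W2
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X) + lam * W1
    W1 -= lr * gW1
    W2 -= lr * gW2

    # Eliminate redundant units: the penalty shrinks useless fan-in
    # weight vectors toward zero, so drop columns with tiny norm.
    keep = np.linalg.norm(W1, axis=0) > prune_eps
    if keep.sum() < W1.shape[1]:
        W1, W2 = W1[:, keep], W2[keep, :]

    # Grow the network: periodically append a small random new unit.
    if step % grow_every == 0:
        W1 = np.hstack([W1, rng.normal(0, 0.1, (d_in, 1))])
        W2 = np.vstack([W2, rng.normal(0, 0.1, (1, d_out))])

print("final hidden size:", W1.shape[1],
      "| mse:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```

The sketch only shows the bookkeeping of changing the hidden-layer width during a single training run; the paper's contribution lies in the L_p-penalized objective over all network sizes, its soundness proof, and the AdaRad update rule, none of which are reproduced here.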
Cite
Text
Philipp and Carbonell. "Nonparametric Neural Networks." International Conference on Learning Representations, 2017.
Markdown
[Philipp and Carbonell. "Nonparametric Neural Networks." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/philipp2017iclr-nonparametric/)
BibTeX
@inproceedings{philipp2017iclr-nonparametric,
title = {{Nonparametric Neural Networks}},
author = {Philipp, George and Carbonell, Jaime G.},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/philipp2017iclr-nonparametric/}
}