An Initial Alignment Between Neural Network and Target Is Needed for Gradient Descent to Learn
Abstract
This paper introduces the notion of “Initial Alignment” (INAL) between a neural network at initialization and a target function. It is proved that if a network and a Boolean target function do not have a noticeable INAL, then noisy gradient descent with normalized i.i.d. initialization will not learn in polynomial time. Thus a certain amount of knowledge about the target (as measured by the INAL) is needed in the architecture design. This also answers an open problem posed in (AS-NeurIPS’20). The results are based on deriving lower bounds for descent algorithms on symmetric neural networks without explicit knowledge of the target function beyond its INAL.
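Informally, the INAL captures how correlated the network's output already is with the target at initialization. The following sketch paraphrases this notion; the notation (the expected squared correlation over the random initialization) is our informal reading, not necessarily the paper's exact definition:

```latex
% Informal sketch (notation assumed): for a Boolean target
% f : \{\pm 1\}^n \to \{\pm 1\} and a network N_{\theta} with
% i.i.d. initialization \theta^0, the initial alignment is roughly
\mathrm{INAL}(N, f) \;\approx\;
  \mathbb{E}_{\theta^0}\!\left[\, \mathbb{E}_{x}\!\left[ N_{\theta^0}(x)\, f(x) \right]^{2} \right].
% The lower bound then says: if this quantity is negligible in n,
% noisy gradient descent cannot learn f in polynomial time.
```

Under this reading, a "noticeable" INAL means the quantity above is at least inverse-polynomial in the input dimension.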
Cite
Text
Abbe et al. "An Initial Alignment Between Neural Network and Target Is Needed for Gradient Descent to Learn." International Conference on Machine Learning, 2022.

Markdown

[Abbe et al. "An Initial Alignment Between Neural Network and Target Is Needed for Gradient Descent to Learn." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/abbe2022icml-initial/)

BibTeX
@inproceedings{abbe2022icml-initial,
title = {{An Initial Alignment Between Neural Network and Target Is Needed for Gradient Descent to Learn}},
author = {Abbe, Emmanuel and Cornacchia, Elisabetta and Hazla, Jan and Marquis, Christopher},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {33-52},
volume = {162},
url = {https://mlanthology.org/icml/2022/abbe2022icml-initial/}
}