Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Abstract
Sparse Neural Networks (NNs) can match the generalization of dense NNs while using only a fraction of the compute and storage for inference, and they have the potential to enable efficient training as well. However, naively training unstructured sparse NNs from random initialization results in significantly worse generalization, with the notable exceptions of Lottery Tickets (LTs) and Dynamic Sparse Training (DST). In this work, we attempt to answer two questions: (1) why does training unstructured sparse networks from random initialization perform poorly, and (2) what makes LTs and DST the exceptions? We show that sparse NNs have poor gradient flow at initialization and propose a modified initialization for unstructured connectivity. Furthermore, we find that DST methods significantly improve gradient flow during training over traditional sparse training methods. Finally, we show that LTs do not improve gradient flow; rather, their success lies in re-learning the pruning solution they are derived from. However, this comes at the cost of learning novel solutions.
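The "modified initialization for unstructured connectivity" mentioned above amounts to making the initial weight scale respect each unit's actual sparse fan-in rather than the dense layer width. The sketch below is a minimal NumPy illustration of that idea, assuming a He-style scheme; the function name `sparse_he_init` and the exact formulation are this example's assumptions, not the authors' released code.

```python
import numpy as np

def sparse_he_init(mask, rng=None):
    """He-style initialization adapted to unstructured sparsity (a sketch).

    Instead of scaling every weight by the dense fan-in, each output unit
    is scaled by its actual number of nonzero incoming connections, so
    pre-activation variance is roughly preserved despite pruning.

    mask: binary array of shape (fan_out, fan_in); 1 = connection kept.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Per-unit fan-in: count of nonzero incoming connections for each
    # output unit (clipped to 1 to avoid division by zero for dead units).
    unit_fan_in = mask.sum(axis=1, keepdims=True).clip(min=1)
    std = np.sqrt(2.0 / unit_fan_in)  # He scaling with the sparse fan-in
    weights = rng.normal(0.0, 1.0, size=mask.shape) * std
    return weights * mask  # zero out the pruned connections

# Example usage: a ~90%-sparse 784 -> 256 layer.
rng = np.random.default_rng(0)
mask = (rng.random((256, 784)) < 0.1).astype(np.float64)
W = sparse_he_init(mask, rng)
```

With a standard dense He initialization, a heavily pruned unit's pre-activations shrink toward zero and gradients vanish with them; rescaling by the per-unit nonzero fan-in is one way to counteract that at initialization.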
Cite
Text
Evci et al. "Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I6.20611
Markdown
[Evci et al. "Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/evci2022aaai-gradient/) doi:10.1609/AAAI.V36I6.20611
BibTeX
@inproceedings{evci2022aaai-gradient,
title = {{Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win}},
author = {Evci, Utku and Ioannou, Yani and Keskin, Cem and Dauphin, Yann N.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {6577--6586},
doi = {10.1609/AAAI.V36I6.20611},
url = {https://mlanthology.org/aaai/2022/evci2022aaai-gradient/}
}