Go Wide, Then Narrow: Efficient Training of Deep Thin Networks

ICML 2020 pp. 11546-11555

Abstract

A deep learning model deployed in production needs to be both accurate and compact to meet latency and memory constraints. This usually results in a network that is deep (to ensure performance) and yet thin (to improve computational efficiency). In this paper, we propose an efficient method to train a deep thin network with a theoretical guarantee. Our method is motivated by model compression. It consists of three stages. First, we sufficiently widen the deep thin network and train it until convergence. Then, we use this well-trained deep wide network to warm up (or initialize) the original deep thin network. This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network layer by layer. Finally, we fine-tune this already well-initialized deep thin network. The theoretical guarantee is established via neural mean field analysis, which demonstrates the advantage of our layerwise imitation approach over backpropagation. We also conduct large-scale empirical experiments to validate the proposed method. Trained with our method, ResNet50 can outperform ResNet101, and BERT base can be comparable to BERT large, when ResNet101 and BERT large are trained under the standard training procedures from the literature.
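The sketch below illustrates the warm-up stage (stage two) in PyTorch. It is a minimal sketch, not the paper's implementation: the networks are plain MLPs, all names (`make_net`, `imitation_step`, `proj`) are hypothetical, the per-layer linear projections that align the thin and wide widths are an assumption, and the per-layer losses are summed and optimized jointly rather than matched with the paper's exact layer-by-layer procedure.

```python
# Minimal sketch of layerwise imitation (stage two of the method).
# Assumptions: MLP teacher/student, hypothetical linear projections to
# align widths, and a joint sum of per-layer MSE losses.
import torch
import torch.nn as nn

DEPTH, WIDE, THIN, DIM_IN = 4, 512, 128, 32

def make_net(width):
    # A stack of DEPTH hidden layers whose activations we read layer by layer.
    layers = [nn.Sequential(nn.Linear(DIM_IN, width), nn.ReLU())]
    layers += [nn.Sequential(nn.Linear(width, width), nn.ReLU())
               for _ in range(DEPTH - 1)]
    return nn.ModuleList(layers)

wide = make_net(WIDE)   # Stage 1: train this wide network to convergence (not shown).
thin = make_net(THIN)   # Stage 2: warm up this thin network by layerwise imitation.

# Hypothetical per-layer projections so thin activations (width THIN) can be
# compared against wide activations (width WIDE).
proj = nn.ModuleList(nn.Linear(THIN, WIDE) for _ in range(DEPTH))
opt = torch.optim.Adam(list(thin.parameters()) + list(proj.parameters()), lr=1e-3)

def imitation_step(x):
    """One warm-up step: each thin layer mimics the matching wide layer's output."""
    loss = x.new_zeros(())
    h_wide, h_thin = x, x
    for layer_wide, layer_thin, p in zip(wide, thin, proj):
        with torch.no_grad():          # The wide teacher is frozen.
            h_wide = layer_wide(h_wide)
        h_thin = layer_thin(h_thin)
        loss = loss + nn.functional.mse_loss(p(h_thin), h_wide)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 3 (not shown): fine-tune the warmed-up thin network on the task loss.
x = torch.randn(64, DIM_IN)
print(imitation_step(x))
```

After this warm-up, the thin network is fine-tuned on the original training objective; the point of the imitation stage is only to supply a good initialization.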

Cite

Text

Zhou et al. "Go Wide, Then Narrow: Efficient Training of Deep Thin Networks." International Conference on Machine Learning, 2020.

Markdown

[Zhou et al. "Go Wide, Then Narrow: Efficient Training of Deep Thin Networks." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/zhou2020icml-go/)

BibTeX

@inproceedings{zhou2020icml-go,
  title     = {{Go Wide, Then Narrow: Efficient Training of Deep Thin Networks}},
  author    = {Zhou, Denny and Ye, Mao and Chen, Chen and Meng, Tianjian and Tan, Mingxing and Song, Xiaodan and Le, Quoc and Liu, Qiang and Schuurmans, Dale},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {11546--11555},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/zhou2020icml-go/}
}