Orthogonalized SGD and Nested Architectures for Anytime Neural Networks
Abstract
We propose a novel variant of SGD customized for training network architectures that support anytime behavior: such networks produce a series of increasingly accurate outputs over time. Efficient architectural designs for these networks focus on re-using internal state; subnetworks must produce representations relevant for both immediate prediction as well as refinement by subsequent network stages. We consider traditional branched networks as well as a new class of recursively nested networks. Our new optimizer, Orthogonalized SGD, dynamically re-balances task-specific gradients when training a multitask network. In the context of anytime architectures, this optimizer projects gradients from later outputs onto a parameter subspace that does not interfere with those from earlier outputs. Experiments demonstrate that training with Orthogonalized SGD significantly improves generalization accuracy of anytime networks.
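The projection step described above can be illustrated with a minimal sketch, assuming flattened per-output gradient vectors and plain SGD; the function names `orthogonalize` and `combined_update` and the sequential Gram–Schmidt-style projection against earlier gradients are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code) of the gradient-projection idea: the gradient
# from a later anytime output is projected onto the subspace orthogonal to the
# gradients from earlier outputs before the combined SGD update is applied.
import numpy as np

def orthogonalize(g_later, g_earlier, eps=1e-12):
    """Remove from g_later its component along g_earlier."""
    denom = np.dot(g_earlier, g_earlier) + eps
    return g_later - (np.dot(g_later, g_earlier) / denom) * g_earlier

def combined_update(params, grads_per_output, lr=0.1):
    """One SGD step: earlier-output gradients are kept unchanged, and each
    later-output gradient is projected away from all earlier ones."""
    accumulated = []
    total = np.zeros_like(params)
    for g in grads_per_output:          # ordered earliest -> latest output
        for g_prev in accumulated:      # project against each earlier gradient
            g = orthogonalize(g, g_prev)
        accumulated.append(g)
        total += g
    return params - lr * total

# Toy usage with two anytime outputs sharing one flat parameter vector.
params = np.zeros(4)
g_early = np.array([1.0, 0.0, 1.0, 0.0])   # gradient from the early exit
g_late  = np.array([1.0, 1.0, 0.0, 0.0])   # gradient from the later exit
params = combined_update(params, [g_early, g_late])
```

Under these assumptions, the update along directions already claimed by earlier outputs comes only from those earlier outputs, so later-stage training does not interfere with earlier predictions.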
Cite
Text
Wan et al. "Orthogonalized SGD and Nested Architectures for Anytime Neural Networks." International Conference on Machine Learning, 2020.
Markdown
[Wan et al. "Orthogonalized SGD and Nested Architectures for Anytime Neural Networks." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/wan2020icml-orthogonalized/)
BibTeX
@inproceedings{wan2020icml-orthogonalized,
title = {{Orthogonalized SGD and Nested Architectures for Anytime Neural Networks}},
author = {Wan, Chengcheng and Hoffmann, Henry and Lu, Shan and Maire, Michael},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {9807--9817},
volume = {119},
url = {https://mlanthology.org/icml/2020/wan2020icml-orthogonalized/}
}