Masked Training of Neural Networks with Partial Gradients
Abstract
State-of-the-art training algorithms for deep learning models are based on stochastic gradient descent (SGD). Recently, many variations have been explored: perturbing parameters for better accuracy (such as in Extragradient), limiting SGD updates to a subset of parameters for increased efficiency (such as in meProp), or a combination of both (such as in Dropout). However, the convergence of these methods is often not studied in theory. We propose a unified theoretical framework to study such SGD variants, encompassing the aforementioned algorithms as well as a broad variety of methods used for communication-efficient training or model compression. Our insights can serve as a guide to improve the efficiency of such methods and facilitate generalization to new applications. As an example, we tackle the task of jointly training networks, a version of which (limited to sub-networks) is used to create Slimmable Networks. By training a low-rank Transformer jointly with a standard one, we obtain superior performance compared to training it separately.
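The abstract refers to SGD variants that restrict each update to a subset of parameters. As a rough illustrative sketch only (not the paper's algorithm; the masking rule here is a random binary mask chosen purely for demonstration, and the function name is hypothetical), the following PyTorch snippet applies a plain SGD step through such a mask:

```python
import torch

def masked_sgd_step(params, masks, lr=0.1):
    """Apply an SGD update only to the parameter entries selected by each mask."""
    with torch.no_grad():
        for p, m in zip(params, masks):
            if p.grad is not None:
                p -= lr * p.grad * m  # entries where m == 0 are left unchanged

# Toy usage: update only a random half of the entries of a linear layer.
torch.manual_seed(0)
layer = torch.nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randn(8, 2)

loss = torch.nn.functional.mse_loss(layer(x), y)
loss.backward()

masks = [(torch.rand_like(p) < 0.5).float() for p in layer.parameters()]
masked_sgd_step(list(layer.parameters()), masks)
```

The paper's contribution is the convergence analysis of this general family of masked or partial-gradient updates, not any particular mask choice.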
Cite
Text
Mohtashami et al. "Masked Training of Neural Networks with Partial Gradients." Artificial Intelligence and Statistics, 2022.
Markdown
[Mohtashami et al. "Masked Training of Neural Networks with Partial Gradients." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/mohtashami2022aistats-masked/)
BibTeX
@inproceedings{mohtashami2022aistats-masked,
title = {{Masked Training of Neural Networks with Partial Gradients}},
author = {Mohtashami, Amirkeivan and Jaggi, Martin and Stich, Sebastian},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {5876--5890},
volume = {151},
url = {https://mlanthology.org/aistats/2022/mohtashami2022aistats-masked/}
}