No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models

Abstract

Recent research has shown the existence of significant redundancy in large Transformer models. One can prune the redundant parameters without significantly sacrificing the generalization performance. However, we question whether the redundant parameters could have contributed more if they were properly trained. To answer this question, we propose a novel training strategy that encourages all parameters to be trained sufficiently. Specifically, we adaptively adjust the learning rate for each parameter according to its sensitivity, a robust gradient-based measure reflecting this parameter's contribution to the model performance. A parameter with low sensitivity is redundant, and we improve its fitting by increasing its learning rate. In contrast, a parameter with high sensitivity is well-trained, and we regularize it by decreasing its learning rate to prevent further overfitting. We conduct extensive experiments on natural language understanding, neural machine translation, and image classification to demonstrate the effectiveness of the proposed schedule. Analysis shows that the proposed schedule indeed reduces the redundancy and improves generalization performance.
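The abstract describes the idea only at a high level. Below is a minimal, self-contained sketch of what a sensitivity-guided per-parameter learning rate could look like in PyTorch. It is an illustration under stated assumptions, not the paper's actual method: the sensitivity proxy |theta * grad| (a first-order estimate of the loss change if the parameter were zeroed), the exponential-moving-average smoothing, the relative scaling rule, and the class name SensitivityScaledSGD are all assumptions introduced here for exposition.

  # Sketch: per-parameter learning-rate scaling driven by a gradient-based
  # sensitivity proxy. Assumptions (not from the paper): sensitivity is
  # approximated by |theta * grad|, smoothed with an EMA, and the step size
  # is scaled inversely to the parameter's relative sensitivity.
  import torch

  class SensitivityScaledSGD(torch.optim.Optimizer):
      def __init__(self, params, lr=1e-3, beta=0.9, eps=1e-12):
          defaults = dict(lr=lr, beta=beta, eps=eps)
          super().__init__(params, defaults)

      @torch.no_grad()
      def step(self):
          for group in self.param_groups:
              lr, beta, eps = group["lr"], group["beta"], group["eps"]
              for p in group["params"]:
                  if p.grad is None:
                      continue
                  state = self.state[p]
                  if "ema_sens" not in state:
                      state["ema_sens"] = torch.zeros_like(p)
                  # First-order sensitivity proxy: estimated loss change if
                  # this parameter were set to zero (illustrative assumption).
                  sens = (p * p.grad).abs()
                  ema = state["ema_sens"]
                  ema.mul_(beta).add_(sens, alpha=1.0 - beta)
                  # Low-sensitivity entries get a larger step, high-sensitivity
                  # entries a smaller one, relative to the tensor's mean.
                  scale = ema.mean() / (ema + eps)
                  scale.clamp_(0.5, 2.0)  # keep the adjustment mild
                  p.add_(p.grad * scale, alpha=-lr)

The optimizer can be dropped into an ordinary training loop in place of SGD. The actual schedule proposed in the paper may differ in how sensitivity is estimated, smoothed, and mapped to a learning rate; consult the paper for the exact formulation.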

Cite

Text

Liang et al. "No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models." International Conference on Learning Representations, 2022.

Markdown

[Liang et al. "No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/liang2022iclr-parameters/)

BibTeX

@inproceedings{liang2022iclr-parameters,
  title     = {{No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models}},
  author    = {Liang, Chen and Jiang, Haoming and Zuo, Simiao and He, Pengcheng and Liu, Xiaodong and Gao, Jianfeng and Chen, Weizhu and Zhao, Tuo},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/liang2022iclr-parameters/}
}