Re-Parameterizing Your Optimizers Rather than Architectures
Abstract
The well-designed structures in neural networks reflect the prior knowledge incorporated into the models. However, though different models embody different priors, we are accustomed to training them all with model-agnostic optimizers such as SGD. In this paper, we propose to incorporate model-specific prior knowledge into the optimizer instead, by modifying the gradients according to a set of model-specific hyper-parameters. We refer to this methodology as Gradient Re-parameterization and to the resulting optimizers as RepOptimizers. For extreme structural simplicity, we focus on a VGG-style plain model and show that such a simple model trained with a RepOptimizer, which we call RepOpt-VGG, performs on par with or better than recent well-designed models. From a practical perspective, RepOpt-VGG is a favorable base model because of its simple structure, high inference speed, and training efficiency. Compared to Structural Re-parameterization, which adds priors into models by constructing extra training-time structures, RepOptimizers require no extra forward/backward computation and solve the quantization difficulties that such extra structures introduce. We hope this work sparks further research beyond the realm of model structure design. Code and models are available at https://github.com/DingXiaoH/RepOptimizers.
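The core idea of the abstract lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation (see the repository above for that): a PyTorch SGD subclass, here given the hypothetical name RepSGD, that multiplies each gradient by a fixed, model-specific scale before the standard update. The constructor signature and the scale-selection scheme are assumptions for illustration only; the paper derives the actual scales from the structural prior being replaced.

```python
import torch


class RepSGD(torch.optim.SGD):
    """Toy stand-in for a RepOptimizer: plain SGD whose gradients are
    rescaled by fixed, model-specific constants before each update.
    Hypothetical class and argument names, not the authors' API."""

    def __init__(self, params_with_scales, lr, momentum=0.9):
        # params_with_scales: iterable of (parameter, per-element scale) pairs.
        params_with_scales = list(params_with_scales)
        super().__init__(
            [p for p, _ in params_with_scales], lr=lr, momentum=momentum
        )
        self._params = [p for p, _ in params_with_scales]
        self._scales = {id(p): s for p, s in params_with_scales}

    def step(self, closure=None):
        # Gradient Re-parameterization: modify the gradients in place,
        # then run an ordinary SGD step on the rescaled gradients.
        for p in self._params:
            if p.grad is not None:
                p.grad.mul_(self._scales[id(p)])
        return super().step(closure)


# Usage sketch: pair every weight with a constant scale (here an
# arbitrary 1.5, purely for demonstration) and train as usual.
model = torch.nn.Linear(8, 8)
scales = [(p, torch.full_like(p, 1.5)) for p in model.parameters()]
opt = RepSGD(scales, lr=0.01)
```

The point of the sketch is that the model itself stays a plain feed-forward network; all of the prior knowledge lives in the gradient scales, so the forward/backward pass costs nothing extra at training time.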
Cite
Text
Ding et al. "Re-Parameterizing Your Optimizers Rather than Architectures." International Conference on Learning Representations, 2023.

Markdown
[Ding et al. "Re-Parameterizing Your Optimizers Rather than Architectures." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/ding2023iclr-reparameterizing/)

BibTeX
@inproceedings{ding2023iclr-reparameterizing,
  title = {{Re-Parameterizing Your Optimizers Rather than Architectures}},
  author = {Ding, Xiaohan and Chen, Honghao and Zhang, Xiangyu and Huang, Kaiqi and Han, Jungong and Ding, Guiguang},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/ding2023iclr-reparameterizing/}
}