Optimizer Amalgamation

Abstract

Selecting an appropriate optimizer for a given problem is of major interest to researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none offers a universal advantage over its competitors. We are thus motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance? In this paper, we draw inspiration from the field of "learning to optimize" and use a learnable amalgamation target. First, we define three differentiable amalgamation mechanisms that amalgamate a pool of analytical optimizers by gradient descent. Then, to reduce the variance of the amalgamation process, we explore methods that stabilize it by perturbing the amalgamation target. Finally, we present experiments showing the superiority of our amalgamated optimizer over its amalgamated components and learning-to-optimize baselines, as well as the efficacy of our variance-reducing perturbations.
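To make the idea concrete, the simplest amalgamation mechanism one could imagine is a learned softmax-weighted mixture of the teachers' proposed updates, meta-trained through an unrolled optimization trajectory. The sketch below is purely illustrative and not the paper's actual method: the teacher pool, the mixture mechanism, the toy quadratic objective, and the finite-difference meta-gradient are all assumptions made for a minimal, self-contained demo.

```python
import numpy as np

# Hypothetical "teacher" analytical optimizers (illustrative choices, not the
# paper's pool): each maps (gradient, state) -> (update, new state).
def sgd(g, state, lr=0.1):
    return -lr * g, state

def momentum(g, state, lr=0.1, beta=0.9):
    v = beta * state.get("v", 0.0) + g
    return -lr * v, {"v": v}

def rmsprop(g, state, lr=0.1, beta=0.99, eps=1e-8):
    s = beta * state.get("s", 0.0) + (1 - beta) * g ** 2
    return -lr * g / (np.sqrt(s) + eps), {"s": s}

TEACHERS = [sgd, momentum, rmsprop]

def amalgamated_step(w, g, states):
    """Student update: softmax-weighted mixture of the teachers' updates."""
    p = np.exp(w) / np.exp(w).sum()
    updates, new_states = [], []
    for teacher, st in zip(TEACHERS, states):
        u, st2 = teacher(g, st)
        updates.append(u)
        new_states.append(st2)
    return sum(pi * u for pi, u in zip(p, updates)), new_states

def unrolled_loss(w, x0=5.0, steps=20):
    """Loss of a toy quadratic f(x) = x^2 after `steps` amalgamated updates."""
    x, states = x0, [{} for _ in TEACHERS]
    for _ in range(steps):
        g = 2.0 * x
        u, states = amalgamated_step(w, g, states)
        x = x + u
    return x ** 2

# "Meta-train" the mixture weights by gradient descent on the unrolled loss,
# using central finite differences in place of backprop through the unroll.
w = np.zeros(len(TEACHERS))
for _ in range(50):
    grad = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = 1e-4
        grad[i] = (unrolled_loss(w + e) - unrolled_loss(w - e)) / 2e-4
    w -= 1.0 * grad

print("mixture weights:", np.round(np.exp(w) / np.exp(w).sum(), 3))
print("final unrolled loss:", unrolled_loss(w))
```

In the paper's setting the amalgamation target is a learnable (recurrent) optimizer rather than a bare weight vector, and the meta-gradient flows through the unrolled trajectory analytically; this toy keeps only the core structure of pooling teacher updates and training the combination end to end.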

Cite

Text

Huang et al. "Optimizer Amalgamation." International Conference on Learning Representations, 2022.

Markdown

[Huang et al. "Optimizer Amalgamation." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/huang2022iclr-optimizer/)

BibTeX

@inproceedings{huang2022iclr-optimizer,
  title     = {{Optimizer Amalgamation}},
  author    = {Huang, Tianshu and Chen, Tianlong and Liu, Sijia and Chang, Shiyu and Amini, Lisa and Wang, Zhangyang},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/huang2022iclr-optimizer/}
}