AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method

Abstract

We propose a computationally-friendly adaptive learning rate schedule, "AdaLoss", which directly uses the information of the loss function to adjust the stepsize in gradient descent methods. We prove that this schedule enjoys linear convergence in linear regression. Moreover, we extend the analysis to the non-convex regime, in the context of two-layer over-parameterized neural networks. If the width is sufficiently large (polynomially), then AdaLoss converges robustly to the global minimum in polynomial time. We numerically verify the theoretical results and extend the scope of the numerical experiments by considering applications in LSTM models for text classification and policy gradients for control problems.
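The abstract does not spell out the update rule. As a rough, hypothetical sketch of a loss-driven stepsize (the paper's exact update may differ), the snippet below runs gradient descent on a linear-regression problem with a stepsize eta / b_t, where the accumulator b_t grows with the observed loss values rather than with gradient norms; all constants and the initialization are illustrative assumptions.

```python
import numpy as np

# Illustrative AdaLoss-style loop (assumed form, not the paper's exact algorithm):
# the effective stepsize eta / b shrinks as long as the loss remains large,
# because b accumulates the loss values seen so far.
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star                      # noiseless linear-regression targets

w = np.zeros(d)
eta, b = 1.0, 1.0                   # base stepsize and accumulator (assumed init)
for t in range(2000):
    residual = X @ w - y
    loss = 0.5 * np.mean(residual ** 2)
    grad = X.T @ residual / n
    b = np.sqrt(b ** 2 + loss)      # accumulate the loss, not gradient norms
    w -= (eta / b) * grad           # loss-adaptive stepsize
print(f"final loss: {0.5 * np.mean((X @ w - y) ** 2):.3e}")
```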

Cite

Text

Wu et al. "AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I8.20848

Markdown

[Wu et al. "AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/wu2022aaai-adaloss/) doi:10.1609/AAAI.V36I8.20848

BibTeX

@inproceedings{wu2022aaai-adaloss,
  title     = {{AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method}},
  author    = {Wu, Xiaoxia and Xie, Yuege and Du, Simon Shaolei and Ward, Rachel A.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {8691--8699},
  doi       = {10.1609/AAAI.V36I8.20848},
  url       = {https://mlanthology.org/aaai/2022/wu2022aaai-adaloss/}
}