The Implicit Bias of Adam on Separable Data

Abstract

Adam has become one of the most popular optimizers in deep learning. Despite its practical success, its theoretical understanding remains limited. In this paper, we study the implicit bias of Adam in linear logistic regression. Specifically, we show that when the training data are linearly separable, the iterates of Adam converge in direction to the linear classifier that achieves the maximum $\ell_\infty$-margin. Notably, for a general class of diminishing learning rates, this convergence occurs within polynomial time. Our results shed light on the difference between Adam and (stochastic) gradient descent from a theoretical perspective.

Cite

Text

Zhang et al. "The Implicit Bias of Adam on Separable Data." Neural Information Processing Systems, 2024. doi:10.52202/079017-0756

Markdown

[Zhang et al. "The Implicit Bias of Adam on Separable Data." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhang2024neurips-implicit/) doi:10.52202/079017-0756

BibTeX

@inproceedings{zhang2024neurips-implicit,
  title     = {{The Implicit Bias of Adam on Separable Data}},
  author    = {Zhang, Chenyang and Zou, Difan and Cao, Yuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0756},
  url       = {https://mlanthology.org/neurips/2024/zhang2024neurips-implicit/}
}