An SDE for Modeling SAM: Theory and Insights
Abstract
We study the SAM (Sharpness-Aware Minimization) optimizer, which has recently attracted a lot of interest due to its improved performance over more classical variants of stochastic gradient descent. Our main contribution is the derivation of continuous-time models (in the form of SDEs) for SAM and two of its variants, covering both the full-batch and mini-batch settings. We demonstrate that these SDEs are rigorous approximations of the real discrete-time algorithms (in a weak sense, with error scaling linearly in the learning rate). Using these models, we then explain why SAM prefers flat minima over sharp ones: it minimizes an implicitly regularized loss with a Hessian-dependent noise structure. Finally, we prove that SAM is attracted to saddle points under some realistic conditions. Our theoretical results are supported by detailed experiments.
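For concreteness, below is a minimal sketch of one full-batch SAM update (Foret et al., 2021), the discrete-time algorithm whose continuous-time limit the paper models with an SDE. The function names (`sam_step`, `loss_grad`), the toy quadratic loss, and the small constant guarding the gradient norm are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    """One step of (full-batch) SAM.

    w         -- current parameter vector (np.ndarray)
    loss_grad -- callable returning the loss gradient at a point
    lr        -- learning rate; the weak SDE approximation error
                 discussed in the abstract scales linearly in lr
    rho       -- radius of the inner ascent step
    """
    g = loss_grad(w)
    # Inner ascent step: perturb the weights along the normalized
    # gradient direction (1e-12 is an illustrative guard against
    # division by zero).
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Outer descent step: apply the gradient evaluated at the
    # perturbed point w + eps.
    return w - lr * loss_grad(w + eps)


# Hypothetical usage on a toy quadratic loss L(w) = 0.5 * w^T A w,
# whose gradient is A @ w:
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w
w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w, grad)
```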
Cite

Text:
Monzio Compagnoni et al. "An SDE for Modeling SAM: Theory and Insights." International Conference on Machine Learning, 2023.

Markdown:
[Monzio Compagnoni et al. "An SDE for Modeling SAM: Theory and Insights." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/monziocompagnoni2023icml-sde/)

BibTeX:
@inproceedings{monziocompagnoni2023icml-sde,
  title     = {{An SDE for Modeling SAM: Theory and Insights}},
  author    = {Monzio Compagnoni, Enea and Biggio, Luca and Orvieto, Antonio and Proske, Frank Norbert and Kersting, Hans and Lucchi, Aurelien},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {25209--25253},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/monziocompagnoni2023icml-sde/}
}