SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
Abstract
Methods for improving deep neural network training time and model generalizability include various data augmentation, regularization, and optimization approaches, which tend to be sensitive to hyperparameter settings and complicate reproducibility. This work jointly considers two recent training strategies that address model generalizability, sharpness-aware minimization and self-distillation, and proposes the novel training strategy of Sharpness-Aware Distilled Teachers (SADT). The experimental section shows that SADT consistently outperforms previously published training strategies in model convergence time, test-time performance, and model generalizability across various neural architectures, datasets, and hyperparameter settings.
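The abstract combines two known ingredients. As a rough illustration of how they can interact, the sketch below applies a standard sharpness-aware minimization (SAM) step (ascend to a worst-case perturbation inside an L2 ball of radius rho, then descend using the gradient taken there) to a toy objective that also includes a self-distillation term pulling the student toward a frozen teacher snapshot. The loss, the snapshot-refresh schedule, and all constants here are illustrative assumptions, not the SADT method from the paper.

```python
import numpy as np

ALPHA = 0.5  # assumed weight of the toy self-distillation term

def grad(w, teacher_w=None):
    # Gradient of a toy quadratic loss ||w - 3||^2, optionally plus a
    # distillation term ALPHA * ||w - teacher_w||^2 toward a frozen teacher.
    g = 2.0 * (w - 3.0)
    if teacher_w is not None:
        g = g + ALPHA * 2.0 * (w - teacher_w)
    return g

def sam_step(w, lr=0.1, rho=0.05, teacher_w=None):
    # SAM (Foret et al., 2021), simplified:
    # 1) move to the approximate worst-case point inside an L2 ball of
    #    radius rho (first-order approximation: step along the gradient),
    # 2) evaluate the gradient at that perturbed point,
    # 3) apply that gradient at the original weights.
    g = grad(w, teacher_w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_adv = grad(w + eps, teacher_w)
    return w - lr * g_adv

w = np.array([0.0, 10.0])
teacher = w.copy()  # frozen snapshot acting as a self-distillation teacher
for step in range(200):
    w = sam_step(w, teacher_w=teacher)
    if step % 50 == 0:
        teacher = w.copy()  # periodically refresh the teacher snapshot
print(w)  # both coordinates approach the minimizer at 3.0
```

Because the toy loss is a simple quadratic, the student converges close to the minimizer at 3.0 despite the perturbed-gradient updates; the point of the sketch is only the control flow (perturb, re-evaluate gradient, descend, refresh teacher), not any quantitative claim from the paper.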
Cite
Text
Fahim and Boutellier. "SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization." NeurIPS 2022 Workshops: HITY, 2022.
Markdown
[Fahim and Boutellier. "SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization." NeurIPS 2022 Workshops: HITY, 2022.](https://mlanthology.org/neuripsw/2022/fahim2022neuripsw-sadt/)
BibTeX
@inproceedings{fahim2022neuripsw-sadt,
title = {{SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization}},
author = {Fahim, Masud An-Nur Islam and Boutellier, Jani},
booktitle = {NeurIPS 2022 Workshops: HITY},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/fahim2022neuripsw-sadt/}
}