Understanding the Gains from Repeated Self-Distillation

Abstract

Self-distillation is a special type of knowledge distillation where the student model has the same architecture as the teacher model. Despite using the same architecture and the same training data, self-distillation has been empirically observed to improve performance, especially when applied repeatedly. For such a process, there is a fundamental question of interest: how much gain is possible by applying multiple steps of self-distillation? To investigate this relative gain, we propose using the simple but canonical task of linear regression. Our analysis shows that the excess risk achieved by multi-step self-distillation can significantly improve upon that of a single step of self-distillation, reducing the excess risk by a factor of $d$, where $d$ is the input dimension. Empirical results on regression tasks from the UCI repository show a reduction of up to $47\%$ in the learnt model's risk (MSE).
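
To make the setup concrete, below is a minimal sketch of repeated self-distillation for ridge regression, under the common formulation in which each student is refit on a convex combination of the original labels and the current model's predictions. The mixing weight `xi`, the regularization strength `lam`, the number of distillation steps, and the synthetic data are illustrative assumptions, not the paper's exact parameterization or experimental setup.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def self_distill(X, y, lam, xi, num_steps):
    """Repeated self-distillation with the same architecture (a linear model):
    at each step, refit ridge regression on a convex combination of the
    original labels and the current model's predictions on the training inputs."""
    w = ridge_fit(X, y, lam)  # step 0: ordinary ridge "teacher"
    for _ in range(num_steps):
        soft_targets = xi * y + (1.0 - xi) * (X @ w)
        w = ridge_fit(X, soft_targets, lam)
    return w

# Toy usage on synthetic data (hypothetical parameter values).
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.5 * rng.standard_normal(n)

w_sd = self_distill(X, y, lam=1.0, xi=0.5, num_steps=3)
print("parameter error (proxy for excess risk):", np.linalg.norm(w_sd - w_star) ** 2)
```

In this sketch, each distillation step is a linear map on the current parameter vector, which is what makes the multi-step process analytically tractable in the linear-regression setting the abstract refers to.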

Cite

Text

Pareek et al. "Understanding the Gains from Repeated Self-Distillation." Neural Information Processing Systems, 2024. doi:10.52202/079017-0249

Markdown

[Pareek et al. "Understanding the Gains from Repeated Self-Distillation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/pareek2024neurips-understanding/) doi:10.52202/079017-0249

BibTeX

@inproceedings{pareek2024neurips-understanding,
  title     = {{Understanding the Gains from Repeated Self-Distillation}},
  author    = {Pareek, Divyansh and Du, Simon S. and Oh, Sewoong},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0249},
  url       = {https://mlanthology.org/neurips/2024/pareek2024neurips-understanding/}
}