Robust Active Distillation

Abstract

Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting, where only a limited amount of labeled data is available. In large-scale applications, however, the teacher tends to provide a large number of incorrect soft-labels that impair student performance. The sheer size of the teacher additionally constrains the number of soft-labels that can be queried due to prohibitive computational and/or financial costs. The difficulty of simultaneously achieving efficiency (i.e., minimizing soft-label queries) and robustness (i.e., avoiding student inaccuracies due to incorrect labels) hinders the widespread application of knowledge distillation to many modern tasks. In this paper, we present a parameter-free approach with provable guarantees for querying the soft-labels of points that are simultaneously informative and correctly labeled by the teacher. At the core of our work lies a game-theoretic formulation that explicitly considers the inherent trade-off between the informativeness and correctness of input instances. We establish bounds on the expected performance of our approach that hold even in worst-case distillation instances. We present empirical evaluations on popular benchmarks demonstrating the improved distillation performance enabled by our work relative to that of state-of-the-art active learning and active distillation methods.
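
To make the informativeness/correctness trade-off described above concrete, here is a minimal illustrative sketch of an active soft-label query step. It is not the paper's algorithm: the scoring functions, the multiplicative combination of the two criteria, the softmax sampling with a `temperature` parameter, and the `select_query_indices` helper are all assumptions introduced here purely to show how one might balance an informativeness score against an estimated probability that the teacher's soft-label is correct under a fixed query budget.

    import numpy as np

    def select_query_indices(informativeness, correctness, budget, temperature=1.0):
        """Pick a soft-label query set that balances informativeness against
        estimated teacher correctness (illustrative, not the paper's method).

        informativeness : per-example score, e.g. the student's predictive entropy.
        correctness     : per-example estimate of how likely the teacher's
                          soft-label is to be correct.
        budget          : number of soft-label queries allowed.
        """
        informativeness = np.asarray(informativeness, dtype=float)
        correctness = np.asarray(correctness, dtype=float)

        # Combine the two criteria: an informative point is only worth querying
        # if the teacher is also likely to label it correctly.
        utility = correctness * informativeness

        # Soften the scores into a sampling distribution instead of a hard top-k,
        # hedging against errors in either estimate.
        logits = utility / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()

        rng = np.random.default_rng(0)
        return rng.choice(len(probs), size=budget, replace=False, p=probs)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n = 1000
        informativeness = rng.random(n)        # stand-in for a student uncertainty score
        correctness = rng.beta(5, 2, size=n)   # stand-in for estimated teacher reliability
        idx = select_query_indices(informativeness, correctness, budget=50)
        print("query indices:", idx[:10])

Sampling proportionally to a combined utility, rather than greedily taking the most uncertain points, is one simple way to avoid spending the query budget on points the teacher is likely to mislabel; the actual game-theoretic formulation and guarantees are detailed in the paper itself.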

Cite

Text

Baykal et al. "Robust Active Distillation." International Conference on Learning Representations, 2023.

Markdown

[Baykal et al. "Robust Active Distillation." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/baykal2023iclr-robust/)

BibTeX

@inproceedings{baykal2023iclr-robust,
  title     = {{Robust Active Distillation}},
  author    = {Baykal, Cenk and Trinh, Khoa and Iliopoulos, Fotis and Menghani, Gaurav and Vee, Erik},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/baykal2023iclr-robust/}
}