Smoothed Embeddings for Certified Few-Shot Learning

Abstract

Randomized smoothing is considered to be the state-of-the-art provable defense against adversarial perturbations. However, it heavily exploits the fact that classifiers map input objects to class probabilities, and it does not cover models that instead learn a metric space in which classification is performed by computing distances to embeddings of class prototypes. In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings. We provide an analysis of the Lipschitz continuity of such models and derive a robustness certificate against $\ell_2$-bounded perturbations that may be useful in few-shot learning scenarios. Our theoretical results are confirmed by experiments on different datasets.
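The sketch below illustrates the general idea described in the abstract: the embedding of an input is smoothed by Monte Carlo averaging under Gaussian noise, and classification is then performed by distance to class prototypes. It is a minimal sketch under stated assumptions, not the paper's method; the names (`embed_fn`, `certified_prototype_prediction`), the user-supplied Lipschitz constant `lip`, and the margin-based radius `margin / (2 * lip)` are illustrative placeholders rather than the certificate derived in the paper.

import numpy as np

def smoothed_embedding(embed_fn, x, sigma, n_samples=100, seed=0):
    """Monte Carlo estimate of E[embed_fn(x + eps)], eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    embs = np.stack([embed_fn(x + eps) for eps in noise])  # (n_samples, d)
    return embs.mean(axis=0)

def certified_prototype_prediction(embed_fn, x, prototypes, sigma, lip, n_samples=100):
    """Classify x by its nearest class prototype and return an illustrative
    l2 radius of the form margin / (2 * lip); `lip` is an assumed Lipschitz
    constant of the smoothed embedding map, not the paper's exact bound."""
    z = smoothed_embedding(embed_fn, x, sigma, n_samples)
    dists = np.linalg.norm(prototypes - z, axis=1)   # distance to each prototype
    order = np.argsort(dists)
    pred = order[0]
    margin = dists[order[1]] - dists[order[0]]       # gap to the runner-up prototype
    radius = margin / (2.0 * lip)                    # hypothetical certificate form
    return pred, max(radius, 0.0)

# Toy usage with a random linear "encoder" followed by normalization.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(size=(8, 32))
    embed_fn = lambda x: (W @ x) / np.linalg.norm(W @ x)
    prototypes = rng.normal(size=(5, 8))
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    x = rng.normal(size=32)
    pred, radius = certified_prototype_prediction(embed_fn, x, prototypes, sigma=0.25, lip=1.0)
    print(pred, radius)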

Cite

Text

Pautov et al. "Smoothed Embeddings for Certified Few-Shot Learning." Neural Information Processing Systems, 2022.

Markdown

[Pautov et al. "Smoothed Embeddings for Certified Few-Shot Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/pautov2022neurips-smoothed/)

BibTeX

@inproceedings{pautov2022neurips-smoothed,
  title     = {{Smoothed Embeddings for Certified Few-Shot Learning}},
  author    = {Pautov, Mikhail and Kuznetsova, Olesya and Tursynbek, Nurislam and Petiushko, Aleksandr and Oseledets, Ivan},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/pautov2022neurips-smoothed/}
}