Understanding Simultaneous Train and Test Robustness

Abstract

This work concerns the study of robust learning algorithms. In practical settings, it is desirable to achieve robustness to many different types of corruptions and shifts in the data distribution, such as defending against adversarial examples, handling covariate shift, and tolerating contamination of the training data (data poisoning). While there has been extensive recent work on these topics, models and algorithms for these different notions of robustness have largely been developed in isolation. In this paper, we propose a natural notion of robustness that allows us to reason simultaneously about train-time and test-time corruptions, and that can be measured using various distance metrics (e.g., total variation distance, Wasserstein distance). We study our proposed notion in three fundamental settings spanning supervised and unsupervised learning: regression, classification, and mean estimation. In each case we design sample- and time-efficient learning algorithms with strong simultaneous train-and-test robustness guarantees. In particular, our work shows that the two seemingly different notions of robustness at train time and test time are closely related, and that this connection can be leveraged to develop algorithmic techniques applicable in both settings.
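For reference, the two distance metrics named in the abstract have the following standard definitions (these are the textbook formulations, not notation taken from the paper itself):

```latex
% Total variation distance between distributions P and Q over a domain X:
% the largest discrepancy in probability they assign to any event A.
d_{\mathrm{TV}}(P, Q) \;=\; \sup_{A \subseteq \mathcal{X}} \bigl| P(A) - Q(A) \bigr|

% Wasserstein-1 (earth mover's) distance: the cheapest expected transport
% cost over all couplings \gamma with marginals P and Q.
W_1(P, Q) \;=\; \inf_{\gamma \in \Pi(P, Q)} \; \mathbb{E}_{(x, y) \sim \gamma}\bigl[ \| x - y \| \bigr]
```

Intuitively, total variation captures corruptions that replace a fraction of probability mass outright (as in contamination models), while Wasserstein distance captures small perturbations of individual points (as in adversarial examples), which is why a robustness notion parameterized by the choice of metric can cover both regimes.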

Cite

Text

Awasthi et al. "Understanding Simultaneous Train and Test Robustness." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022.

Markdown

[Awasthi et al. "Understanding Simultaneous Train and Test Robustness." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022.](https://mlanthology.org/alt/2022/awasthi2022alt-understanding/)

BibTeX

@inproceedings{awasthi2022alt-understanding,
  title     = {{Understanding Simultaneous Train and Test Robustness}},
  author    = {Awasthi, Pranjal and Balakrishnan, Sivaraman and Vijayaraghavan, Aravindan},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  year      = {2022},
  pages     = {34--69},
  volume    = {167},
  url       = {https://mlanthology.org/alt/2022/awasthi2022alt-understanding/}
}