Distribution Learnability and Robustness

Abstract

We examine the relationship between learnability and robust learnability for the problem of distribution learning. We show that learnability implies robust learnability if the adversary can only perform additive contamination (and consequently, under Huber contamination), but not if the adversary is allowed to perform subtractive contamination. Thus, contrary to other learning settings (e.g., PAC learning of function classes), realizable learnability does not imply agnostic learnability. We also explore related implications in the context of compression schemes and differentially private learnability.
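For readers unfamiliar with the contamination models named in the abstract, here is a hedged sketch of the standard definitions from the robust statistics literature; these are not quoted from the paper, and the paper's exact formalization may differ in details. Let $p$ be the target distribution, $\varepsilon \in (0, 1/2)$ the contamination rate, and $m$ the sample size.

Huber contamination: the learner receives i.i.d. samples from the mixture $q = (1 - \varepsilon)\,p + \varepsilon\,e$, where $e$ is an arbitrary noise distribution chosen by the adversary.

Additive contamination: the adversary inspects the $m$ samples drawn i.i.d. from $p$ and may add up to $\varepsilon m$ arbitrary points. A Huber-contaminated sample can be simulated by such an adversary, which is consistent with the abstract's parenthetical that robustness under additive contamination implies robustness under Huber contamination.

Subtractive contamination: the adversary may instead remove up to $\varepsilon m$ of the drawn points. Per the abstract, learnability does not imply robust learnability against this adversary.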

Cite

Text

Ben-David et al. "Distribution Learnability and Robustness." Neural Information Processing Systems, 2023.

Markdown

[Ben-David et al. "Distribution Learnability and Robustness." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/bendavid2023neurips-distribution/)

BibTeX

@inproceedings{bendavid2023neurips-distribution,
  title     = {{Distribution Learnability and Robustness}},
  author    = {Ben-David, Shai and Bie, Alex and Kamath, Gautam and Lechner, Tosca},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/bendavid2023neurips-distribution/}
}