Robust Estimation via Generative Adversarial Networks

Abstract

Robust estimation under Huber's $\epsilon$-contamination model has become an important topic in statistics and theoretical computer science. Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning. Similar to the derivation of f-GAN, we show that the depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance within the framework of f-Learning. This connection opens the door to computing robust estimators with tools developed for training GANs. In particular, we show that a JS-GAN whose discriminator is a neural network with at least one hidden layer achieves the minimax rate of robust mean estimation under Huber's $\epsilon$-contamination model. Interestingly, the hidden layers in the discriminator network are shown to be necessary for robust estimation.
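
To make the JS-GAN idea concrete, here is a minimal PyTorch sketch, not the authors' released implementation: the "generator" is just a location parameter theta whose samples are theta plus standard Gaussian noise, and the discriminator is a one-hidden-layer network trained with the standard JS (vanilla GAN) objective on the contaminated data. The function name `js_gan_mean_estimate`, the hidden width, learning rates, and iteration count are illustrative assumptions.

```python
import torch
import torch.nn as nn

def js_gan_mean_estimate(x, hidden=20, iters=2000, lr_g=0.02, lr_d=0.05):
    """Estimate the mean of contaminated data x (n x p tensor) with a JS-GAN.

    The generator is a location parameter theta: fake samples are
    theta + standard Gaussian noise. The discriminator has one hidden
    layer, the structure the paper shows suffices for the minimax rate.
    """
    n, p = x.shape
    theta = x.median(dim=0).values.clone().requires_grad_(True)  # robust init
    disc = nn.Sequential(nn.Linear(p, hidden), nn.Sigmoid(), nn.Linear(hidden, 1))
    opt_g = torch.optim.SGD([theta], lr=lr_g)
    opt_d = torch.optim.SGD(disc.parameters(), lr=lr_d)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(iters):
        fake = theta + torch.randn(n, p)
        # Discriminator step: real data labeled 1, generator samples labeled 0.
        opt_d.zero_grad()
        d_loss = (bce(disc(x), torch.ones(n, 1))
                  + bce(disc(fake.detach()), torch.zeros(n, 1)))
        d_loss.backward()
        opt_d.step()
        # Generator step: move theta so its samples look real to the discriminator.
        opt_g.zero_grad()
        g_loss = bce(disc(theta + torch.randn(n, p)), torch.ones(n, 1))
        g_loss.backward()
        opt_g.step()
    return theta.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    n, p, eps = 500, 5, 0.2
    clean = torch.randn(n, p)                    # N(0, I) inliers
    outliers = torch.randn(n, p) + 5.0           # contamination centered at 5
    mask = (torch.rand(n, 1) < eps).float()
    data = mask * outliers + (1 - mask) * clean  # Huber eps-contamination
    print("JS-GAN estimate:", js_gan_mean_estimate(data))
    print("sample mean    :", data.mean(dim=0))
```

In this toy run the sample mean is pulled toward the outliers by roughly eps times the contamination shift, while the JS-GAN estimate should stay near the true mean 0, illustrating the robustness the abstract describes.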

Cite

Text

Gao et al. "Robust Estimation via Generative Adversarial Networks." International Conference on Learning Representations, 2019.

Markdown

[Gao et al. "Robust Estimation via Generative Adversarial Networks." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/gao2019iclr-robust/)

BibTeX

@inproceedings{gao2019iclr-robust,
  title     = {{Robust Estimation via Generative Adversarial Networks}},
  author    = {Gao, Chao and Liu, Jiyi and Yao, Yuan and Zhu, Weizhi},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/gao2019iclr-robust/}
}