Minimax Statistical Learning with Wasserstein Distances

Abstract

As opposed to standard empirical risk minimization (ERM), distributionally robust optimization aims to minimize the worst-case risk over a larger ambiguity set containing the original empirical distribution of the training data. In this work, we describe a minimax framework for statistical learning with ambiguity sets given by balls in Wasserstein space. In particular, we prove generalization bounds that involve the covering number properties of the original ERM problem. As an illustrative example, we provide generalization guarantees for transport-based domain adaptation problems where the Wasserstein distance between the source and target domain distributions can be reliably estimated from unlabeled samples.
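The worst-case risk described above can be sketched numerically. The minimax objective is min over the hypothesis of the supremum of the risk over a Wasserstein ball of radius ρ around the empirical distribution; by the strong duality underlying this line of work, that supremum equals an infimum over a multiplier λ of λρ plus the expectation of a robust surrogate loss. The sketch below is a toy illustration only, not the paper's algorithm: it uses a 1-D squared loss with squared transport cost, for which the inner supremum has the closed form λ/(λ−1)·(θ−x)² when λ > 1 (a choice made here for tractability; the data, radius, and grids are hypothetical).

```python
import numpy as np

# Toy 1-D dataset standing in for the empirical distribution P_n.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 0.5, size=200)

rho = 0.1  # hypothetical Wasserstein ambiguity-ball budget

def erm_risk(theta):
    """Standard empirical risk: mean squared loss on the training sample."""
    return np.mean((theta - x) ** 2)

def robust_risk(theta, lambdas=np.linspace(1.01, 20.0, 500)):
    """Dual upper bound on the worst-case risk over the Wasserstein ball.

    sup_{Q: W(Q,P_n)<=rho} E_Q[loss] = inf_{lam} lam*rho + E_{P_n}[sup_z (loss(z) - lam*cost(z,x))].
    For squared loss with squared cost, the inner sup is lam/(lam-1)*(theta-x)^2 for lam > 1,
    so the dual reduces to a 1-D minimization over lam, done here by grid search.
    """
    vals = [lam * rho + (lam / (lam - 1)) * erm_risk(theta) for lam in lambdas]
    return min(vals)

# Compare the ERM and distributionally robust objectives over a parameter grid.
theta_grid = np.linspace(-1.0, 3.0, 401)
theta_erm = theta_grid[np.argmin([erm_risk(t) for t in theta_grid])]
theta_dro = theta_grid[np.argmin([robust_risk(t) for t in theta_grid])]
```

Since the empirical distribution itself lies in the ambiguity ball, the robust risk always upper-bounds the empirical risk at every θ; in this particular toy the two objectives share a minimizer (the surrogate is a monotone function of the ERM risk), which is an artifact of the squared-loss example rather than a general phenomenon.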

Cite

Text

Lee and Raginsky. "Minimax Statistical Learning with Wasserstein Distances." Neural Information Processing Systems, 2018.

Markdown

[Lee and Raginsky. "Minimax Statistical Learning with Wasserstein Distances." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/lee2018neurips-minimax/)

BibTeX

@inproceedings{lee2018neurips-minimax,
  title     = {{Minimax Statistical Learning with Wasserstein Distances}},
  author    = {Lee, Jaeho and Raginsky, Maxim},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {2687--2696},
  url       = {https://mlanthology.org/neurips/2018/lee2018neurips-minimax/}
}