Learning with Risk-Averse Feedback Under Potentially Heavy Tails

Abstract

We study learning algorithms that seek to minimize the conditional value-at-risk (CVaR) when all the learner knows is that the losses (and gradients) incurred may be heavy-tailed. We begin by studying a general-purpose estimator of CVaR for potentially heavy-tailed random variables, which is easy to implement in practice and requires nothing more than finite variance and a distribution function that does not change too quickly or too slowly around the quantile of interest. With this estimator in hand, we then derive a new learning algorithm that robustly chooses among candidates produced by stochastic gradient-driven sub-processes, obtain excess CVaR bounds, and finally complement the theory with a regression application.
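For intuition, estimators of this kind build on the Rockafellar-Uryasev representation CVaR_α(X) = min_v { v + E[(X − v)_+] / (1 − α) }: plug an empirical quantile in for v, then estimate the tail expectation with a robust mean sub-routine so a handful of extreme draws cannot dominate. The Python sketch below illustrates that recipe under our own assumptions; it uses a median-of-means step as the robust mean and is not the paper's exact construction (all function names and parameters here are illustrative).

```python
import numpy as np

def median_of_means(x, k=5, rng=None):
    """Robust mean estimate: shuffle x, split into k blocks, average each, take the median."""
    rng = np.random.default_rng(rng)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, k)
    return float(np.median([b.mean() for b in blocks]))

def cvar_estimate(losses, alpha=0.95, k=5, rng=None):
    """Estimate CVaR_alpha via q_alpha + E[(X - q_alpha)_+] / (1 - alpha).

    The quantile is taken empirically; the tail expectation is estimated
    with a median-of-means sub-routine (a stand-in for any robust mean
    estimator one prefers).
    """
    losses = np.asarray(losses, dtype=float)
    q = np.quantile(losses, alpha)            # empirical alpha-quantile (VaR)
    excess = np.maximum(losses - q, 0.0)      # losses beyond the quantile
    return q + median_of_means(excess, k=k, rng=rng) / (1.0 - alpha)

# Usage: heavy-tailed losses (log-normal), CVaR at the 95% level.
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)
print(cvar_estimate(losses, alpha=0.95, rng=1))
```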

Cite

Text

Matthew Holland and El Mehdi Haress. "Learning with Risk-Averse Feedback Under Potentially Heavy Tails." Artificial Intelligence and Statistics, 2021.

Markdown

[Matthew Holland and El Mehdi Haress. "Learning with Risk-Averse Feedback Under Potentially Heavy Tails." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/holland2021aistats-learning/)

BibTeX

@inproceedings{holland2021aistats-learning,
  title     = {{Learning with Risk-Averse Feedback Under Potentially Heavy Tails}},
  author    = {Holland, Matthew and Mehdi Haress, El},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {892--900},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/holland2021aistats-learning/}
}