Spectral Risk-Based Learning Using Unbounded Losses

Abstract

In this work, we consider the setting of learning problems under a wide class of spectral risk (or "L-risk") functions, where a Lipschitz-continuous spectral density is used to flexibly assign weight to extreme loss values. We obtain excess risk guarantees for a derivative-free learning procedure under unbounded heavy-tailed loss distributions, and propose a computationally efficient implementation which empirically outperforms traditional risk minimizers in terms of balancing spectral risk and misclassification error.
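For context, the following is a standard textbook definition of the spectral risk (L-risk) mentioned above, given as a short LaTeX sketch; the notation (loss variable L, distribution function F, spectral density φ) is ours for illustration and is not taken from the paper itself.

% Spectral risk (L-risk): a weighted average of the quantiles of the loss.
% L denotes the loss random variable with distribution function F, and
% \phi is a spectral density on [0,1], typically taken to be non-negative,
% non-decreasing, and integrating to one.
\[
  R_{\phi}(L) \;=\; \int_{0}^{1} F^{-1}(u)\,\phi(u)\,\mathrm{d}u,
  \qquad
  \int_{0}^{1} \phi(u)\,\mathrm{d}u = 1 .
\]
% Densities \phi that concentrate mass near u = 1 place more weight on the
% extreme (largest) loss values, while the constant density \phi \equiv 1
% recovers the usual expected loss; the paper restricts attention to
% Lipschitz-continuous spectral densities.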

Cite

Text

Matthew J. Holland and El Mehdi Haress. "Spectral Risk-Based Learning Using Unbounded Losses." Artificial Intelligence and Statistics, 2022.

Markdown

[Matthew J. Holland and El Mehdi Haress. "Spectral Risk-Based Learning Using Unbounded Losses." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/holland2022aistats-spectral/)

BibTeX

@inproceedings{holland2022aistats-spectral,
  title     = {{Spectral Risk-Based Learning Using Unbounded Losses}},
  author    = {Holland, Matthew J. and Mehdi Haress, El},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {1871--1886},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/holland2022aistats-spectral/}
}