Uniform Convergence of Rank-Weighted Learning

Abstract

The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as in high-stakes decision-making and societal settings, it is clear that these models are not evaluated solely by their average performance. In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, are a unification of popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences on a logistic regression example.
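As a rough illustration (not the paper's code), the empirical L-risk can be computed as a weighted sum of the order statistics of the losses: sort the losses and apply rank-dependent weights. The sketch below, with a hypothetical `cvar_weights` discretization that puts uniform weight on the worst alpha-fraction of losses, recovers conditional value-at-risk as a special case; uniform weights recover the ordinary expected loss.

```python
import numpy as np

def empirical_l_risk(losses, weights):
    """Empirical L-risk: a rank-weighted average of losses.

    Sorts losses in increasing order and returns sum_i w_i * loss_(i),
    where loss_(i) is the i-th smallest loss. `weights` should be
    nonnegative and sum to 1 (a discretized rank-dependent weighting
    function; bounded variation in the paper's setting).
    """
    losses = np.sort(np.asarray(losses))
    return float(np.dot(weights, losses))

def cvar_weights(n, alpha):
    """Rank weights recovering CVaR at level alpha: equal weight on the
    top alpha-fraction of losses, zero elsewhere (a simple illustrative
    discretization; boundary handling is approximate)."""
    k = int(np.ceil(alpha * n))  # number of largest losses retained
    w = np.zeros(n)
    w[n - k:] = 1.0 / k          # uniform weight on the k worst losses
    return w

# Toy example with synthetic loss values (illustrative only).
rng = np.random.default_rng(0)
losses = rng.exponential(scale=1.0, size=1000)

mean_risk = empirical_l_risk(losses, np.full(1000, 1 / 1000))  # uniform weights = mean loss
cvar_risk = empirical_l_risk(losses, cvar_weights(1000, alpha=0.1))
print(f"mean risk: {mean_risk:.3f}, CVaR(0.1): {cvar_risk:.3f}")
```

Because the weights depend on ranks rather than on the loss values directly, changing the weighting function smoothly interpolates between risk-neutral (uniform weights) and tail-sensitive (CVaR-like) objectives.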

Cite

Text

Khim et al. "Uniform Convergence of Rank-Weighted Learning." International Conference on Machine Learning, 2020.

Markdown

[Khim et al. "Uniform Convergence of Rank-Weighted Learning." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/khim2020icml-uniform/)

BibTeX

@inproceedings{khim2020icml-uniform,
  title     = {{Uniform Convergence of Rank-Weighted Learning}},
  author    = {Khim, Justin and Leqi, Liu and Prasad, Adarsh and Ravikumar, Pradeep},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {5254--5263},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/khim2020icml-uniform/}
}