A Survey of Learning Criteria Going Beyond the Usual Risk (Abstract Reprint)

Abstract

Virtually all machine learning tasks are characterized using some form of loss function, and "good performance" is typically stated in terms of a sufficiently small average loss, taken over the random draw of test data. While optimizing for performance on average is intuitive, convenient to analyze in theory, and easy to implement in practice, such a choice brings about trade-offs. In this work, we survey and introduce a wide variety of non-traditional criteria used to design and evaluate machine learning algorithms, place the classical paradigm within the proper historical context, and propose a view of learning problems which emphasizes the question of "what makes for a desirable loss distribution?" in place of tacit use of the expected loss.
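To make the contrast concrete, here is a small sketch (not from the paper) comparing the classical expected loss with one well-known non-traditional criterion, conditional value-at-risk (CVaR), which summarizes only the upper tail of the loss distribution. The loss values and the CVaR level are illustrative assumptions.

```python
# Illustrative sketch: two different summaries of the same loss distribution.
# The expected loss weighs every draw equally; a tail-focused criterion such
# as CVaR (conditional value-at-risk) averages only the worst losses, so a
# rare catastrophic loss dominates it.

def expected_loss(losses):
    """Classical criterion: the plain average of the losses."""
    return sum(losses) / len(losses)

def cvar(losses, alpha=0.9):
    """Average of the worst (1 - alpha) fraction of the losses."""
    k = max(1, round((1 - alpha) * len(losses)))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / len(worst)

# Nine small losses and one rare large loss (hypothetical data).
losses = [0.1, 0.2, 0.1, 0.3, 0.2, 5.0, 0.1, 0.2, 0.3, 0.1]
print(expected_loss(losses))       # modest average, hides the outlier
print(cvar(losses, alpha=0.9))     # driven entirely by the worst loss
```

Two learners with the same expected loss can thus look very different under a tail-sensitive criterion, which is the kind of trade-off the survey examines.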

Cite

Text

Holland and Tanabe. "A Survey of Learning Criteria Going Beyond the Usual Risk (Abstract Reprint)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I20.30598

Markdown

[Holland and Tanabe. "A Survey of Learning Criteria Going Beyond the Usual Risk (Abstract Reprint)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/holland2024aaai-survey/) doi:10.1609/AAAI.V38I20.30598

BibTeX

@inproceedings{holland2024aaai-survey,
  title     = {{A Survey of Learning Criteria Going Beyond the Usual Risk (Abstract Reprint)}},
  author    = {Holland, Matthew J. and Tanabe, Kazuki},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {22698},
  doi       = {10.1609/AAAI.V38I20.30598},
  url       = {https://mlanthology.org/aaai/2024/holland2024aaai-survey/}
}