All Your Loss Are Belong to Bayes
Abstract
Loss functions are a cornerstone of machine learning and the starting point of most algorithms. Statistics and Bayesian decision theory have contributed, via properness, to eliciting over the past decades a wide set of admissible losses in supervised learning, to which most popular choices belong (logistic, square, Matsushita, etc.). Rather than making a potentially biased ad hoc choice of the loss, there has recently been a surge of efforts to fit the loss to the domain at hand while training the model itself. The key approaches fit a canonical link, a function that monotonically maps the closed unit interval to ℝ and can provide a proper loss via integration.
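As a concrete illustration of the link-to-loss construction the abstract alludes to, here is a minimal worked example, assuming the standard weight-function (integral) representation of proper losses from the properness literature; the notation is illustrative and not taken from the paper itself. Choosing the logit link recovers the familiar log loss:

```latex
% Sketch: recovering the log loss from the logit link via integration.
% Assumes the standard integral representation of proper losses;
% notation is illustrative, not the paper's own.
\[
  \psi(u) = \log\frac{u}{1-u}, \qquad \psi : [0,1] \to \mathbb{R}
  \quad \text{(monotone canonical link)},
\]
\[
  w(u) = \psi'(u) = \frac{1}{u(1-u)}
  \quad \text{(loss weight function)},
\]
\[
  \ell(1, u) = \int_u^1 (1-c)\, w(c)\, dc = -\log u, \qquad
  \ell(0, u) = \int_0^u c\, w(c)\, dc = -\log(1-u).
\]
```

Under this representation, any strictly monotone link on [0,1] plays the role of ψ, and integrating its derivative against the partial-loss weights yields a proper loss, which is what makes fitting the link during training a well-posed way to learn the loss.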
Cite
Text
Walder and Nock. "All Your Loss Are Belong to Bayes." Neural Information Processing Systems, 2020.
Markdown
[Walder and Nock. "All Your Loss Are Belong to Bayes." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/walder2020neurips-all/)
BibTeX
@inproceedings{walder2020neurips-all,
  title     = {{All Your Loss Are Belong to Bayes}},
  author    = {Walder, Christian and Nock, Richard},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/walder2020neurips-all/}
}