On the Calibration of Conditional-Value-at-Risk
Abstract
To promote risk-averse behaviour in safety-critical AI applications, Conditional-Value-at-Risk (CVaR), a spectral risk measure, is increasingly employed as the loss aggregation function of choice. We study the calibration and refinement properties of CVaR by extending the classical proper scoring rule risk decomposition to CVaR. Our result suggests a trade-off: CVaR provides tail-sensitive calibration and refinement, but at the cost of calibration and refinement for non-tail events. This calls for a careful cost-benefit analysis when employing CVaR as the risk measure of choice for AI safety.
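For context, CVaR at level α is the expected loss in the worst (1 − α) tail of the loss distribution. A minimal empirical sketch (the function name, data, and quantile scheme are illustrative, not from the paper):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR: mean of the losses at or beyond the alpha-quantile (VaR).

    Hypothetical helper for illustration; the paper's theoretical setting
    uses the population CVaR, not this plug-in estimator.
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)   # Value-at-Risk at level alpha
    tail = losses[losses >= var]       # tail events: losses at or beyond VaR
    return tail.mean()

# Toy losses with two heavy tail events
losses = [0.1, 0.2, 0.3, 5.0, 10.0]
print(cvar(losses, alpha=0.6))  # mean of the worst-case tail {5.0, 10.0} -> 7.5
```

Aggregating training losses with CVaR instead of the mean focuses optimization on the worst-case tail, which is the risk-averse behaviour the abstract refers to.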
Cite
Text
Verma et al. "On the Calibration of Conditional-Value-at-Risk." ICML 2024 Workshops: NextGenAISafety, 2024.
Markdown
[Verma et al. "On the Calibration of Conditional-Value-at-Risk." ICML 2024 Workshops: NextGenAISafety, 2024.](https://mlanthology.org/icmlw/2024/verma2024icmlw-calibration/)
BibTeX
@inproceedings{verma2024icmlw-calibration,
  title     = {{On the Calibration of Conditional-Value-at-Risk}},
  author    = {Verma, Rajeev and Fischer, Volker and Nalisnick, Eric},
  booktitle = {ICML 2024 Workshops: NextGenAISafety},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/verma2024icmlw-calibration/}
}