The Statistical Scope of Multicalibration

Abstract

We make a connection between multicalibration and property elicitation and show that (under mild technical conditions) it is possible to produce a multicalibrated predictor for a continuous scalar property $\Gamma$ if and only if $\Gamma$ is elicitable. On the negative side, we show that for non-elicitable continuous properties there exist simple data distributions on which even the true distributional predictor is not calibrated. On the positive side, for elicitable $\Gamma$, we give simple canonical algorithms for the batch and online adversarial settings that learn a $\Gamma$-multicalibrated predictor. This generalizes past work on multicalibrated means and quantiles, and in fact strengthens existing online quantile multicalibration results. To further counterbalance our negative result, we show that if a property $\Gamma^1$ is not elicitable by itself, but is elicitable conditionally on another elicitable property $\Gamma^0$, then there is a canonical algorithm that jointly multicalibrates $\Gamma^1$ and $\Gamma^0$; this generalizes past work on mean-moment multicalibration. Finally, as applications of our theory, we provide novel algorithmic and impossibility results for fair (multicalibrated) risk assessment.
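The abstract's central notion, elicitability, means that a property $\Gamma$ of a distribution is the unique minimizer of the expectation of some loss function. A minimal numerical sketch (not taken from the paper; the grid search and sample values are illustrative assumptions) shows two classic examples: the mean is elicited by squared loss, and quantiles by the pinball loss.

```python
# Illustration of elicitability: a property Gamma is elicitable if it uniquely
# minimizes the expected value of some loss. The mean (squared loss) and
# quantiles (pinball loss) are the standard examples; here we check this
# empirically by brute-force minimization over a grid.

def empirical_minimizer(samples, loss, grid):
    """Return the grid point minimizing average loss over the samples."""
    return min(grid, key=lambda g: sum(loss(g, y) for y in samples) / len(samples))

samples = [0.0, 1.0, 1.0, 2.0, 6.0]          # toy data; mean = 2.0, median = 1.0
grid = [i / 100 for i in range(0, 601)]       # candidate predictions in [0, 6]

# Squared loss elicits the mean.
mean_hat = empirical_minimizer(samples, lambda g, y: (g - y) ** 2, grid)

# Pinball loss at level tau elicits the tau-quantile (tau = 0.5 gives the median).
tau = 0.5
pinball = lambda g, y: (tau - (y < g)) * (y - g)
median_hat = empirical_minimizer(samples, pinball, grid)

print(mean_hat, median_hat)  # mean_hat = 2.0, median_hat = 1.0
```

Non-elicitable properties, such as the variance on its own, admit no such loss; this is what the paper's negative result for multicalibration hinges on, and its conditional-elicitation result recovers the variance jointly with the (elicitable) mean.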

Cite

Text

Noarov and Roth. "The Statistical Scope of Multicalibration." International Conference on Machine Learning, 2023.

Markdown

[Noarov and Roth. "The Statistical Scope of Multicalibration." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/noarov2023icml-statistical/)

BibTeX

@inproceedings{noarov2023icml-statistical,
  title     = {{The Statistical Scope of Multicalibration}},
  author    = {Noarov, Georgy and Roth, Aaron},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {26283--26310},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/noarov2023icml-statistical/}
}