Multicalibration as Boosting for Regression
Abstract
We study the connection between multicalibration and boosting for squared error regression. First, we prove a useful characterization of multicalibration in terms of a "swap regret"-like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class $\mathcal{H}$, and that makes use only of a standard squared error regression oracle for $\mathcal{H}$. We give a weak learning assumption on $\mathcal{H}$ that ensures convergence to Bayes optimality without the need for any realizability assumptions, giving us an agnostic boosting algorithm for regression. We then show that our weak learning assumption on $\mathcal{H}$ is both necessary and sufficient for multicalibration with respect to $\mathcal{H}$ to imply Bayes optimality, answering an open question. We also show that if $\mathcal{H}$ satisfies our weak learning condition relative to another class $\mathcal{C}$, then multicalibration with respect to $\mathcal{H}$ implies multicalibration with respect to $\mathcal{C}$. Finally, we investigate the empirical performance of our algorithm.
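To illustrate the kind of procedure the abstract describes, here is a minimal sketch of a level-set boosting loop that only calls a standard squared error regression oracle for $\mathcal{H}$, and terminates when no level set of the current predictor can be improved. The bucketing of predictions, the decision-tree oracle, the tolerance tol, and the helper name level_set_boost are illustrative assumptions, not the paper's exact algorithm.

# Illustrative sketch only (assumes labels in [0, 1]); a decision tree stands
# in for the squared-error regression oracle over H.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def level_set_boost(X, y, n_buckets=10, tol=1e-4, max_rounds=100):
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    f = np.full(len(y), y.mean())  # start from the constant predictor
    for _ in range(max_rounds):
        improved = False
        # Discretize current predictions into level sets (buckets).
        buckets = np.clip((f * n_buckets).astype(int), 0, n_buckets - 1)
        for b in np.unique(buckets):
            idx = buckets == b
            if idx.sum() < 2:
                continue
            # Call the squared-error regression oracle restricted to this level set.
            h = DecisionTreeRegressor(max_depth=2).fit(X[idx], y[idx])
            old_err = np.mean((y[idx] - f[idx]) ** 2)
            new_err = np.mean((y[idx] - h.predict(X[idx])) ** 2)
            # Repredict on the level set only if the oracle finds a real improvement.
            if old_err - new_err > tol:
                f[idx] = h.predict(X[idx])
                improved = True
        if not improved:
            # No level set can be improved by H: on the sample, the predictor is
            # (approximately) multicalibrated with respect to H.
            break
    return f

The stopping condition mirrors the "swap regret" view: the loop halts exactly when, on every level set of the current predictor, no hypothesis from the oracle reduces squared error by more than the tolerance.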
Cite
Globus-Harris et al. "Multicalibration as Boosting for Regression." International Conference on Machine Learning, 2023.
@inproceedings{globusharris2023icml-multicalibration,
title = {{Multicalibration as Boosting for Regression}},
author = {Globus-Harris, Ira and Harrison, Declan and Kearns, Michael and Roth, Aaron and Sorrell, Jessica},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {11459--11492},
volume = {202},
url = {https://mlanthology.org/icml/2023/globusharris2023icml-multicalibration/}
}