Improved and Oracle-Efficient Online $\ell_1$-Multicalibration

Abstract

We study online multicalibration, a framework for ensuring calibrated predictions across multiple groups in adversarial settings over $T$ rounds. Although online calibration is typically studied in the $\ell_1$ norm, prior approaches to online multicalibration have taken the indirect approach of obtaining rates in other norms (such as $\ell_2$ and $\ell_{\infty}$) and then transferring these guarantees to $\ell_1$ at additional loss. In contrast, we propose a direct method that achieves improved and oracle-efficient rates of $\widetilde{\mathcal{O}}(T^{-1/3})$ and $\widetilde{\mathcal{O}}(T^{-1/4})$, respectively, for online $\ell_1$-multicalibration. Our key insight is a novel reduction of online $\ell_1$-multicalibration to an online learning problem with product-based rewards, which we refer to as online linear-product optimization ($\mathtt{OLPO}$). To obtain the improved rate of $\widetilde{\mathcal{O}}(T^{-1/3})$, we introduce a linearization of $\mathtt{OLPO}$ and design a no-regret algorithm for this linearized problem. Although this method guarantees the desired sublinear rate (nearly matching the best rate for online calibration), it is computationally expensive when the group family $\mathcal{H}$ is large or infinite, since it enumerates all possible groups. To address scalability, we propose a second approach to $\mathtt{OLPO}$ that makes only a polynomial number of calls to an offline optimization (multicalibration evaluation) oracle, resulting in oracle-efficient online $\ell_1$-multicalibration with a corresponding rate of $\widetilde{\mathcal{O}}(T^{-1/4})$. Our framework also extends to certain infinite families of groups (e.g., all linear functions on the context space) by exploiting a $1$-Lipschitz property of the $\ell_1$-multicalibration error with respect to $\mathcal{H}$.
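For context, the $\ell_1$-multicalibration error referenced above is typically defined as follows in the multicalibration literature (a sketch using standard notation; the paper's exact discretization and normalization may differ):

```latex
% Sketch: \ell_1-multicalibration error of predictions p_1,\dots,p_T,
% each on a grid \{0, 1/m, \dots, 1\}, with respect to a group family
% \mathcal{H} of functions h : \mathcal{X} \to [0,1], contexts x_t,
% and outcomes y_t \in [0,1].
\mathrm{MCE}_{\ell_1}(\mathcal{H})
  \;=\; \max_{h \in \mathcal{H}}
  \sum_{v \in \{0, \frac{1}{m}, \dots, 1\}}
  \left|
    \frac{1}{T} \sum_{t=1}^{T} h(x_t)\,(y_t - v)\,\mathbf{1}[p_t = v]
  \right|
```

Informally, for every group $h \in \mathcal{H}$ and every prediction level $v$, the predictions should be nearly unbiased on the rounds where $v$ was predicted, weighted by group membership $h(x_t)$; the $\ell_1$ norm aggregates the bias over levels before taking the worst case over groups.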

Cite

Text

Ghuge et al. "Improved and Oracle-Efficient Online $\ell_1$-Multicalibration." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Ghuge et al. "Improved and Oracle-Efficient Online $\ell_1$-Multicalibration." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/ghuge2025icml-improved/)

BibTeX

@inproceedings{ghuge2025icml-improved,
  title     = {{Improved and Oracle-Efficient Online $\ell_1$-Multicalibration}},
  author    = {Ghuge, Rohan and Muthukumar, Vidya and Singla, Sahil},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {19437--19457},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/ghuge2025icml-improved/}
}