Aggregating Algorithm and Axiomatic Loss Aggregation
Abstract
Supervised learning has moved beyond the empirical risk minimization framework. Central to most of these developments is the introduction of more general aggregation functions for the losses incurred by the learner. In this paper, we turn towards online learning under expert advice. Via easily justified assumptions, we characterize a set of reasonable loss aggregation functions as quasi-sums. Based upon this insight, we suggest how to tailor Vovk's Aggregating Algorithm to these more general aggregation functions. The "change of variables" we propose lets us highlight that "weighting profiles" determine the contribution of each expert to the next prediction according to their loss, and that the multiplicative structure of the weight updates in the Aggregating Algorithm translates into the additive structure of the loss aggregation in the regret bound. In addition, we suggest that the mixability of the loss function, which is functionally necessary for the Aggregating Algorithm, is intrinsically relative to the log loss, because the standard aggregation of losses in online learning is the sum. Finally, we argue, both conceptually and empirically, that our generalized loss aggregation functions express the learner's attitude towards losses.
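To make the two objects in the abstract concrete, here is a minimal Python sketch, not the paper's construction: `quasi_sum` implements a quasi-sum aggregation φ⁻¹(Σₜ φ(ℓₜ)), which reduces to the usual cumulative sum of losses when φ is the identity, and `aggregating_algorithm` runs the standard Aggregating Algorithm with multiplicative exponential-weight updates. The function names, the default learning rate, and the simple weight-averaged substitution step are illustrative assumptions; the paper's tailoring of the algorithm to quasi-sum aggregation may differ.

```python
import numpy as np

def quasi_sum(losses, phi, phi_inv):
    """Quasi-sum aggregation: A(l_1, ..., l_T) = phi^{-1}(sum_t phi(l_t)).

    With phi = identity this recovers the standard summed loss of
    online learning. (Illustrative helper, not from the paper.)
    """
    return phi_inv(sum(phi(l) for l in losses))

def aggregating_algorithm(expert_preds, outcomes, loss, eta=1.0):
    """Standard Aggregating Algorithm with summed losses (sketch).

    expert_preds: array of shape (T, n), predictions of n experts over T rounds.
    outcomes:     array of shape (T,), observed outcomes.
    loss:         loss(prediction, outcome) -> nonnegative float.
    Returns the learner's predictions. A plain weighted average stands in
    for the substitution function; for mixable losses a proper substitution
    function would be used instead.
    """
    T, n = expert_preds.shape
    w = np.full(n, 1.0 / n)                 # uniform prior over experts
    learner_preds = np.empty(T)
    for t in range(T):
        learner_preds[t] = w @ expert_preds[t]   # combine expert advice
        losses = np.array([loss(p, outcomes[t]) for p in expert_preds[t]])
        w *= np.exp(-eta * losses)               # multiplicative weight update
        w /= w.sum()                             # normalize
    return learner_preds

# Example: two constant experts under square loss on binary outcomes.
rng = np.random.default_rng(0)
preds = np.column_stack([np.full(50, 0.3), np.full(50, 0.8)])
y = rng.integers(0, 2, size=50).astype(float)
print(aggregating_algorithm(preds, y, lambda p, o: (p - o) ** 2)[:5])
```

The multiplicative update `w *= exp(-eta * loss)` is exactly what, after taking logarithms, corresponds to the additive (summed) loss aggregation appearing in the regret bound; the paper's generalization replaces that sum with a quasi-sum.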
Cite

Text
Pacheco et al. "Aggregating Algorithm and Axiomatic Loss Aggregation." Transactions on Machine Learning Research, 2025.

Markdown
[Pacheco et al. "Aggregating Algorithm and Axiomatic Loss Aggregation." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/pacheco2025tmlr-aggregating/)

BibTeX
@article{pacheco2025tmlr-aggregating,
title = {{Aggregating Algorithm and Axiomatic Loss Aggregation}},
author = {Pacheco, Armando J Cabrera and Derr, Rabanus and Williamson, Robert},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/pacheco2025tmlr-aggregating/}
}