Mixing Predictions for Online Metric Algorithms

Abstract

A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark whose number of switches between different predictors is slightly constrained, we obtain a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
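To illustrate the general idea of dynamically following different predictors while limiting switches, here is a minimal, hedged sketch in Python. It is not the paper's algorithm (it ignores the metric movement cost between predictors' states); it just shows a classic experts-style rule: maintain multiplicative weights over the $\ell$ predictors and switch lazily, only when another predictor's weight dominates the current one's by a fixed factor, which bounds the number of switches.

```python
import math

def combine_predictors(costs, eta=0.5, switch_factor=2.0):
    """Follow one of ell predictors per round, switching lazily.

    costs[t][i] is the cost predictor i incurs in round t (assumed bounded).
    Hedge-style weights track each predictor's cumulative performance; we
    move to the currently best predictor only when its weight exceeds the
    followed one's by `switch_factor`. This is an illustrative sketch, not
    the algorithm of the paper.
    """
    ell = len(costs[0])
    weights = [1.0] * ell
    current = 0
    choices = []
    for round_costs in costs:
        best = max(range(ell), key=lambda i: weights[i])
        if weights[best] > switch_factor * weights[current]:
            current = best  # lazy switch keeps the number of switches small
        choices.append(current)
        # multiplicative-weights update on the observed per-round costs
        weights = [w * math.exp(-eta * c) for w, c in zip(weights, round_costs)]
    return choices
```

For example, if predictor 1 consistently incurs zero cost while predictor 0 incurs cost 1, the rule follows predictor 0 for a couple of rounds and then switches permanently to predictor 1.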

Cite

Text

Antoniadis et al. "Mixing Predictions for Online Metric Algorithms." International Conference on Machine Learning, 2023.

Markdown

[Antoniadis et al. "Mixing Predictions for Online Metric Algorithms." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/antoniadis2023icml-mixing/)

BibTeX

@inproceedings{antoniadis2023icml-mixing,
  title     = {{Mixing Predictions for Online Metric Algorithms}},
  author    = {Antoniadis, Antonios and Coester, Christian and Elias, Marek and Polak, Adam and Simon, Bertrand},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {969--983},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/antoniadis2023icml-mixing/}
}