Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors
Abstract
Combining algorithms is one of the key techniques in learning-augmented algorithms. We consider the following problem: We are given $\ell$ heuristics for Metrical Task Systems (MTS), where each might be tailored to a different type of input instance. While processing an input instance received online, we are allowed to query the action of only one of the heuristics at each time step. Our goal is to achieve performance comparable to the best of the given heuristics. The main difficulty of our setting comes from the fact that the cost paid by a heuristic at time $t$ cannot be estimated unless the same heuristic was also queried at time $t-1$. This is related to bandit learning against memory-bounded adversaries (Arora et al., 2012). We show how to achieve regret of $O(\text{OPT}^{2/3})$ and prove a tight lower bound based on the construction of Dekel et al. (2013).
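The observability constraint above (a heuristic's cost at step $t$ is known only if it was also queried at step $t-1$) is commonly handled with a blocking trick: follow a single heuristic for a block of consecutive steps, so all of its costs within the block become observable, and update an experts-style distribution once per block. The sketch below is purely illustrative and is not the paper's algorithm; the function name, the cost matrix, and the multiplicative-weights update are assumptions made for the example.

```python
import math
import random


def blocked_mwu(costs, block_len, eta, seed=0):
    """Illustrative blocked multiplicative-weights scheme (hypothetical).

    costs[t][i] is the cost heuristic i would pay at step t.  The learner
    only ever reads the costs of the heuristic it is currently following,
    which is legitimate in the bandit-access model: within a block the
    same heuristic is queried at consecutive steps.
    """
    rng = random.Random(seed)
    T = len(costs)
    ell = len(costs[0])
    weights = [1.0] * ell
    total_cost = 0.0

    for start in range(0, T, block_len):
        z = sum(weights)
        probs = [w / z for w in weights]
        # Commit to one heuristic for the whole block.
        i = rng.choices(range(ell), weights=probs)[0]
        block_cost = sum(costs[t][i] for t in range(start, min(start + block_len, T)))
        total_cost += block_cost
        # Importance-weighted loss estimate for the followed heuristic only.
        estimate = block_cost / probs[i]
        weights[i] *= math.exp(-eta * estimate)

    return total_cost
```

With roughly $T^{1/3}$-length blocks, this style of scheme trades off the per-block switching overhead against the estimation variance, which is the standard route to $T^{2/3}$-type regret in settings with memory, matching the Dekel et al. (2013) lower bound referenced in the abstract.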
Cite
Text
Cosa and Elias. "Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Cosa and Elias. "Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/cosa2025icml-learningaugmented/)

BibTeX
@inproceedings{cosa2025icml-learningaugmented,
title = {{Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors}},
author = {Cosa, Matei Gabriel and Elias, Marek},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {11357--11378},
volume = {267},
url = {https://mlanthology.org/icml/2025/cosa2025icml-learningaugmented/}
}