Optimal Robustness-Consistency Tradeoffs for Learning-Augmented Metrical Task Systems
Abstract
We examine the problem of designing learning-augmented algorithms for metrical task systems (MTS) that exploit machine-learned advice while maintaining rigorous, worst-case guarantees on performance. We propose an algorithm, DART, that achieves this dual objective, providing cost within a multiplicative factor $(1+\epsilon)$ of the machine-learned advice (i.e., consistency) while ensuring cost within a multiplicative factor $2^{O(1/\epsilon)}$ of a baseline robust algorithm (i.e., robustness) for any $\epsilon > 0$. We show that this exponential tradeoff between consistency and robustness is unavoidable in general, but that in important subclasses of MTS, such as when the metric space has bounded diameter and in the $k$-server problem, our algorithm achieves improved, polynomial tradeoffs between consistency and robustness.
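The consistency-robustness tradeoff described above can be illustrated with a generic switching scheme: follow the machine-learned advice while its cumulative cost stays within a $(1+\epsilon)$ factor of a robust baseline, and fall back to the baseline otherwise. The sketch below is a hypothetical illustration of this general idea, not the paper's DART algorithm; the function name, inputs, and switching rule are assumptions for exposition.

```python
def combine(advice_costs, robust_costs, eps):
    """Hypothetical switching combiner (illustration only, not DART).

    Follows the advice while its cumulative cost is at most (1 + eps)
    times the robust baseline's cumulative cost; once the advice
    exceeds that threshold, falls back to the robust algorithm for
    all remaining steps. Returns the total cost incurred.
    """
    total = 0.0
    adv_sum = 0.0   # cumulative cost of the advice
    rob_sum = 0.0   # cumulative cost of the robust baseline
    following_advice = True
    for a, r in zip(advice_costs, robust_costs):
        adv_sum += a
        rob_sum += r
        # Switch permanently once the advice looks too expensive.
        if following_advice and adv_sum > (1 + eps) * rob_sum:
            following_advice = False
        total += a if following_advice else r
    return total
```

For example, with good advice (`combine([1, 1, 1], [2, 2, 2], 0.1)`) the combiner follows the advice throughout and pays 3; with bad advice (`combine([10, 10], [1, 1], 0.1)`) it detects the blow-up at the first step and falls back. In MTS, switching between algorithms also incurs movement cost in the metric space, which is why obtaining tight tradeoffs is nontrivial.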
Cite
Text
Christianson et al. "Optimal Robustness-Consistency Tradeoffs for Learning-Augmented Metrical Task Systems." Artificial Intelligence and Statistics, 2023.
Markdown
[Christianson et al. "Optimal Robustness-Consistency Tradeoffs for Learning-Augmented Metrical Task Systems." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/christianson2023aistats-optimal/)
BibTeX
@inproceedings{christianson2023aistats-optimal,
title = {{Optimal Robustness-Consistency Tradeoffs for Learning-Augmented Metrical Task Systems}},
author = {Christianson, Nicolas and Shen, Junxuan and Wierman, Adam},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {9377--9399},
volume = {206},
url = {https://mlanthology.org/aistats/2023/christianson2023aistats-optimal/}
}