Optimal Robustness-Consistency Trade-Offs for Learning-Augmented Online Algorithms
Abstract
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions. The goal is to design algorithms that are both consistent and robust, meaning that the algorithm performs well when predictions are accurate and maintains worst-case guarantees. Such algorithms have been studied in a recent line of works due to Lykouris and Vassilvitskii (ICML '18) and Purohit et al. (NeurIPS '18). They provide robustness-consistency trade-offs for a variety of online problems. However, they leave open the question of whether these trade-offs are tight, i.e., to what extent such trade-offs are necessary. In this paper, we provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions. We focus on the classic problems of ski-rental and non-clairvoyant scheduling and provide optimal trade-offs in various settings.
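As a concrete instance of the trade-offs studied here, the deterministic learning-augmented ski-rental algorithm of Purohit et al. (NeurIPS '18) takes a prediction y of the number of ski days and a parameter λ ∈ (0, 1], and achieves consistency 1 + λ and robustness 1 + 1/λ. The sketch below is illustrative (function names and the cost simulation are assumptions, not from the paper):

```python
import math

def buy_day_with_prediction(b, y, lam):
    """Day (1-indexed) on which to buy skis.

    b:   cost to buy (renting costs 1 per day)
    y:   predicted number of ski days
    lam: trade-off parameter in (0, 1]
    """
    if y >= b:
        # Prediction says buying is worthwhile: trust it and buy early.
        return math.ceil(lam * b)
    # Prediction says renting is better: hedge by delaying the purchase.
    return math.ceil(b / lam)

def total_cost(b, x, buy_day):
    """Cost incurred over x actual ski days: rent until buy_day, then buy."""
    if x < buy_day:
        return x                  # season ends before we ever buy
    return (buy_day - 1) + b      # rent for buy_day - 1 days, then buy

# Example: b = 10, accurate prediction y = 20, lam = 0.5.
# We buy on day ceil(0.5 * 10) = 5; over x = 20 days the cost is 4 + 10 = 14,
# versus OPT = min(x, b) = 10, a ratio of 1.4 <= 1 + lam = 1.5.
```

Shrinking λ toward 0 tightens consistency (better performance under accurate predictions) at the expense of robustness, which is exactly the trade-off whose necessity this paper establishes via lower bounds.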
Cite
Text
Wei and Zhang. "Optimal Robustness-Consistency Trade-Offs for Learning-Augmented Online Algorithms." Neural Information Processing Systems, 2020.
Markdown
[Wei and Zhang. "Optimal Robustness-Consistency Trade-Offs for Learning-Augmented Online Algorithms." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/wei2020neurips-optimal/)
BibTeX
@inproceedings{wei2020neurips-optimal,
title = {{Optimal Robustness-Consistency Trade-Offs for Learning-Augmented Online Algorithms}},
author = {Wei, Alexander and Zhang, Fred},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/wei2020neurips-optimal/}
}