Augmenting Online Algorithms with $\varepsilon$-Accurate Predictions
Abstract
The growing body of work in learning-augmented online algorithms studies how online algorithms can be improved when given access to ML predictions about the future. Motivated by ML models that give a confidence parameter for their predictions, we study online algorithms with predictions that are $\varepsilon$-accurate: namely, each prediction is correct with probability (at least) $\varepsilon$, but can be arbitrarily inaccurate with the remaining probability. We show that even with predictions that are accurate with a small probability and arbitrarily inaccurate otherwise, we can dramatically outperform worst-case bounds for a range of classical online problems including caching, online set cover, and online facility location. Our main results are an $O(\log(1/\varepsilon))$-competitive algorithm for caching, and a simple $O(1/\varepsilon)$-competitive algorithm for a large family of covering problems, including set cover and facility location, with $\varepsilon$-accurate predictions.
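To make the prediction model concrete, here is a minimal Python sketch (not from the paper) of an $\varepsilon$-accurate prediction oracle: with probability at least $\varepsilon$ it returns the true value, and otherwise it may return an arbitrary, possibly adversarial value. The function name `epsilon_accurate_prediction` and the adversarial-value argument are illustrative assumptions, not notation from the paper.

```python
import random


def epsilon_accurate_prediction(true_value, adversarial_value, epsilon, rng=random):
    """Illustrative epsilon-accurate predictor: correct with probability
    at least epsilon, otherwise arbitrarily inaccurate (modeled here as
    an adversarially chosen value)."""
    if rng.random() < epsilon:
        return true_value        # prediction is correct
    return adversarial_value     # prediction can be arbitrarily wrong


if __name__ == "__main__":
    # Even a predictor that is right only 10% of the time fits the model
    # under which the paper proves O(log(1/epsilon))-competitive caching.
    eps = 0.1
    samples = [
        epsilon_accurate_prediction(true_value=42, adversarial_value=-1, epsilon=eps)
        for _ in range(10)
    ]
    print(samples)
```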
Cite
Text
Gupta et al. "Augmenting Online Algorithms with $\varepsilon$-Accurate Predictions." Neural Information Processing Systems, 2022.
Markdown
[Gupta et al. "Augmenting Online Algorithms with $\varepsilon$-Accurate Predictions." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/gupta2022neurips-augmenting/)
BibTeX
@inproceedings{gupta2022neurips-augmenting,
title = {{Augmenting Online Algorithms with $\varepsilon$-Accurate Predictions}},
author = {Gupta, Anupam and Panigrahi, Debmalya and Subercaseaux, Bernardo and Sun, Kevin},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/gupta2022neurips-augmenting/}
}