Robustifying Learning-Augmented Caching Efficiently Without Compromising 1-Consistency
Abstract
The online caching problem aims to minimize cache misses when serving a sequence of page requests with a cache of limited size $k$. While naive learning-augmented caching algorithms achieve ideal $1$-consistency, they lack robustness guarantees. Existing robustification methods either sacrifice $1$-consistency or introduce excessive computational overhead. In this paper, we introduce Guard, a lightweight robustification framework that improves the robustness of a broad class of learning-augmented caching algorithms to $2H_{k-1} + 2$, where $H_n = \sum_{i=1}^{n} 1/i$ denotes the $n$-th harmonic number, while preserving their $1$-consistency. Guard achieves the current best-known trade-off between consistency and robustness, with only $\mathcal{O}(1)$ additional overhead per request, thereby maintaining the original time complexity of the base algorithm. Extensive experiments across multiple real-world datasets and prediction models validate the effectiveness of Guard in practice.
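In this setting, an algorithm is $\alpha$-consistent if its cost is at most $\alpha$ times optimal when predictions are perfect, and $\beta$-robust if its cost stays within a factor $\beta$ of optimal regardless of prediction quality. The sketch below is not the paper's Guard procedure; it is a generic switching-style robustification, included only to make the trade-off concrete: follow a prediction-based eviction rule (evict the page with the farthest predicted next request) while a shadow LRU runs on the same sequence, and fall back to LRU once the miss count crosses an illustrative threshold. The class names, the `gamma` parameter, and the switching rule are all assumptions of this sketch.

```python
from collections import OrderedDict

class LRUCache:
    """Classical LRU eviction: evict the least recently used page."""
    def __init__(self, k):
        self.k = k
        self.cache = OrderedDict()
        self.misses = 0

    def request(self, page):
        if page in self.cache:
            self.cache.move_to_end(page)          # hit: refresh recency
        else:
            self.misses += 1                      # miss
            if len(self.cache) >= self.k:
                self.cache.popitem(last=False)    # evict least recently used
            self.cache[page] = None

class SwitchingCache:
    """Illustrative robustification: follow a prediction-based eviction
    rule, but permanently fall back to LRU once the miss count exceeds
    gamma * (shadow LRU misses) + k. This threshold is an assumption of
    the sketch, not the rule used by Guard."""
    def __init__(self, k, next_request, gamma=2.0):
        self.k = k
        self.next_request = next_request  # next_request(page, t) -> predicted next time
        self.gamma = gamma
        self.cache = OrderedDict()        # recency order doubles as LRU state
        self.shadow = LRUCache(k)         # shadow baseline run on the same sequence
        self.misses = 0
        self.switched = False

    def request(self, page, t):
        self.shadow.request(page)
        if page in self.cache:
            self.cache.move_to_end(page)
            return
        self.misses += 1
        if not self.switched and self.misses > self.gamma * self.shadow.misses + self.k:
            self.switched = True          # predictions look bad: abandon them
        if len(self.cache) >= self.k:
            if self.switched:
                self.cache.popitem(last=False)   # LRU eviction
            else:                                # evict farthest predicted next request
                victim = max(self.cache, key=lambda p: self.next_request(p, t))
                del self.cache[victim]
        self.cache[page] = None

def oracle(seq):
    """Perfect predictor: the true next time each page is requested."""
    def next_request(page, t):
        for i in range(t + 1, len(seq)):
            if seq[i] == page:
                return i
        return float("inf")
    return next_request

seq = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3, 4, 5]
cache = SwitchingCache(k=3, next_request=oracle(seq))
for t, page in enumerate(seq):
    cache.request(page, t)
print("misses:", cache.misses)   # with perfect predictions this mimics Belady's rule
```

Because the shadow LRU needs only constant work per request, the switching test itself adds $\mathcal{O}(1)$ overhead; the farthest-prediction scan above is linear in $k$ and would be replaced by a priority queue in a serious implementation.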
Cite
Text
Chen et al. "Robustifying Learning-Augmented Caching Efficiently Without Compromising 1-Consistency." Advances in Neural Information Processing Systems, 2025.
Markdown
[Chen et al. "Robustifying Learning-Augmented Caching Efficiently Without Compromising 1-Consistency." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-robustifying/)
BibTeX
@inproceedings{chen2025neurips-robustifying,
  title     = {{Robustifying Learning-Augmented Caching Efficiently Without Compromising 1-Consistency}},
  author    = {Chen, Peng and Zhao, Hailiang and Zhang, Jiaji and Tang, Xueyan and Wang, Yixuan and Deng, Shuiguang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/chen2025neurips-robustifying/}
}