Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems

Abstract

This paper introduces a family of learning-augmented algorithms for online knapsack problems that achieve near Pareto-optimal consistency-robustness trade-offs through a simple combination of trusted (prediction-following) and worst-case algorithms. Our approach relies on succinct, practical predictions: single values or intervals estimating the minimum value of any item included in an optimal offline solution. Additionally, we propose a novel fractional-to-integral conversion procedure, offering new insights for online algorithm design.
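To make the high-level recipe concrete, the following is a minimal, illustrative Python sketch of splitting capacity between a prediction-following rule and a worst-case rule for the fractional online knapsack. It is not the paper's algorithm: the worst-case component here is the classical threshold rule for value densities in a known range [L, U], and the names v_hat (the predicted minimum value density of an item in an optimal offline solution), lam (a trust parameter trading consistency for robustness), L, and U are assumptions made purely for illustration.

# Illustrative sketch only -- not the algorithm from the paper.
# Assumptions: value densities lie in a known range [L, U], capacity is
# normalized to 1, fractional acceptance is allowed, and the worst-case
# component is the classical threshold rule for online fractional knapsack.
import math


def robust_threshold(z, L, U):
    """Worst-case admission threshold at utilization z in [0, 1]."""
    return (U * math.e / L) ** z * (L / math.e)


def max_robust_utilization(density, L, U):
    """Largest utilization z* at which the threshold still admits `density`."""
    if density < L / math.e:
        return 0.0
    return min(1.0, math.log(density * math.e / L) / math.log(U * math.e / L))


def combined_online_knapsack(items, v_hat, lam, L, U):
    """Process a stream of (value_density, size) items with unit capacity.

    A lam fraction of capacity is managed by a trusted rule that accepts any
    item with density >= v_hat (the prediction); the remaining 1 - lam is
    managed by the worst-case threshold rule. Returns the total value packed.
    """
    trusted_cap, robust_cap = lam, 1.0 - lam
    trusted_used = robust_used = 0.0
    total_value = 0.0

    for density, size in items:
        # Trusted portion: follow the prediction.
        if density >= v_hat and trusted_used < trusted_cap:
            take = min(size, trusted_cap - trusted_used)
            trusted_used += take
            total_value += density * take
            size -= take
        # Robust portion: accept only up to the utilization level at which
        # the worst-case threshold still admits this value density.
        if size > 1e-12 and robust_cap > 0.0 and robust_used < robust_cap:
            z_now = robust_used / robust_cap
            z_max = max_robust_utilization(density, L, U)
            take = min(size, robust_cap * max(0.0, z_max - z_now))
            robust_used += take
            total_value += density * take

    return total_value


if __name__ == "__main__":
    # Toy stream of (value density, size) pairs; knapsack capacity is 1.
    stream = [(2.0, 0.3), (8.0, 0.4), (5.0, 0.5), (9.0, 0.2)]
    print(combined_online_knapsack(stream, v_hat=5.0, lam=0.5, L=1.0, U=10.0))

Larger lam leans on the prediction (better consistency when v_hat is accurate), while smaller lam reserves more capacity for the threshold rule (better robustness); the paper's algorithms and guarantees for this trade-off differ from this toy split.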

Cite

Text

Daneshvaramoli et al. "Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Daneshvaramoli et al. "Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/daneshvaramoli2025icml-nearoptimal/)

BibTeX

@inproceedings{daneshvaramoli2025icml-nearoptimal,
  title     = {{Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems}},
  author    = {Daneshvaramoli, Mohammadreza and Karisani, Helia and Lechowicz, Adam and Sun, Bo and Musco, Cameron N and Hajiesmaili, Mohammad},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {12459--12489},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/daneshvaramoli2025icml-nearoptimal/}
}