Robust Learning for Smoothed Online Convex Optimization with Feedback Delay

Abstract

We study a general form of Smoothed Online Convex Optimization (SOCO), including multi-step switching costs and feedback delay. We propose a novel machine learning (ML) augmented online algorithm, Robustness-Constrained Learning (RCL), which combines untrusted ML predictions with a trusted expert online algorithm via constrained projection to robustify the ML predictions. Specifically, we prove that RCL guarantees $(1+\lambda)$-competitiveness against any given expert for any $\lambda>0$, while also explicitly training the ML model in a robustification-aware manner to improve average-case performance. Importantly, RCL is the first ML-augmented algorithm with a provable robustness guarantee under multi-step switching costs and feedback delay. We demonstrate the improvement of RCL in both robustness and average performance using battery management as a case study.
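The constrained-projection idea in the abstract can be sketched as follows: at each step, the untrusted ML prediction is projected onto a feasible set anchored at the trusted expert's action, so the algorithm's trajectory can never drift arbitrarily far from the expert's. The sketch below is purely illustrative, and is an assumption, not the paper's actual construction: it uses a simple Euclidean ball of a given radius in place of the paper's robustness constraint set (which is derived from the $(1+\lambda)$-competitiveness condition with multi-step switching costs and delayed feedback).

```python
import numpy as np

def robustify(x_ml, x_expert, radius):
    """Project the untrusted ML action onto a ball of the given radius
    around the trusted expert's action (illustrative stand-in for the
    paper's robustness constraint set)."""
    delta = x_ml - x_expert
    norm = np.linalg.norm(delta)
    if norm <= radius:
        # The ML prediction already satisfies the constraint: keep it.
        return x_ml
    # Otherwise pull the ML action back to the boundary of the ball,
    # staying as close to the prediction as the constraint allows.
    return x_expert + radius * delta / norm

# Example: an ML action outside the radius is pulled toward the expert.
x = robustify(np.array([4.0, 0.0]), np.array([1.0, 0.0]), 2.0)
```

When the ML model is accurate, the projection is inactive and the algorithm simply follows the prediction; when it errs badly, the constraint caps the deviation from the expert, which is the source of the robustness guarantee. The paper additionally trains the ML model with this projection in the loop ("robustification-aware" training), so the model accounts for the constraint rather than being projected post hoc.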

Cite

Text

Li et al. "Robust Learning for Smoothed Online Convex Optimization with Feedback Delay." Neural Information Processing Systems, 2023.

Markdown

[Li et al. "Robust Learning for Smoothed Online Convex Optimization with Feedback Delay." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/li2023neurips-robust/)

BibTeX

@inproceedings{li2023neurips-robust,
  title     = {{Robust Learning for Smoothed Online Convex Optimization with Feedback Delay}},
  author    = {Li, Pengfei and Yang, Jianyi and Wierman, Adam and Ren, Shaolei},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/li2023neurips-robust/}
}