Projection-Free Online Learning over Strongly Convex Sets

Abstract

To efficiently solve online problems with complicated constraints, projection-free algorithms, including online Frank-Wolfe (OFW) and its variants, have recently received significant interest. However, in the general case, existing efficient projection-free algorithms only achieve a regret bound of O(T^{3/4}), which is worse than the regret of projection-based algorithms, where T is the number of decision rounds. In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW can enjoy a better regret bound of O(T^{2/3}) for general convex losses. The key idea is to refine the decaying step-size in the original OFW by a simple line search rule. Furthermore, for strongly convex losses, we propose a strongly convex variant of OFW by redefining the surrogate loss function in OFW. We show that it achieves a regret bound of O(T^{2/3}) over general convex sets and a better regret bound of O(T^{1/2}) over strongly convex sets.
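The following is a minimal Python sketch (not the authors' implementation) of the step-size refinement described above: classical OFW with a quadratic surrogate, where the fixed decaying step-size is replaced by an exact line search along the Frank-Wolfe direction. The Euclidean ball serves here as a concrete example of a strongly convex set, and the names `linear_opt_ball`, `ofw_line_search`, and the parameter `eta` are illustrative assumptions rather than notation from the paper.

```python
import numpy as np

def linear_opt_ball(g, radius=1.0):
    """Linear optimization oracle over the Euclidean ball (a strongly convex set):
    returns argmin_{||v|| <= radius} <g, v>."""
    norm = np.linalg.norm(g)
    return np.zeros_like(g) if norm == 0 else -radius * g / norm

def ofw_line_search(grad_fns, dim, T, eta, radius=1.0):
    """Sketch of online Frank-Wolfe with a line-search step size.

    grad_fns(t, x) returns the gradient of the round-t loss at x.
    The surrogate at round t is F_t(x) = eta * <sum of past gradients, x> + ||x - x_1||^2
    (the classical OFW construction); the step size is chosen by exact line search
    along the Frank-Wolfe direction instead of a fixed decaying schedule.
    """
    x1 = np.zeros(dim)
    x = x1.copy()
    grad_sum = np.zeros(dim)
    decisions = []
    for t in range(1, T + 1):
        decisions.append(x.copy())
        grad_sum += grad_fns(t, x)                # observe loss, accumulate gradients
        grad_F = eta * grad_sum + 2.0 * (x - x1)  # gradient of the quadratic surrogate at x
        v = linear_opt_ball(grad_F, radius)       # single call to the linear oracle
        d = v - x
        # F_t is quadratic along d, so the line-search step over s in [0, 1] is closed form.
        denom = 2.0 * np.dot(d, d)
        s = 0.0 if denom == 0 else np.clip(-np.dot(grad_F, d) / denom, 0.0, 1.0)
        x = x + s * d
    return decisions
```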

Cite

Text

Wan and Zhang. "Projection-Free Online Learning over Strongly Convex Sets." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I11.17209

Markdown

[Wan and Zhang. "Projection-Free Online Learning over Strongly Convex Sets." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/wan2021aaai-projection-a/) doi:10.1609/AAAI.V35I11.17209

BibTeX

@inproceedings{wan2021aaai-projection-a,
  title     = {{Projection-Free Online Learning over Strongly Convex Sets}},
  author    = {Wan, Yuanyu and Zhang, Lijun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {10076--10084},
  doi       = {10.1609/AAAI.V35I11.17209},
  url       = {https://mlanthology.org/aaai/2021/wan2021aaai-projection-a/}
}