Efficient Online Learning via Randomized Rounding

Abstract

Most online algorithms used in machine learning today are based on variants of mirror descent or follow-the-leader. In this paper, we present an online algorithm based on a completely different approach, which combines "random playout" and randomized rounding of loss subgradients. As an application of our approach, we provide the first computationally efficient online algorithm for collaborative filtering with trace-norm constrained matrices. As a second application, we solve an open question linking batch learning and transductive online learning.
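The abstract only names the core primitive, so as a rough illustration (not the paper's algorithm), the sketch below shows generic randomized rounding: a real value is rounded up or down at random with probabilities chosen so the result is unbiased in expectation. The function name and example values are illustrative assumptions, not taken from the paper.

import math
import random

def randomized_round(x):
    # Illustrative helper (not from the paper): round x down to floor(x)
    # or up to ceil(x) at random so that E[randomized_round(x)] = x.
    low = math.floor(x)
    frac = x - low
    return low + 1 if random.random() < frac else low

# Averaging many rounded copies of 2.3 concentrates near 2.3,
# reflecting the unbiasedness of the rounding step.
samples = [randomized_round(2.3) for _ in range(10000)]
print(sum(samples) / len(samples))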

Cite

Text

Cesa-Bianchi and Shamir. "Efficient Online Learning via Randomized Rounding." Neural Information Processing Systems, 2011.

Markdown

[Cesa-Bianchi and Shamir. "Efficient Online Learning via Randomized Rounding." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/cesabianchi2011neurips-efficient/)

BibTeX

@inproceedings{cesabianchi2011neurips-efficient,
  title     = {{Efficient Online Learning via Randomized Rounding}},
  author    = {Cesa-Bianchi, Nicolò and Shamir, Ohad},
  booktitle = {Neural Information Processing Systems},
  year      = {2011},
  pages     = {343--351},
  url       = {https://mlanthology.org/neurips/2011/cesabianchi2011neurips-efficient/}
}