Online Learning for Structured Loss Spaces

Abstract

We consider prediction with expert advice when the loss vectors are assumed to lie in a set described by the sum of atomic norm balls. We derive a regret bound for a general version of the online mirror descent (OMD) algorithm that uses a combination of regularizers, each adapted to the constituent atomic norms. The general result recovers standard OMD regret bounds, and yields regret bounds for new structured settings where the loss vectors are (i) noisy versions of vectors from a low-dimensional subspace, (ii) sparse vectors corrupted with noise, and (iii) sparse perturbations of low-rank vectors. For the problem of online learning with structured losses, we also show lower bounds on regret in terms of the rank and sparsity of the loss vectors, which imply lower bounds for the above additive loss settings as well.
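As context for the abstract, the classical instance of OMD for prediction with expert advice uses the negative-entropy regularizer on the probability simplex, which yields the well-known multiplicative-weights (Hedge) update. The sketch below is a minimal illustration of that standard baseline, not the paper's combined-regularizer algorithm; all names and the synthetic loss matrix are illustrative assumptions.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """OMD with the negative-entropy regularizer (Hedge) for expert advice.

    loss_matrix: T x N array; loss_matrix[t, i] is expert i's loss in round t,
                 assumed to lie in [0, 1].
    eta: learning rate.
    Returns (cumulative learner loss, cumulative per-expert losses).
    """
    T, N = loss_matrix.shape
    w = np.full(N, 1.0 / N)            # uniform initial weights over experts
    learner_loss = 0.0
    for t in range(T):
        losses = loss_matrix[t]
        learner_loss += w @ losses      # expected loss of the randomized play
        w = w * np.exp(-eta * losses)   # multiplicative-weights (OMD) update
        w /= w.sum()                    # Bregman projection onto the simplex
    return learner_loss, loss_matrix.sum(axis=0)

# Illustrative run: with eta = sqrt(8 ln N / T) and losses in [0, 1],
# Hedge guarantees regret at most sqrt(T ln N / 2).
rng = np.random.default_rng(0)
T, N = 200, 5
L = rng.random((T, N))
eta = np.sqrt(8 * np.log(N) / T)
learner_loss, cum_expert_losses = hedge(L, eta)
regret = learner_loss - cum_expert_losses.min()
```

The paper's contribution can be read as replacing the single entropy regularizer above with a combination of regularizers matched to the atomic-norm structure of the loss vectors, improving the dependence of the regret on the loss space's geometry.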

Cite

Text

Barman et al. "Online Learning for Structured Loss Spaces." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11669

Markdown

[Barman et al. "Online Learning for Structured Loss Spaces." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/barman2018aaai-online/) doi:10.1609/AAAI.V32I1.11669

BibTeX

@inproceedings{barman2018aaai-online,
  title     = {{Online Learning for Structured Loss Spaces}},
  author    = {Barman, Siddharth and Gopalan, Aditya and Saha, Aadirupa},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {2696--2703},
  doi       = {10.1609/AAAI.V32I1.11669},
  url       = {https://mlanthology.org/aaai/2018/barman2018aaai-online/}
}