Scale-Invariant Unconstrained Online Learning
Abstract
We consider a variant of online convex optimization in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale-invariance symmetry of this unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms that also enjoy this property, i.e., are scale-invariant. We start with the case of coordinate-wise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm which achieves an essentially optimal regret bound in this setup, expressed in terms of a coordinate-wise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of a scale-invariant norm of the comparator in the worst case. Next, we complement this result with a positive one, providing an algorithm which "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the norm of the instances.
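To make the symmetry concrete, here is a minimal sketch of the invariance the abstract refers to; the notation (a comparator u, instances x_t, and an invertible transformation A) is assumed for illustration and is not taken from the paper's body. With linear predictions \hat{y}_t = u^\top x_t, rescaling the instances and mapping the comparator accordingly leaves every prediction unchanged:

\[
  x_t \mapsto A x_t, \qquad u \mapsto A^{-\top} u
  \quad\Longrightarrow\quad
  (A^{-\top} u)^\top (A x_t) = u^\top A^{-1} A x_t = u^\top x_t ,
\]

so the optimal comparator's cumulative loss is the same before and after the transformation. Coordinate-wise invariance is the special case where A is diagonal, i.e., each feature x_{t,i} is rescaled by its own factor a_i \neq 0.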
Cite
Text
Kotłowski. "Scale-Invariant Unconstrained Online Learning." Proceedings of the 28th International Conference on Algorithmic Learning Theory, 2017.Markdown
[Kotłowski. "Scale-Invariant Unconstrained Online Learning." Proceedings of the 28th International Conference on Algorithmic Learning Theory, 2017.](https://mlanthology.org/alt/2017/kotowski2017alt-scaleinvariant/)BibTeX
@inproceedings{kotowski2017alt-scaleinvariant,
title = {{Scale-Invariant Unconstrained Online Learning}},
author = {Kotłowski, Wojciech},
booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
year = {2017},
pages = {412-433},
volume = {76},
url = {https://mlanthology.org/alt/2017/kotowski2017alt-scaleinvariant/}
}