Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints

Abstract

There are two main families of on-line algorithms, depending on whether a relative entropy or a squared Euclidean distance is used as the regularizer. The performance difference between the two families can be dramatic. The question is whether one can always achieve comparable performance by replacing the relative-entropy regularizer with the squared Euclidean distance plus additional linear constraints. We formulate a simple open problem along these lines for the case of learning disjunctions.
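The two families can be illustrated by their canonical update rules: squared-Euclidean regularization yields an additive (gradient-descent-style) update, while relative-entropy regularization yields a multiplicative (exponentiated-gradient-style) update that keeps the weights on the probability simplex. The following Python sketch is illustrative only and is not taken from the paper; the function names, step size, and toy gradient are all assumptions.

```python
import numpy as np

def gd_update(w, grad, eta=0.1):
    """Additive update arising from a squared-Euclidean regularizer."""
    return w - eta * grad

def eg_update(w, grad, eta=0.1):
    """Multiplicative update arising from a relative-entropy regularizer.

    Renormalization keeps the weight vector on the probability simplex.
    """
    v = w * np.exp(-eta * grad)
    return v / v.sum()

# One step on a toy gradient that penalizes the first coordinate.
w0 = np.full(4, 0.25)                # uniform start on the simplex
g = np.array([1.0, 0.0, 0.0, 0.0])
w_gd = gd_update(w0, g)              # moves additively; leaves the simplex
w_eg = eg_update(w0, g)              # moves multiplicatively; stays on it
```

The contrast in how the two updates treat sparse targets (such as disjunctions over a few relevant variables) is exactly what makes the performance gap between the families dramatic.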

Cite

Text

Warmuth. "Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints." Annual Conference on Computational Learning Theory, 2006. doi:10.1007/11776420_48

Markdown

[Warmuth. "Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints." Annual Conference on Computational Learning Theory, 2006.](https://mlanthology.org/colt/2006/warmuth2006colt-entropic/) doi:10.1007/11776420_48

BibTeX

@inproceedings{warmuth2006colt-entropic,
  title     = {{Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints}},
  author    = {Warmuth, Manfred K.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2006},
  pages     = {653--654},
  doi       = {10.1007/11776420_48},
  url       = {https://mlanthology.org/colt/2006/warmuth2006colt-entropic/}
}