Learning via Gaussian Herding
Abstract
We introduce a new family of online learning algorithms based upon constraining the velocity flow over a distribution of weight vectors. In particular, we show how to effectively herd a Gaussian weight vector distribution by trading off velocity constraints with a loss function. By uniformly bounding this loss function, we demonstrate how to solve the resulting optimization analytically. We compare the resulting algorithms on a variety of real world datasets, and demonstrate how these algorithms achieve state-of-the-art robust performance, especially with high label noise in the training data.
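The abstract describes maintaining a Gaussian distribution over weight vectors and "herding" it online by trading off movement of the distribution against a (uniformly bounded) loss. A minimal sketch of this idea is below, using a diagonal-covariance Gaussian and an NHERD-style closed-form update on hinge-loss violations. The function name, the trade-off parameter `C`, and the exact covariance-shrinkage expression are illustrative assumptions reconstructed from the general approach, not taken verbatim from the paper; consult the paper for the exact full- and diagonal-covariance update rules.

```python
import numpy as np

def gaussian_herding_train(X, y, C=1.0, epochs=10):
    """Illustrative NHERD-style online learner with labels y in {-1, +1}.

    Maintains a Gaussian N(mu, diag(sigma)) over weight vectors; on each
    margin violation it moves the mean analytically and shrinks the
    variance along the observed example (an assumed simplified update).
    """
    d = X.shape[1]
    mu = np.zeros(d)       # mean of the weight distribution
    sigma = np.ones(d)     # diagonal of the covariance
    for _ in range(epochs):
        for x, yt in zip(X, y):
            margin = yt * x.dot(mu)
            v = (x * x).dot(sigma)              # x^T Sigma x for diagonal Sigma
            if margin < 1.0:                    # hinge-loss violation: update
                # closed-form step trading off movement vs. loss
                alpha = (1.0 - margin) / (v + 1.0 / C)
                mu = mu + alpha * yt * sigma * x
                # shrink uncertainty along x; the factor below keeps sigma > 0
                shrink = (C * C * v + 2.0 * C) / (1.0 + C * v) ** 2
                sigma = sigma - (sigma * x) ** 2 * shrink
    return mu, sigma

# Toy usage: a linearly separable problem where sign(x1) gives the label.
X = np.array([[1.0, 0.0], [2.0, 0.5], [-1.0, 0.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
mu, sigma = gaussian_herding_train(X, y)
preds = np.sign(X @ mu)
```

Predictions come from the mean `mu` alone; the covariance only modulates how aggressively each coordinate is updated, so frequently seen directions accumulate low variance and move less.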
Cite
Text
Crammer and Lee. "Learning via Gaussian Herding." Neural Information Processing Systems, 2010.
Markdown
[Crammer and Lee. "Learning via Gaussian Herding." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/crammer2010neurips-learning/)
BibTeX
@inproceedings{crammer2010neurips-learning,
title = {{Learning via Gaussian Herding}},
author = {Crammer, Koby and Lee, Daniel D.},
booktitle = {Neural Information Processing Systems},
year = {2010},
pages = {451--459},
url = {https://mlanthology.org/neurips/2010/crammer2010neurips-learning/}
}