Super-Samples from Kernel Herding
Abstract
We extend the herding algorithm to continuous spaces by using the kernel trick. The resulting "kernel herding" algorithm is an infinite memory deterministic process that learns to approximate a PDF with a collection of samples. We show that kernel herding decreases the error of expectations of functions in the Hilbert space at a rate O(1/T), which is much faster than the usual O(1/√T) rate for iid random samples. We illustrate kernel herding by approximating Bayesian predictive distributions.
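As a concrete illustration of the greedy selection rule the abstract describes, here is a minimal sketch of kernel herding over a finite candidate pool standing in for the continuous target distribution, assuming a Gaussian (RBF) kernel. The function names (`rbf_kernel`, `kernel_herding`) and the pool-based approximation of the kernel mean are illustrative choices, not code from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_herding(pool, num_samples, gamma=1.0):
    # Greedy kernel herding over a finite candidate pool. The pool stands in
    # for the target p: its empirical kernel mean approximates E_p[k(x, .)].
    # Each step picks the candidate maximising the gap between that mean and
    # the kernel mean of the samples chosen so far, i.e.
    #   x_{T+1} = argmax_x  mu_p(x) - (1/(T+1)) * sum_t k(x, x_t).
    K = rbf_kernel(pool, pool, gamma)      # pairwise kernel values
    mu_p = K.mean(axis=1)                  # empirical kernel mean map mu_p(x)
    chosen = []
    running_sum = np.zeros(len(pool))      # sum_t k(x, x_t) for every candidate x
    for t in range(num_samples):
        scores = mu_p - running_sum / (t + 1)
        idx = int(np.argmax(scores))
        chosen.append(idx)
        running_sum += K[:, idx]
    return pool[chosen]

# Usage: "super-samples" from a 2-D Gaussian represented by a 2000-point pool.
rng = np.random.default_rng(0)
pool = rng.normal(size=(2000, 2))
super_samples = kernel_herding(pool, num_samples=50, gamma=0.5)
print(super_samples.shape)                 # (50, 2)
```

The precomputed N×N kernel matrix keeps each herding step cheap but dominates memory for a large pool; one could instead recompute kernel columns on demand at the cost of extra kernel evaluations per step.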
Cite
Text
Chen et al. "Super-Samples from Kernel Herding." Conference on Uncertainty in Artificial Intelligence, 2010.
Markdown
[Chen et al. "Super-Samples from Kernel Herding." Conference on Uncertainty in Artificial Intelligence, 2010.](https://mlanthology.org/uai/2010/chen2010uai-super/)
BibTeX
@inproceedings{chen2010uai-super,
title = {{Super-Samples from Kernel Herding}},
author = {Chen, Yutian and Welling, Max and Smola, Alexander J.},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2010},
pages = {109--116},
url = {https://mlanthology.org/uai/2010/chen2010uai-super/}
}