Convergence of Online K-Means
Abstract
We prove asymptotic convergence for a general class of k-means algorithms performed over streaming data from a distribution: the centers asymptotically converge to the set of stationary points of the k-means objective function. To do so, we show that online k-means over a distribution can be interpreted as stochastic gradient descent with a stochastic learning rate schedule. Then, we prove convergence by extending techniques used in the optimization literature to handle settings where center-specific learning rates may depend on the past trajectory of the centers.
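As a point of reference for the update being analyzed, the classical MacQueen-style online k-means rule moves the nearest center toward each arriving point with a center-specific rate 1/n_i, where n_i is the number of points assigned to center i so far. The sketch below is illustrative only (function and variable names are not from the paper); it shows how each step is an SGD step on the k-means objective with a stochastic, trajectory-dependent learning rate:

```python
import numpy as np

def online_kmeans(stream, init_centers):
    """MacQueen-style online k-means over a stream of points.

    Each arriving point x is assigned to its nearest center, whose
    count n_i is incremented; the center then takes an SGD step
    toward x with learning rate eta = 1/n_i. The rate is stochastic
    and depends on the past trajectory, since n_i depends on all
    previous assignments.
    """
    centers = np.array(init_centers, dtype=float)
    counts = np.zeros(len(centers))
    for x in stream:
        x = np.asarray(x, dtype=float)
        # assign x to the nearest center
        i = np.argmin(np.linalg.norm(centers - x, axis=1))
        counts[i] += 1
        eta = 1.0 / counts[i]  # center-specific, trajectory-dependent rate
        # SGD step on the k-means objective for center i
        centers[i] += eta * (x - centers[i])
    return centers
```

With this rate schedule, center i ends up at the running mean of the points assigned to it, which is why the update can be read as gradient descent on the k-means objective.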
So et al. "Convergence of Online K-Means." Artificial Intelligence and Statistics, 2022.
@inproceedings{so2022aistats-convergence,
  title     = {{Convergence of Online K-Means}},
  author    = {So, Geelon and Mahajan, Gaurav and Dasgupta, Sanjoy},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {8534--8569},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/so2022aistats-convergence/}
}