Near-Optimality of Contrastive Divergence Algorithms

Abstract

We provide a non-asymptotic analysis of the contrastive divergence (CD) algorithm, a training method for unnormalized models. While prior work has established that, for exponential family distributions, the CD iterates asymptotically converge to the true parameter of the data distribution at an $O(n^{-1/3})$ rate, we show that CD can achieve the parametric rate $O(n^{-1/2})$. Our analysis covers various data batching schemes, including fully online and minibatch. We additionally show that CD is near-optimal, in the sense that its asymptotic variance is close to the Cramér-Rao lower bound.
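
For readers unfamiliar with the method analyzed in the abstract, the following is a minimal sketch of a single CD-k update for a one-dimensional exponential family $p_\theta(x) \propto \exp(\theta\, T(x))$, with negative samples produced by a few Markov chain steps started at the data. The kernel choice (random-walk Metropolis), the function names (`cd_step`, `suff_stat`), and the Gaussian example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cd_step(theta, batch, suff_stat, k=1, step_size=0.05, prop_std=0.5, rng=None):
    """One CD-k update for a 1-D exponential family p_theta(x) ∝ exp(theta * T(x)).

    Negative samples come from k Metropolis-Hastings steps initialised at the data,
    which is the defining approximation of contrastive divergence.
    """
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(batch, dtype=float)
    x = data.copy()                                  # chains start at the data points
    for _ in range(k):
        prop = x + prop_std * rng.standard_normal(x.shape)
        # symmetric proposal: accept with probability exp(theta * (T(prop) - T(x)))
        log_accept = theta * (suff_stat(prop) - suff_stat(x))
        accept = np.log(rng.random(x.shape)) < log_accept
        x = np.where(accept, prop, x)
    # CD gradient estimate: E_data[T(x)] minus the k-step approximation of E_theta[T(x)]
    grad = suff_stat(data).mean() - suff_stat(x).mean()
    return theta + step_size * grad

# Illustrative use: T(x) = -x**2 gives a zero-mean Gaussian with variance 1/(2*theta).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_true = 1.0
    data = rng.normal(0.0, np.sqrt(1.0 / (2 * theta_true)), size=10_000)
    theta = 0.2
    # Minibatch scheme; setting the batch size to 1 gives the fully online scheme.
    for t in range(0, len(data), 32):
        theta = cd_step(theta, data[t:t + 32], lambda x: -x**2, k=1, rng=rng)
    print(f"estimated theta ≈ {theta:.2f} (true value {theta_true})")
```

The batching loop above is only meant to mirror the minibatch and fully online schemes mentioned in the abstract; step sizes, the number of MCMC steps k, and the sufficient statistic are placeholder choices.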

Cite

Text

Glaser et al. "Near-Optimality of Contrastive Divergence Algorithms." Neural Information Processing Systems, 2024. doi:10.52202/079017-2890

Markdown

[Glaser et al. "Near-Optimality of Contrastive Divergence Algorithms." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/glaser2024neurips-nearoptimality/) doi:10.52202/079017-2890

BibTeX

@inproceedings{glaser2024neurips-nearoptimality,
  title     = {{Near-Optimality of Contrastive Divergence Algorithms}},
  author    = {Glaser, Pierre and Huang, Kevin Han and Gretton, Arthur},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2890},
  url       = {https://mlanthology.org/neurips/2024/glaser2024neurips-nearoptimality/}
}