Particle Filtered MCMC-MLE with Connections to Contrastive Divergence
Abstract
Learning undirected graphical models such as Markov random fields is an important machine learning task with applications in many domains. Since it is usually intractable to learn these models exactly, various approximate learning techniques have been developed, such as contrastive divergence (CD) and Markov chain Monte Carlo maximum likelihood estimation (MCMC-MLE). In this paper, we introduce particle filtered MCMC-MLE, which is a sampling-importance-resampling version of MCMC-MLE with additional MCMC rejuvenation steps. We also describe a unified view of (1) MCMC-MLE, (2) our particle filtering approach, and (3) a stochastic approximation procedure known as persistent contrastive divergence. We show how these approaches are related to each other and discuss the relative merits of each approach. Empirical results on various undirected models demonstrate that the particle filtering technique we propose in this paper can significantly outperform MCMC-MLE. Furthermore, in certain cases, the proposed technique is faster than persistent CD.
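The abstract's description of particle filtered MCMC-MLE — maintain importance-weighted particles as the parameters move, resample when the weights degenerate, and rejuvenate with MCMC steps — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy fully visible Boltzmann machine, the learning rate, the ESS threshold of N/2, and all variable names are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration): fully visible Boltzmann machine
#   p(x; W) ∝ exp(x^T W x / 2),  x ∈ {0,1}^d,  W symmetric, zero diagonal.
d = 5
W_true = rng.normal(0, 1, (d, d))
W_true = (W_true + W_true.T) / 2
np.fill_diagonal(W_true, 0)

def gibbs_sweep(X, W):
    # One systematic Gibbs sweep over all units; used both to simulate
    # training data and as the MCMC rejuvenation kernel.
    for j in range(d):
        act = X @ W[:, j]                       # W symmetric, diag zero
        p = 1.0 / (1.0 + np.exp(-act))
        X[:, j] = (rng.random(len(X)) < p).astype(float)
    return X

# Simulate "training data" with a long Gibbs run from W_true.
X_data = (rng.random((200, d)) < 0.5).astype(float)
for _ in range(500):
    X_data = gibbs_sweep(X_data, W_true)
data_stats = (X_data.T @ X_data) / len(X_data)  # empirical E[x x^T]

# Particle filtered MCMC-MLE (sketch).
N = 100                                          # number of particles
W = np.zeros((d, d))                             # parameter estimate
particles = (rng.random((N, d)) < 0.5).astype(float)
logw = np.zeros(N)                               # log importance weights
lr = 0.05                                        # learning rate (assumed)

for t in range(200):
    # Self-normalized importance weights estimate the model expectation.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    model_stats = np.einsum('n,ni,nj->ij', w, particles, particles)

    # Approximate gradient ascent on the log-likelihood.
    dW = lr * (data_stats - model_stats)
    np.fill_diagonal(dW, 0)
    W += dW

    # Incremental weight update: as W moves by dW, each particle's log
    # unnormalized probability changes by x^T dW x / 2.
    logw += 0.5 * np.einsum('ni,ij,nj->n', particles, dW, particles)

    # Resample + rejuvenate when the effective sample size drops.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)
    if ess < N / 2:
        idx = rng.choice(N, size=N, p=w)
        particles = particles[idx].copy()
        logw = np.zeros(N)                       # weights uniform after resampling
        particles = gibbs_sweep(particles, W)    # MCMC rejuvenation at current W
```

Resetting the weights to uniform after rejuvenation is valid because the Gibbs kernel leaves the current model p(x; W) invariant; this contrasts with plain MCMC-MLE, where the proposal parameter stays fixed and the weights degenerate as W drifts away from it.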
Cite

Text

Asuncion et al. "Particle Filtered MCMC-MLE with Connections to Contrastive Divergence." International Conference on Machine Learning, 2010.

BibTeX
@inproceedings{asuncion2010icml-particle,
title = {{Particle Filtered MCMC-MLE with Connections to Contrastive Divergence}},
author = {Asuncion, Arthur U. and Liu, Qiang and Ihler, Alexander T. and Smyth, Padhraic},
booktitle = {International Conference on Machine Learning},
year = {2010},
pages = {47--54},
url = {https://mlanthology.org/icml/2010/asuncion2010icml-particle/}
}