Population Markov Chain Monte Carlo

Abstract

Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings Sampler (MHS), an Evolutionary Algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than an MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs, which do not satisfy physics-inspired local reversibility conditions.
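The core idea described above, a population of Metropolis-Hastings samplers whose proposal distributions are informed by statistics of the population, can be sketched as follows. This is an illustrative toy version on a one-dimensional continuous target, not the authors' exact algorithm (which learns graphical model structures); the mixture weight, step size, and use of the population mean/std as an independence proposal are assumptions for the sketch.

```python
import math
import random


def log_target(x):
    # Toy unnormalized log-density: a standard normal.
    return -0.5 * x * x


def pop_mcmc(n_chains=10, n_steps=2000, seed=0):
    """Illustrative population-MCMC sketch: each chain is a
    Metropolis-Hastings sampler; with some probability its proposal
    borrows the current population mean and standard deviation,
    otherwise it takes a local symmetric random-walk step."""
    rng = random.Random(seed)
    chains = [rng.uniform(-3.0, 3.0) for _ in range(n_chains)]
    samples = []
    for step in range(n_steps):
        # Population statistics computed once per sweep.
        mean = sum(chains) / n_chains
        var = sum((c - mean) ** 2 for c in chains) / n_chains
        std = math.sqrt(var) + 1e-6
        for i, x in enumerate(chains):
            if rng.random() < 0.2:
                # Population-informed independence proposal,
                # with the Hastings correction for its asymmetry.
                y = rng.gauss(mean, std)
                log_q_fwd = -0.5 * ((y - mean) / std) ** 2
                log_q_rev = -0.5 * ((x - mean) / std) ** 2
                log_alpha = (log_target(y) - log_target(x)
                             + log_q_rev - log_q_fwd)
            else:
                # Local symmetric random-walk proposal (no correction needed).
                y = x + rng.gauss(0.0, 0.5)
                log_alpha = log_target(y) - log_target(x)
            if math.log(rng.random() + 1e-300) < log_alpha:
                chains[i] = y
        # Collect samples after a burn-in period.
        if step >= n_steps // 2:
            samples.extend(chains)
    return samples
```

Because every move is accepted or rejected with a Metropolis-Hastings ratio, each chain still targets the correct distribution even though its proposals depend on the rest of the population, which is the sense in which popMCMC retains MCMC's local reversibility while exchanging global information.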

Cite

Text

Laskey and Myers. "Population Markov Chain Monte Carlo." Machine Learning, 2003. doi:10.1023/A:1020206129842

Markdown

[Laskey and Myers. "Population Markov Chain Monte Carlo." Machine Learning, 2003.](https://mlanthology.org/mlj/2003/laskey2003mlj-population/) doi:10.1023/A:1020206129842

BibTeX

@article{laskey2003mlj-population,
  title     = {{Population Markov Chain Monte Carlo}},
  author    = {Laskey, Kathryn B. and Myers, James W.},
  journal   = {Machine Learning},
  year      = {2003},
  pages     = {175--196},
  doi       = {10.1023/A:1020206129842},
  volume    = {50},
  url       = {https://mlanthology.org/mlj/2003/laskey2003mlj-population/}
}