Optimization by Mean Field Annealing
Abstract
Nearly optimal solutions to many combinatorial problems can be found using stochastic simulated annealing. This paper extends the concept of simulated annealing from its original formulation as a Markov process to a new formulation based on mean field theory. Mean field annealing (MFA) essentially replaces the discrete degrees of freedom in simulated annealing with their average values as computed by the mean field approximation. The net result is that equilibrium at a given temperature is achieved 1-2 orders of magnitude faster than with simulated annealing. A general framework for the mean field annealing algorithm is derived, and its relationship to Hopfield networks is shown. The behavior of MFA is examined both analytically and experimentally for a generic combinatorial optimization problem: graph bipartitioning. This analysis indicates the presence of critical temperatures which could be important in improving the performance of neural networks.
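The following is a minimal sketch of the idea described in the abstract, applied to graph bipartitioning: each vertex carries an Ising spin s_i in {-1, +1} naming its partition, the discrete spins are replaced by continuous averages v_i, and the self-consistency equation v_i = tanh(h_i / T) is iterated while the temperature is lowered. The energy form (cut term plus a quadratic balance penalty weighted by alpha), the parameter values, and the function name are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def mean_field_annealing(W, alpha=0.5, T_start=5.0, T_end=0.05,
                         cooling=0.9, sweeps=50, tol=1e-4, seed=0):
    """Hedged sketch of mean field annealing for graph bipartitioning.

    W     : symmetric adjacency (weight) matrix with zero diagonal
    alpha : assumed weight of the balance penalty alpha * (sum_i s_i)^2
    The averages v_i = <s_i> are relaxed to self-consistency at each
    temperature, then T is reduced and the relaxation repeated.
    """
    n = W.shape[0]
    rng = np.random.default_rng(seed)
    v = 0.1 * (rng.random(n) - 0.5)      # small random initial averages

    T = T_start
    while T > T_end:
        for _ in range(sweeps):
            v_old = v.copy()
            # effective field: graph term keeps neighbours together
            # (few cut edges); penalty term pushes toward equal halves
            h = W @ v - 2.0 * alpha * (v.sum() - v)
            v = np.tanh(h / T)           # mean field self-consistency update
            if np.max(np.abs(v - v_old)) < tol:
                break                    # equilibrium reached at this T
        T *= cooling                     # anneal the temperature down

    return np.sign(v)                    # harden averages into a partition
```

Because each sweep updates all averages deterministically instead of sampling individual spin flips, equilibrium at each temperature is reached in far fewer iterations than a Metropolis-style simulated annealer, which is the speedup the abstract reports.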
Cite
Text
Bilbro et al. "Optimization by Mean Field Annealing." Neural Information Processing Systems, 1988.
Markdown
[Bilbro et al. "Optimization by Mean Field Annealing." Neural Information Processing Systems, 1988.](https://mlanthology.org/neurips/1988/bilbro1988neurips-optimization/)
BibTeX
@inproceedings{bilbro1988neurips-optimization,
title = {{Optimization by Mean Field Annealing}},
author = {Bilbro, Griff and Mann, Reinhold and Miller, Thomas K. and Snyder, Wesley E. and van den Bout, David E. and White, Mark},
booktitle = {Neural Information Processing Systems},
year = {1988},
pages = {91-98},
url = {https://mlanthology.org/neurips/1988/bilbro1988neurips-optimization/}
}