A Variational Method for Learning Sparse and Overcomplete Representations

Abstract

An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented. The proposed algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables the analytic marginalization of a lower bound on the data likelihood. This lower bound enables the development of an expectation-maximization algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
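As a rough illustration of the kind of variational bound the abstract refers to (a sketch only; the paper's exact construction for the full heavy-tailed family should be checked against the original), a Laplacian prior p(s) ∝ exp(-|s|) admits a Gaussian-form lower bound through the elementary inequality

\[
\frac{s^2}{2\xi} + \frac{\xi}{2} \;\ge\; |s| \quad (\xi > 0)
\qquad\Longrightarrow\qquad
e^{-|s|} \;\ge\; \exp\!\left(-\frac{s^2}{2\xi} - \frac{\xi}{2}\right),
\]

with equality at \(\xi = |s|\). Because the right-hand side is quadratic in s, such a bound can be combined with a Gaussian likelihood and integrated over the coefficients in closed form, which is what makes the analytic marginalization and the resulting EM updates described in the abstract tractable.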

Cite

Text

Girolami. "A Variational Method for Learning Sparse and Overcomplete Representations." Neural Computation, 2001. doi:10.1162/089976601753196003

Markdown

[Girolami. "A Variational Method for Learning Sparse and Overcomplete Representations." Neural Computation, 2001.](https://mlanthology.org/neco/2001/girolami2001neco-variational/) doi:10.1162/089976601753196003

BibTeX

@article{girolami2001neco-variational,
  title     = {{A Variational Method for Learning Sparse and Overcomplete Representations}},
  author    = {Girolami, Mark A.},
  journal   = {Neural Computation},
  year      = {2001},
  pages     = {2517--2532},
  doi       = {10.1162/089976601753196003},
  volume    = {13},
  url       = {https://mlanthology.org/neco/2001/girolami2001neco-variational/}
}