Learning Nonlinear Overcomplete Representations for Efficient Coding

Abstract

We derive a learning algorithm for inferring an overcomplete basis by viewing it as a probabilistic model of the observed data. Overcomplete bases allow for better approximation of the underlying statistical density. Using a Laplacian prior on the basis coefficients removes redundancy and leads to representations that are sparse and are a nonlinear function of the data. This can be viewed as a generalization of the technique of independent component analysis and provides a method for blind source separation of fewer mixtures than sources. We demonstrate the utility of overcomplete representations on natural speech and show that, compared to the traditional Fourier basis, the inferred representations potentially have much greater coding efficiency.
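
The sketch below illustrates the kind of inference step the abstract describes: given a fixed overcomplete basis A, the coefficients s for a data vector x are obtained as a MAP estimate under a Gaussian noise model and a Laplacian prior, i.e. by minimizing 0.5*||x - A s||^2 + lam*||s||_1, which makes the code a nonlinear function of the data. The paper's own optimizer is not reproduced here; this stand-in uses iterative soft-thresholding (ISTA), and the basis A, the penalty weight lam, and the function names are illustrative assumptions only.

    import numpy as np

    def infer_coefficients(A, x, lam=0.1, n_iters=500):
        """MAP inference of sparse coefficients s for x ~ A @ s under a
        Laplacian prior: minimize 0.5*||x - A s||^2 + lam*||s||_1.
        Solved with ISTA as an illustrative stand-in for the paper's optimizer."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
        s = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ s - x)                # gradient of the quadratic data term
            z = s - step * grad
            # soft threshold = proximal operator of the L1 (Laplacian-prior) penalty
            s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        return s

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, m = 8, 16                                # 2x overcomplete: 16 basis vectors in 8 dims
        A = rng.normal(size=(n, m))
        A /= np.linalg.norm(A, axis=0)              # unit-norm basis vectors
        s_true = np.zeros(m)
        s_true[rng.choice(m, size=3, replace=False)] = rng.normal(size=3)
        x = A @ s_true
        s_hat = infer_coefficients(A, x, lam=0.05)
        print("nonzero coefficients (|s| > 1e-2):", np.flatnonzero(np.abs(s_hat) > 1e-2))

Because the basis is overcomplete (m > n), the coefficients are not a linear projection of x; the L1 penalty selects a sparse subset of basis vectors, which is what gives the representation its nonlinearity and potential coding efficiency.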

Cite

Text

Lewicki and Sejnowski. "Learning Nonlinear Overcomplete Representations for Efficient Coding." Neural Information Processing Systems, 1997.

Markdown

[Lewicki and Sejnowski. "Learning Nonlinear Overcomplete Representations for Efficient Coding." Neural Information Processing Systems, 1997.](https://mlanthology.org/neurips/1997/lewicki1997neurips-learning/)

BibTeX

@inproceedings{lewicki1997neurips-learning,
  title     = {{Learning Nonlinear Overcomplete Representations for Efficient Coding}},
  author    = {Lewicki, Michael S. and Sejnowski, Terrence J.},
  booktitle = {Neural Information Processing Systems},
  year      = {1997},
  pages     = {556--562},
  url       = {https://mlanthology.org/neurips/1997/lewicki1997neurips-learning/}
}