Differentiable Sparse Coding

Abstract

Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a Laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate, making it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
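
The core computational idea, differentiating the MAP estimate through its first-order optimality conditions, can be sketched in a few lines. The following is a minimal illustration rather than the authors' implementation: the entropic prior `sum(w log w - w)` stands in for their KL-regularizer, and the basis `B`, the prior weight `lam`, and the problem sizes are arbitrary assumptions chosen for the demo.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_features, n_atoms = 8, 5
B = rng.normal(size=(n_features, n_atoms))   # basis / dictionary (assumed fixed)
lam = 0.5                                    # prior strength (illustrative value)

def map_estimate(x):
    """MAP weights for 0.5*||x - B w||^2 + lam * sum(w log w - w), w > 0.
    Optimizing over u = log w keeps the weights positive automatically."""
    def f(u):
        w = np.exp(u)
        r = B @ w - x
        val = 0.5 * r @ r + lam * np.sum(w * u - w)
        g = (B.T @ r + lam * u) * w          # chain rule through w = exp(u)
        return val, g
    res = minimize(f, np.zeros(n_atoms), jac=True, method="L-BFGS-B",
                   options={"ftol": 1e-14, "gtol": 1e-12})
    return np.exp(res.x)

x = rng.normal(size=n_features)
w_star = map_estimate(x)

# Implicit function theorem: the stationarity condition
#     g(w, x) = B^T (B w - x) + lam * log(w) = 0
# implicitly defines w*(x), so
#     dw*/dx = -(dg/dw)^{-1} (dg/dx) = (B^T B + lam diag(1/w*))^{-1} B^T,
# with no need to unroll or differentiate through the optimizer's iterations.
H = B.T @ B + lam * np.diag(1.0 / w_star)
J = np.linalg.solve(H, B.T)                  # Jacobian, shape (n_atoms, n_features)

# Central-difference sanity check against one input coordinate.
eps = 1e-5
e0 = np.zeros(n_features); e0[0] = eps
fd = (map_estimate(x + e0) - map_estimate(x - e0)) / (2 * eps)
print("max |implicit - finite diff|:", np.max(np.abs(J[:, 0] - fd)))
```

The same Jacobian taken with respect to the basis entries (rather than the input) is what allows the prior's parameters to be trained by gradient descent on a downstream prediction loss, as the abstract describes.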

Cite

Text

Bagnell and Bradley. "Differentiable Sparse Coding." Neural Information Processing Systems, 2008.

Markdown

[Bagnell and Bradley. "Differentiable Sparse Coding." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/bagnell2008neurips-differentiable/)

BibTeX

@inproceedings{bagnell2008neurips-differentiable,
  title     = {{Differentiable Sparse Coding}},
  author    = {Bagnell, J. A. and Bradley, David M.},
  booktitle = {Neural Information Processing Systems},
  year      = {2008},
  pages     = {113--120},
  url       = {https://mlanthology.org/neurips/2008/bagnell2008neurips-differentiable/}
}