Perspectives on Sparse Bayesian Learning

Abstract

Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The methodology incorporates an additional set of hyperparameters governing the prior, one for each weight, and then adopts a specific approximation to the full marginalization over all weights and hyperparameters. Despite its empirical success, however, no rigorous motivation for this particular approximation is currently available. To address this issue, we demonstrate that SBL can be recast as the application of a rigorous variational approximation to the full model by expressing the prior in a dual form. This formulation obviates the necessity of assuming any hyperpriors and leads to natural, intuitive explanations of why sparsity is achieved in practice.
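
For context, the SBL procedure the abstract refers to assigns each weight its own precision hyperparameter and re-estimates those hyperparameters by (approximate) evidence maximization; many precisions diverge during the iterations, pruning the corresponding weights and yielding a sparse model. The snippet below is a minimal sketch of the standard type-II maximum-likelihood updates for SBL regression in the style of Tipping's original RVM, not the variational dual-form analysis developed in this paper; all function and variable names are illustrative.

```python
import numpy as np

def sbl_regression(Phi, t, n_iters=100, prune_tol=1e6):
    """Sketch of classical SBL / RVM regression via type-II maximum likelihood.

    Phi : (N, M) design matrix, t : (N,) targets.
    Each weight w_i has its own precision hyperparameter alpha_i; iterating
    the evidence-based fixed-point updates drives many alpha_i toward
    infinity, which effectively removes the associated basis functions.
    """
    N, M = Phi.shape
    alpha = np.ones(M)           # per-weight precision hyperparameters
    beta = 1.0 / np.var(t)       # noise precision (initial guess)

    for _ in range(n_iters):
        # Gaussian posterior over the weights given current hyperparameters
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t

        # "Effective number of parameters" contributed by each weight
        gamma = 1.0 - alpha * np.diag(Sigma)

        # MacKay/Tipping-style re-estimation of the hyperparameters
        alpha = gamma / (mu ** 2 + 1e-12)
        resid = t - Phi @ mu
        beta = (N - gamma.sum()) / (resid @ resid + 1e-12)

    keep = alpha < prune_tol     # surviving "relevant" basis functions
    return mu, alpha, beta, keep
```

The paper's contribution is to explain why an approximation of this kind is justified, by rewriting the sparsity-inducing prior in a dual form and interpreting the procedure as a rigorous variational approximation, without needing explicit hyperpriors.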

Cite

Text

Palmer et al. "Perspectives on Sparse Bayesian Learning." Neural Information Processing Systems, 2003.

Markdown

[Palmer et al. "Perspectives on Sparse Bayesian Learning." Neural Information Processing Systems, 2003.](https://mlanthology.org/neurips/2003/palmer2003neurips-perspectives/)

BibTeX

@inproceedings{palmer2003neurips-perspectives,
  title     = {{Perspectives on Sparse Bayesian Learning}},
  author    = {Palmer, Jason and Rao, Bhaskar D. and Wipf, David P.},
  booktitle = {Neural Information Processing Systems},
  year      = {2003},
  pages     = {249--256},
  url       = {https://mlanthology.org/neurips/2003/palmer2003neurips-perspectives/}
}