Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation

Abstract

Multinomial logistic regression provides the standard penalised maximum-likelihood solution to multi-class pattern recognition problems. More recently, the development of sparse multinomial logistic regression models has found application in text processing and microarray classification, where explicit identification of the most informative features is of value. In this paper, we propose a sparse multinomial logistic regression method, in which the sparsity arises from the use of a Laplace prior, but where the usual regularisation parameter is integrated out analytically. Evaluation over a range of benchmark datasets reveals that this approach achieves generalisation performance similar to that obtained using cross-validation, but at greatly reduced computational expense.
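The paper's contribution is eliminating the regularisation parameter by analytic integration; as a simpler illustrative sketch (not the authors' algorithm), the snippet below fits the underlying model the abstract describes: multinomial logistic regression whose MAP solution under a Laplace prior with a *fixed* scale `lam` is the familiar L1-penalised fit, optimised here by proximal gradient descent with soft-thresholding. All names, data, and hyperparameter values are invented for illustration.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_sparse_mlr(X, y, lam=0.05, lr=0.1, n_iter=1000):
    """MAP weights for multinomial logistic regression under a Laplace
    prior with fixed scale lam (an L1 penalty), via proximal gradient
    descent. Illustrative only: the paper instead integrates the
    regularisation parameter out analytically."""
    n, d = X.shape
    k = int(y.max()) + 1
    Y = np.eye(k)[y]                      # one-hot targets
    W = np.zeros((d, k))
    for _ in range(n_iter):
        P = softmax(X @ W)
        grad = X.T @ (P - Y) / n          # gradient of mean negative log-likelihood
        W = W - lr * grad
        # Proximal step for lam * ||W||_1: soft-threshold every weight,
        # which drives uninformative weights to exactly zero (sparsity).
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)
    return W

# Toy problem: only the first two of ten features carry class information,
# so a sparse solution should zero out most of the remaining weights.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 10))
B = np.array([[3.0, -3.0, 0.0], [0.0, 3.0, -3.0]])
y = (X[:, :2] @ B).argmax(axis=1)

W = fit_sparse_mlr(X, y)
acc = np.mean(softmax(X @ W).argmax(axis=1) == y)
```

The soft-thresholding step is what makes the Laplace prior attractive for feature identification: unlike a Gaussian (L2) prior, it produces weights that are exactly zero rather than merely small.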

Cite

Text

Cawley et al. "Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation." Neural Information Processing Systems, 2006.

Markdown

[Cawley et al. "Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation." Neural Information Processing Systems, 2006.](https://mlanthology.org/neurips/2006/cawley2006neurips-sparse/)

BibTeX

@inproceedings{cawley2006neurips-sparse,
  title     = {{Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation}},
  author    = {Cawley, Gavin C. and Talbot, Nicola L. and Girolami, Mark},
  booktitle = {Neural Information Processing Systems},
  year      = {2006},
  pages     = {209--216},
  url       = {https://mlanthology.org/neurips/2006/cawley2006neurips-sparse/}
}