Probit Classifiers with a Generalized Gaussian Scale Mixture Prior

Abstract

Most existing probit classifiers are built on sparsity-oriented modeling. However, we show that sparsity is not always desirable in practice, and that only an appropriate degree of sparsity is beneficial. In this work, we propose a flexible probabilistic model using a generalized Gaussian scale mixture prior that promotes an appropriate degree of sparsity in its model parameters, yielding either sparse or non-sparse estimates according to the intrinsic sparsity of the features in a dataset. Model learning is carried out by an efficient modified maximum a posteriori (MAP) estimation. We also show the relationships between the proposed model and existing probit classifiers, as well as iteratively re-weighted l1 and l2 minimizations. Experiments demonstrate that the proposed method performs better than or comparably to existing methods, both in feature selection for linear classifiers and in kernel-based classification.
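To make the probit/MAP setup concrete, the sketch below fits a linear probit classifier by gradient ascent on a penalized log-likelihood. This is not the paper's method: the generalized Gaussian scale mixture prior and the modified MAP procedure are replaced here by a plain Gaussian (ridge-style) prior with precision `lam`, purely as a minimal illustration of the probit likelihood being maximized; all function names are my own.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def norm_cdf(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    """Standard normal density phi(z)."""
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def fit_probit_map(X, y, lam=0.1, lr=0.1, n_iter=500):
    """Gradient ascent on sum_i log Phi(y_i w.x_i) - (lam/2)||w||^2.

    Labels y are in {-1, +1}. The Gaussian prior (lam) is a stand-in
    for the paper's more flexible scale mixture prior.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        z = y * (X @ w)  # margins y_i w.x_i
        # Hazard ratio phi(z)/Phi(z); clamp Phi to avoid division by zero.
        ratio = np.array([norm_pdf(zi) / max(norm_cdf(zi), 1e-12) for zi in z])
        grad = (y * ratio) @ X - lam * w
        w += lr * grad / n
    return w

def predict(X, w):
    """Sign of the linear score, as labels in {-1, +1}."""
    return np.where(X @ w >= 0, 1, -1)
```

On linearly separable synthetic data (e.g. `y = sign(x_0)`), this recovers a weight vector aligned with the separating direction; the prior precision `lam` plays the role that the degree of sparsity plays in the paper, trading data fit against shrinkage of `w`.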

Cite

Text

Liu et al. "Probit Classifiers with a Generalized Gaussian Scale Mixture Prior." International Joint Conference on Artificial Intelligence, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-232

Markdown

[Liu et al. "Probit Classifiers with a Generalized Gaussian Scale Mixture Prior." International Joint Conference on Artificial Intelligence, 2011.](https://mlanthology.org/ijcai/2011/liu2011ijcai-probit/) doi:10.5591/978-1-57735-516-8/IJCAI11-232

BibTeX

@inproceedings{liu2011ijcai-probit,
  title     = {{Probit Classifiers with a Generalized Gaussian Scale Mixture Prior}},
  author    = {Liu, Guoqing and Wu, Jianxin and Zhou, Suiping},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2011},
  pages     = {1372--1377},
  doi       = {10.5591/978-1-57735-516-8/IJCAI11-232},
  url       = {https://mlanthology.org/ijcai/2011/liu2011ijcai-probit/}
}