On One Method of Non-Diagonal Regularization in Sparse Bayesian Learning
Abstract
In this paper we propose a new type of regularization procedure for training sparse Bayesian classifiers. Transforming the Hessian matrix of the log-likelihood function to diagonal form, and then regularizing along its eigenvectors, allows us to optimize the evidence explicitly as a product of one-dimensional integrals. The process of automatic determination of the regularization coefficients then converges in one iteration. We show how to apply the proposed approach with Gaussian and Laplace priors. Both algorithms show performance comparable to the state-of-the-art Relevance Vector Machine (RVM) but require less training time and produce sparser decision rules (in terms of degrees of freedom).
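The factorization described in the abstract can be illustrated with a short sketch. This is not the authors' algorithm, only a minimal Laplace-approximation example under assumed names: if the Hessian of the negative log-likelihood at the MAP point is diagonalized as H = V diag(lam) V^T, and the Gaussian prior precisions `alpha` are placed along those eigenvectors, the Gaussian integral defining the evidence splits into one-dimensional factors, one per eigen-direction.

```python
import numpy as np

def log_evidence_diag(hessian, w_map, loglik_at_map, alpha):
    """Laplace-approximate log-evidence with a Gaussian prior placed
    along the eigenvectors of the Hessian (illustrative sketch only).

    hessian       : Hessian of the negative log-likelihood at the MAP point
    w_map         : MAP weight vector
    loglik_at_map : log-likelihood value at the MAP point
    alpha         : prior precisions, one per eigen-direction
    """
    # Diagonalize the (symmetric) Hessian: H = V diag(lam) V^T
    lam, V = np.linalg.eigh(hessian)
    m = V.T @ w_map  # MAP point expressed in the rotated coordinates
    # In the rotated basis the evidence is a product of 1-D integrals,
    # each contributing sqrt(alpha_i / (lam_i + alpha_i)) * exp(-0.5 * alpha_i * m_i^2)
    return (loglik_at_map
            - 0.5 * np.sum(alpha * m ** 2)
            + 0.5 * np.sum(np.log(alpha))
            - 0.5 * np.sum(np.log(lam + alpha)))
```

For a purely quadratic log-likelihood the Laplace approximation is exact, so the sketch can be checked against the closed-form Gaussian evidence; for real classification losses it is only the approximation that the factorized optimization works with.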
Cite
Text
Kropotov and Vetrov. "On One Method of Non-Diagonal Regularization in Sparse Bayesian Learning." International Conference on Machine Learning, 2007. doi:10.1145/1273496.1273554
Markdown
[Kropotov and Vetrov. "On One Method of Non-Diagonal Regularization in Sparse Bayesian Learning." International Conference on Machine Learning, 2007.](https://mlanthology.org/icml/2007/kropotov2007icml-one/) doi:10.1145/1273496.1273554
BibTeX
@inproceedings{kropotov2007icml-one,
title = {{On One Method of Non-Diagonal Regularization in Sparse Bayesian Learning}},
author = {Kropotov, Dmitry and Vetrov, Dmitry P.},
booktitle = {International Conference on Machine Learning},
year = {2007},
pages = {457--464},
doi = {10.1145/1273496.1273554},
url = {https://mlanthology.org/icml/2007/kropotov2007icml-one/}
}