Learning Sparse Confidence-Weighted Classifier on Very High Dimensional Data
Abstract
Confidence-weighted (CW) learning is a successful online learning paradigm that maintains a Gaussian distribution over classifier weights and adopts a covariance matrix to represent the uncertainty of the weight vector. However, existing full CW learning paradigms suffer from two deficiencies: sensitivity to irrelevant features, and poor scalability to high-dimensional data due to the maintenance of the full covariance structure. In this paper, we begin by presenting an online-batch CW learning scheme, and then propose a novel paradigm for learning sparse CW classifiers. The proposed paradigm essentially identifies feature groups and naturally builds a block-diagonal covariance structure, making it very suitable for CW learning over very high-dimensional data. Extensive experimental results demonstrate the superior performance of the proposed methods over state-of-the-art counterparts on classification and feature selection tasks.
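To illustrate the CW family of updates the abstract refers to, the sketch below implements a simplified AROW-style online update with a diagonal covariance. This is an assumption-laden illustration of the general CW mechanism (Gaussian mean plus per-feature variance), not the paper's method, which learns a block-diagonal covariance over identified feature groups; the function name and the regularizer `r` are hypothetical.

```python
import numpy as np

def cw_diag_update(mu, sigma, x, y, r=1.0):
    """One CW-style (AROW-like) online update with a diagonal covariance.

    mu    : mean of the Gaussian over weights (d,)
    sigma : per-feature variance, i.e. the diagonal of the covariance (d,)
    x     : feature vector (d,), y : label in {-1, +1}, r : regularization.

    Illustrative sketch only; the paper instead maintains a
    block-diagonal covariance over learned feature groups.
    """
    margin = y * mu.dot(x)                 # signed margin under the mean
    confidence = np.sum(sigma * x * x)     # variance of the margin, x' Sigma x
    loss = max(0.0, 1.0 - margin)          # hinge loss at the mean
    if loss > 0.0:
        beta = 1.0 / (confidence + r)
        alpha = loss * beta
        mu = mu + alpha * y * sigma * x            # shift mean toward correct side
        sigma = sigma - beta * (sigma * x) ** 2    # shrink variance on seen features
    return mu, sigma
```

A single pass over a linearly separable stream is typically enough for this update to fit the data, while the variances of frequently seen features shrink, encoding growing confidence.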
Cite
Tan et al. "Learning Sparse Confidence-Weighted Classifier on Very High Dimensional Data." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10281
@inproceedings{tan2016aaai-learning,
title = {{Learning Sparse Confidence-Weighted Classifier on Very High Dimensional Data}},
author = {Tan, Mingkui and Yan, Yan and Wang, Li and van den Hengel, Anton and Tsang, Ivor W. and Shi, Qinfeng (Javen)},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {2080--2086},
doi = {10.1609/AAAI.V30I1.10281},
url = {https://mlanthology.org/aaai/2016/tan2016aaai-learning/}
}