Joint Feature Selection and Subspace Learning
Abstract
Dimensionality reduction is an important topic in machine learning. It can generally be classified into two categories: feature selection and subspace learning. Over the past decades, many methods have been proposed for dimensionality reduction, but most of them study feature selection and subspace learning independently. In this paper, we present a framework for joint feature selection and subspace learning. We reformulate the subspace learning problem and impose an L2,1-norm penalty on the projection matrix to achieve row-sparsity, which selects relevant features and learns the transformation simultaneously. We discuss two instances of the proposed framework and present their optimization algorithms. Experiments on benchmark face recognition data sets show that the proposed framework significantly outperforms state-of-the-art methods.
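The row-sparsity mechanism described in the abstract rests on the L2,1-norm of the projection matrix: the sum of the L2 norms of its rows. Penalizing this quantity drives entire rows to zero, and a zero row means the corresponding input feature contributes nothing to the learned subspace, i.e., it is deselected. The following minimal sketch (not the authors' code) illustrates the norm and how zero rows map to discarded features:

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm of W: sum over rows of each row's L2 norm.

    Rows of W correspond to input features; penalizing this norm
    encourages whole rows to vanish, removing those features.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))

def selected_features(W, tol=1e-8):
    """Indices of features whose projection row is (numerically) nonzero."""
    return np.flatnonzero(np.linalg.norm(W, axis=1) > tol)

# Toy 3-feature, 2-dimensional projection matrix.
W = np.array([[3.0, 4.0],   # row norm 5.0 -> feature 0 kept
              [0.0, 0.0],   # zero row     -> feature 1 discarded
              [1.0, 0.0]])  # row norm 1.0 -> feature 2 kept

print(l21_norm(W))            # 6.0
print(selected_features(W))   # [0 2]
```

A plain L2 (Frobenius) penalty would shrink all entries uniformly without zeroing whole rows, which is why the paper's joint formulation uses the L2,1-norm instead.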
Cite
Text
Gu et al. "Joint Feature Selection and Subspace Learning." International Joint Conference on Artificial Intelligence, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-219
Markdown
[Gu et al. "Joint Feature Selection and Subspace Learning." International Joint Conference on Artificial Intelligence, 2011.](https://mlanthology.org/ijcai/2011/gu2011ijcai-joint/) doi:10.5591/978-1-57735-516-8/IJCAI11-219
BibTeX
@inproceedings{gu2011ijcai-joint,
title = {{Joint Feature Selection and Subspace Learning}},
author = {Gu, Quanquan and Li, Zhenhui and Han, Jiawei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2011},
pages = {1294--1299},
doi = {10.5591/978-1-57735-516-8/IJCAI11-219},
url = {https://mlanthology.org/ijcai/2011/gu2011ijcai-joint/}
}