Non-Convex Feature Learning via L_p, Inf Operator
Abstract
We present a feature selection method for solving a sparse regularization problem, which has a composite regularization of the $\ell_p$ norm and the $\ell_{\infty}$ norm. We use a proximal gradient method to solve this $\ell_{p,\infty}$ operator problem, where a simple but efficient algorithm is designed to minimize a relatively simple objective function involving the $\ell_2$ norm and the $\ell_\infty$ norm of a vector. The proposed method offers insight into solving sparsity-favoring norms, and extensive experiments are conducted to characterize the effect of varying $p$ and to compare with other approaches on real-world multi-class and multi-label datasets.
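As an illustrative sketch of the proximal gradient framework the abstract describes, the snippet below solves the convex $\ell_\infty$-regularized least-squares special case (not the paper's non-convex $\ell_{p,\infty}$ operator itself). It uses the standard Moreau-decomposition identity that the prox of $\lambda\|\cdot\|_\infty$ equals the residual of Euclidean projection onto the $\ell_1$ ball of radius $\lambda$; all function names and parameters here are assumptions for illustration only.

```python
import numpy as np

def project_l1_ball(v, z):
    """Euclidean projection of v onto the l1 ball of radius z
    (sort-based algorithm of Duchi et al.)."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    j = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - z) / j > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)    # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Prox of lam*||.||_inf via Moreau decomposition:
    v minus its projection onto the l1 ball of radius lam."""
    return v - project_l1_ball(v, lam)

def proximal_gradient(X, y, lam, n_iter=200):
    """Proximal gradient descent for 0.5*||Xw - y||^2 + lam*||w||_inf.
    (Convex sketch; illustrative stand-in for the non-convex L_{p,inf} case.)"""
    L = np.linalg.norm(X, 2) ** 2         # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = prox_linf(w - grad / L, lam / L)
    return w
```

Note the characteristic effect of the $\ell_\infty$ prox: small vectors inside the $\ell_1$ ball are mapped exactly to zero, while larger vectors have their leading coordinates clipped to a common magnitude, which is what makes this family of norms useful for grouped feature selection.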
Cite
Text
Kong and Ding. "Non-Convex Feature Learning via L_p, Inf Operator." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.9010
Markdown
[Kong and Ding. "Non-Convex Feature Learning via L_p, Inf Operator." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/kong2014aaai-non/) doi:10.1609/AAAI.V28I1.9010
BibTeX
@inproceedings{kong2014aaai-non,
title = {{Non-Convex Feature Learning via L_p, Inf Operator}},
author = {Kong, Deguang and Ding, Chris H. Q.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2014},
pages = {1918-1924},
doi = {10.1609/AAAI.V28I1.9010},
url = {https://mlanthology.org/aaai/2014/kong2014aaai-non/}
}