Sparse Learning for Stochastic Composite Optimization
Abstract
In this paper, we focus on Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution. Although many SCO algorithms have been developed for sparse learning with an optimal convergence rate $O(1/T)$, they often fail to deliver sparse solutions in the end, either because of limited sparsity regularization during stochastic optimization or because of limitations in the online-to-batch conversion. To improve the sparsity of solutions obtained by SCO, we propose a simple but effective stochastic optimization scheme that adds a novel sparse online-to-batch conversion to traditional SCO algorithms. Our theoretical analysis shows that the scheme finds a solution with a better sparsity pattern without affecting the convergence rate. Experimental results on both synthetic and real-world data sets show that the proposed methods are more effective at recovering the sparse solution and converge at a rate comparable to state-of-the-art SCO algorithms for sparse learning.
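To make the setting concrete, below is a minimal sketch of proximal stochastic gradient descent on an $\ell_1$-regularized objective, a standard SCO baseline of the kind the abstract refers to. It is not the authors' algorithm: the least-squares loss, the $1/\sqrt{t}$ step-size schedule, and all function names are assumptions chosen for illustration. It demonstrates the phenomenon the abstract describes: the classical online-to-batch conversion returns the uniform average of the iterates, which is typically dense even when individual iterates are sparse.

```python
# Hypothetical sketch of l1-regularized SCO via proximal SGD.
# NOT the paper's method; loss, step sizes, and names are assumptions.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_l1(X, y, lam, T, eta0=1.0, seed=0):
    """Proximal SGD on 0.5*(x_i^T w - y_i)^2 + lam*||w||_1.

    Returns both the uniform average of the iterates (the classical
    online-to-batch conversion, typically dense) and the last
    iterate (often much sparser).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = eta0 / np.sqrt(t)            # standard decaying step size
        grad = (X[i] @ w - y[i]) * X[i]    # stochastic gradient of the loss
        w = soft_threshold(w - eta * grad, eta * lam)  # composite prox step
        w_avg += (w - w_avg) / t           # running uniform average
    return w_avg, w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, k = 500, 100, 5
    w_true = np.zeros(d)
    w_true[:k] = 1.0                       # 5-sparse ground truth
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.01 * rng.standard_normal(n)
    w_avg, w_last = prox_sgd_l1(X, y, lam=0.1, T=20000)
    print("nonzeros in averaged solution:", np.count_nonzero(w_avg))
    print("nonzeros in last iterate:    ", np.count_nonzero(w_last))
```

Running the script typically reports a fully dense averaged solution but a much sparser last iterate; closing this gap while keeping the $O(1/T)$ guarantee is what the paper's sparse online-to-batch conversion is designed to do.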
Cite
Text
Zhang et al. "Sparse Learning for Stochastic Composite Optimization." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.8844
Markdown
[Zhang et al. "Sparse Learning for Stochastic Composite Optimization." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/zhang2014aaai-sparse/) doi:10.1609/AAAI.V28I1.8844
BibTeX
@inproceedings{zhang2014aaai-sparse,
title = {{Sparse Learning for Stochastic Composite Optimization}},
author = {Zhang, Weizhong and Zhang, Lijun and Hu, Yao and Jin, Rong and Cai, Deng and He, Xiaofei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2014},
pages = {893--900},
doi = {10.1609/AAAI.V28I1.8844},
url = {https://mlanthology.org/aaai/2014/zhang2014aaai-sparse/}
}