A Comparison of Optimization Methods and Software for Large-Scale L1-Regularized Linear Classification
Abstract
Large-scale linear classification is widely used in many areas. The L1-regularized form can be applied for feature selection, but its non-differentiability makes training more difficult. Although various optimization methods have been proposed in recent years, they have not yet been systematically compared. In this paper, we first broadly review existing methods. Then, we discuss state-of-the-art software packages in detail and propose two efficient implementations. Extensive comparisons indicate that carefully implemented coordinate descent methods are very suitable for training large document data.
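To illustrate the flavor of the coordinate descent methods the abstract refers to, here is a minimal sketch on an L1-regularized least-squares problem, where each coordinate update has the closed-form soft-thresholding solution. This is an illustrative assumption, not the paper's algorithm: for the classification losses the paper studies (logistic loss, L2-loss SVM), the one-variable subproblem has no closed form and is solved approximately, e.g. by a Newton step with line search.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * |.| (shrinks z toward zero by t)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_w 0.5*||Xw - y||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                   # residual, maintained incrementally
    col_sq = (X ** 2).sum(axis=0)   # precomputed ||x_j||^2
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            # One-variable subproblem in w_j with all other coordinates fixed
            rho = X[:, j] @ r + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (w[j] - w_new)   # cheap residual update, O(n)
            w[j] = w_new
    return w
```

The incremental residual update is what makes each coordinate pass cheap (O(nnz) per sweep for sparse data), which is why such methods scale well to large, sparse document collections.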
Cite

Yuan, Guo-Xun, Kai-Wei Chang, Cho-Jui Hsieh, and Chih-Jen Lin. "A Comparison of Optimization Methods and Software for Large-Scale L1-Regularized Linear Classification." Journal of Machine Learning Research 11 (2010): 3183-3234.
@article{yuan2010jmlr-comparison,
title = {{A Comparison of Optimization Methods and Software for Large-Scale L1-Regularized Linear Classification}},
author = {Yuan, Guo-Xun and Chang, Kai-Wei and Hsieh, Cho-Jui and Lin, Chih-Jen},
journal = {Journal of Machine Learning Research},
year = {2010},
pages = {3183-3234},
volume = {11},
url = {https://mlanthology.org/jmlr/2010/yuan2010jmlr-comparison/}
}