Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption
Abstract
In this paper we generalize the framework of the Feasible Descent Method (FDM) to a Randomized Feasible Descent Method (R-FDM) and a Randomized Coordinate-wise Feasible Descent Method (RC-FDM) framework. We show that many machine learning algorithms, including the well-known SDCA algorithm for optimizing the SVM dual problem and the stochastic coordinate descent method for the LASSO problem, fit into the RC-FDM framework. We prove linear convergence for both R-FDM and RC-FDM under the weak strong convexity assumption. Moreover, we show that the duality gap converges linearly for RC-FDM, which implies that the duality gap also converges linearly for SDCA applied to the SVM dual problem.
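
For context on the assumption in the title: weak strong convexity (sometimes called quasi-strong convexity) relaxes strong convexity by requiring quadratic growth only toward the projection of the current point onto the optimal solution set, so functions with non-unique minimizers (e.g. least squares with a rank-deficient design matrix) can still satisfy it. A common formulation, which we believe matches the paper's usage although the precise statement there may differ in details, is

    f(\bar{x}) \;\ge\; f(x) + \langle \nabla f(x),\, \bar{x} - x \rangle + \frac{\mu}{2}\, \|\bar{x} - x\|^2 \qquad \text{for all feasible } x,

where \bar{x} denotes the projection of x onto the optimal set X^* and \mu > 0.

As an illustration of one method the abstract places inside the RC-FDM framework, below is a minimal sketch of SDCA for the L2-regularized hinge-loss SVM dual, in the standard parametrization of Shalev-Shwartz and Zhang; the function name, synthetic data, and hyperparameters are illustrative and not taken from the paper.

    import numpy as np

    def sdca_svm(X, y, lam=0.1, epochs=50, seed=0):
        """Minimal SDCA sketch for the L2-regularized hinge-loss SVM dual.

        Dual variables alpha_i live in [0, 1]; the primal iterate is
        w = (1 / (lam * n)) * sum_i alpha_i * y_i * x_i.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        alpha = np.zeros(n)
        w = np.zeros(d)
        sq_norms = np.einsum("ij,ij->i", X, X)  # precomputed ||x_i||^2
        for _ in range(epochs):
            for i in rng.permutation(n):  # one randomized coordinate per step
                if sq_norms[i] == 0.0:
                    continue
                # Closed-form maximization of the dual along coordinate i,
                # clipped to the box constraint alpha_i in [0, 1].
                margin = 1.0 - y[i] * (X[i] @ w)
                new_alpha = np.clip(alpha[i] + lam * n * margin / sq_norms[i], 0.0, 1.0)
                w += (new_alpha - alpha[i]) * y[i] * X[i] / (lam * n)
                alpha[i] = new_alpha
        return w, alpha

    # Tiny synthetic usage example (illustrative only).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    y = np.sign(X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200))
    w, alpha = sdca_svm(X, y)
    print("train accuracy:", float(np.mean(np.sign(X @ w) == y)))

Each coordinate update here stays feasible by construction (the clip enforces the box constraint), which is the "feasible descent" structure the framework captures.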
Cite
Text
Ma et al. "Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption." Journal of Machine Learning Research, 2016.Markdown
[Ma et al. "Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption." Journal of Machine Learning Research, 2016.](https://mlanthology.org/jmlr/2016/ma2016jmlr-linear/)BibTeX
@article{ma2016jmlr-linear,
title = {{Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption}},
author = {Ma, Chenxin and Tappenden, Rachael and Takáč, Martin},
journal = {Journal of Machine Learning Research},
year = {2016},
pages = {1--24},
volume = {17},
url = {https://mlanthology.org/jmlr/2016/ma2016jmlr-linear/}
}