Lock-Free Optimization for Non-Convex Problems
Abstract
Stochastic gradient descent (SGD) and its variants have attracted much attention in machine learning due to their efficiency and effectiveness for optimization. To handle large-scale problems, researchers have recently proposed several parallel SGD methods based on lock-free strategies (LF-PSGD) for multi-core systems. However, existing works have only proved the convergence of these LF-PSGD methods for convex problems. To the best of our knowledge, no work has proved the convergence of LF-PSGD methods for non-convex problems. In this paper, we provide theoretical proofs of the convergence of two representative LF-PSGD methods, Hogwild! and AsySVRG, for non-convex problems. Empirical results also show that both Hogwild! and AsySVRG are convergent on non-convex problems, which verifies our theoretical results.
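The lock-free scheme the abstract refers to (Hogwild!-style LF-PSGD) can be illustrated with a small sketch: several worker threads share one parameter vector and apply stochastic gradient updates to it without any locking. This is a minimal illustration only, not the paper's implementation; the toy non-convex objective, the helper names (`loss_grad`, `run_worker`), and all constants are assumptions made for the example.

```python
# Minimal sketch of Hogwild!-style lock-free parallel SGD (illustrative only).
import threading
import numpy as np

NUM_THREADS = 4     # number of lock-free workers (illustrative choice)
NUM_EPOCHS = 5
STEP_SIZE = 0.05

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))             # synthetic features
y = (rng.random(1000) > 0.5).astype(float)  # synthetic 0/1 labels
w = np.zeros(10)                            # shared parameters, updated without a lock

def loss_grad(w_now, xi, yi):
    """Gradient of the (non-convex) squared sigmoid loss on one example."""
    s = 1.0 / (1.0 + np.exp(-xi @ w_now))
    return 2.0 * (s - yi) * s * (1.0 - s) * xi

def run_worker(indices):
    """Each worker repeatedly samples its examples and writes to w lock-free."""
    local_rng = np.random.default_rng()
    for _ in range(NUM_EPOCHS):
        for i in local_rng.permutation(indices):
            g = loss_grad(w, X[i], y[i])
            w[:] = w - STEP_SIZE * g   # unsynchronized write; updates may interleave

splits = np.array_split(np.arange(len(X)), NUM_THREADS)
threads = [threading.Thread(target=run_worker, args=(s,)) for s in splits]
for t in threads:
    t.start()
for t in threads:
    t.join()

loss = np.mean((1.0 / (1.0 + np.exp(-X @ w)) - y) ** 2)
print("final mean squared loss:", loss)
```

The unsynchronized, possibly overlapping writes to the shared vector are exactly what the paper's convergence analysis has to account for when the objective is non-convex.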
Cite
Text
Zhao et al. "Lock-Free Optimization for Non-Convex Problems." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10921

Markdown
[Zhao et al. "Lock-Free Optimization for Non-Convex Problems." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/zhao2017aaai-lock/) doi:10.1609/AAAI.V31I1.10921

BibTeX
@inproceedings{zhao2017aaai-lock,
title = {{Lock-Free Optimization for Non-Convex Problems}},
author = {Zhao, Shen-Yi and Zhang, Gong-Duo and Li, Wu-Jun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
  pages = {2935--2941},
doi = {10.1609/AAAI.V31I1.10921},
url = {https://mlanthology.org/aaai/2017/zhao2017aaai-lock/}
}