Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions
Abstract
Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of “simple” functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
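To give a feel for the kind of update the abstract refers to, here is a minimal, generic sketch of random coordinate descent on a smooth convex quadratic. This is purely illustrative and is not the paper's algorithm for decomposable submodular functions; the function, matrix, and step rule are all assumptions chosen for a self-contained example.

```python
import random

def random_coordinate_descent(A, b, iters=2000, seed=0):
    """Minimize f(x) = 0.5 * x^T A x - b^T x (A positive definite)
    by repeatedly picking a coordinate i uniformly at random and
    setting x_i to its exact minimizer with the others held fixed.
    Illustrative sketch only; not the algorithm from the paper."""
    n = len(b)
    rng = random.Random(seed)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(n)
        # Partial derivative of f with respect to x_i: (A x)_i - b_i
        g = sum(A[i][j] * x[j] for j in range(n)) - b[i]
        # Exact one-dimensional minimization along coordinate i
        x[i] -= g / A[i][i]
    return x

# Toy instance: the minimizer solves A x = b, i.e. x = (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = random_coordinate_descent(A, b)
```

Each iteration touches a single coordinate, which mirrors the motivation in the abstract: coordinate updates avoid full-dimensional vector operations and, for well-conditioned smooth problems, converge at a linear rate.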
Cite
Text
Ene and Nguyen. "Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions." International Conference on Machine Learning, 2015.

Markdown

[Ene and Nguyen. "Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/ene2015icml-random/)

BibTeX
@inproceedings{ene2015icml-random,
title = {{Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions}},
author = {Ene, Alina and Nguyen, Huy},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {787-795},
volume = {37},
url = {https://mlanthology.org/icml/2015/ene2015icml-random/}
}