Continuous Submodular Maximization: Beyond DR-Submodularity

Abstract

In this paper, we propose the first continuous optimization algorithms that achieve a constant factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\epsilon)$-approximation guarantee while performing $O(n/\epsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\epsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem). We then propose Coordinate-Ascent++, which achieves the tight $(1-1/e-\epsilon)$-approximation guarantee while performing the same number of iterations, but at a higher computational complexity of roughly $O(n^3/\epsilon^{2.5} + n^3 \log n / \epsilon^2)$ per iteration. However, the computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\epsilon}+n\log n)$.
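To make the setting concrete, the following is a minimal sketch of the *vanilla* coordinate-ascent baseline for maximizing a monotone continuous submodular function $F$ subject to a linear (knapsack) constraint $\langle c, x\rangle \le B$ over a box $[0, u]^n$. This is only an illustration of the plain baseline that Coordinate-Ascent+ refines, not the paper's algorithm; the function names, the step-size parameter, and the greedy "gain per unit cost" rule are assumptions made for this sketch.

```python
import numpy as np

def vanilla_coordinate_ascent(F, c, budget, n, upper=1.0, step=0.01):
    """Sketch: greedy coordinate ascent for max F(x) s.t. c @ x <= budget, 0 <= x <= upper.

    Assumptions (not from the paper): F is monotone continuous submodular and
    evaluated as a black box; c > 0 is the linear cost vector; `step` bounds the
    per-iteration coordinate increase.
    """
    x = np.zeros(n)
    spent = 0.0
    while spent < budget:
        base = F(x)
        best_i, best_gain, best_delta = None, 0.0, 0.0
        for i in range(n):
            # Largest feasible step along coordinate i (respect box and budget).
            delta = min(step, upper - x[i], (budget - spent) / c[i])
            if delta <= 0:
                continue
            x[i] += delta
            gain = (F(x) - base) / (delta * c[i])  # marginal gain per unit of cost
            x[i] -= delta
            if gain > best_gain:
                best_i, best_gain, best_delta = i, gain, delta
        if best_i is None:  # no coordinate yields positive marginal gain
            break
        x[best_i] += best_delta
        spent += best_delta * c[best_i]
    return x
```

For example, with a separable concave objective such as `F = lambda x: np.sum(np.sqrt(x + 1e-9))`, uniform costs `c = np.ones(n)`, and `budget = 1.0`, the sketch spreads the budget across coordinates; the paper's contribution is showing how modified coordinate-ascent schemes of this flavor attain constant-factor guarantees (up to the tight $1-1/e$) without requiring DR-submodularity.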

Cite

Text

Feldman and Karbasi. "Continuous Submodular Maximization: Beyond DR-Submodularity." Neural Information Processing Systems, 2020.

Markdown

[Feldman and Karbasi. "Continuous Submodular Maximization: Beyond DR-Submodularity." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/feldman2020neurips-continuous/)

BibTeX

@inproceedings{feldman2020neurips-continuous,
  title     = {{Continuous Submodular Maximization: Beyond DR-Submodularity}},
  author    = {Feldman, Moran and Karbasi, Amin},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/feldman2020neurips-continuous/}
}