Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions

Abstract

We study the problem of maximizing deep submodular functions (DSFs) subject to a matroid constraint. DSFs are an expressive class of submodular functions that include, as strict subfamilies, facility location functions, weighted coverage functions, and sums of concave functions composed with modular functions. We use a strategy similar to the continuous greedy approach, but we show that the multilinear extension of any DSF has a natural and computationally attainable concave relaxation that we can optimize using gradient ascent. Our results show a guarantee of $\max_{0<\delta<1}(1-\epsilon-\delta-e^{-\delta^2\Omega(k)})$ with a running time of $O(n^2/\epsilon^2)$, plus the time for pipage rounding to recover a discrete solution, where $k$ is the rank of the matroid constraint. This bound is often better than the standard $1-1/e$ guarantee of the continuous greedy algorithm, and our algorithm runs much faster. Our bound also holds for fully curved ($c=1$) functions, where $c$ is the curvature of $f$ and the curvature-based guarantee $1-c/e$ degenerates to $1-1/e$. We perform computational experiments that support our theoretical results.
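
The following is a minimal sketch, not the authors' implementation, of the kind of procedure the abstract describes: projected gradient ascent on a concave relaxation of a simple one-layer DSF (a sum of concave functions composed with nonnegative modular functions), maximized over a uniform matroid of rank $k$. The particular relaxation $\bar F(x)=\sum_j w_j\sqrt{m_j^\top x}$, all function names, and the final top-$k$ rounding (used here in place of the pipage rounding referenced in the abstract) are illustrative assumptions, not details taken from the paper.

# Sketch only: projected gradient ascent on a concave relaxation of a
# one-layer DSF f(A) = sum_j w_j * sqrt(m_j(A)) under a cardinality-k constraint.
import numpy as np

def project_capped_simplex(y, k, iters=50):
    """Euclidean projection onto {x : 0 <= x <= 1, sum(x) <= k} by bisection on the
    Lagrange multiplier tau, where x = clip(y - tau, 0, 1)."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= k:
        return x
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(y - hi, 0.0, 1.0)

def concave_relaxation_and_grad(x, W, M, eps=1e-12):
    """barF(x) = sum_j W[j] * sqrt(M[j] . x) and its gradient (one-layer DSF example)."""
    inner = M @ x
    val = np.sum(W * np.sqrt(inner + eps))
    grad = M.T @ (W / (2.0 * np.sqrt(inner + eps)))
    return val, grad

def maximize_dsf_relaxation(W, M, k, steps=500, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = M.shape[1]
    x = project_capped_simplex(rng.uniform(size=n), k)
    for _ in range(steps):
        _, g = concave_relaxation_and_grad(x, W, M)
        x = project_capped_simplex(x + lr * g, k)
    # Simple rounding for illustration: keep the k largest coordinates
    # (the paper uses pipage rounding, which is not reproduced here).
    A = np.argsort(-x)[:k]
    return x, set(A.tolist())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.uniform(size=20)          # nonnegative weights of the concave terms
    M = rng.uniform(size=(20, 100))   # nonnegative modular weights (20 terms, 100 elements)
    x_frac, A = maximize_dsf_relaxation(W, M, k=10)
    print(sorted(A))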

Cite

Text

Bai et al. "Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions." Neural Information Processing Systems, 2018.

Markdown

[Bai et al. "Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/bai2018neurips-submodular/)

BibTeX

@inproceedings{bai2018neurips-submodular,
  title     = {{Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions}},
  author    = {Bai, Wenruo and Noble, William Stafford and Bilmes, Jeff A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {7978--7988},
  url       = {https://mlanthology.org/neurips/2018/bai2018neurips-submodular/}
}