The Power of Randomization: Distributed Submodular Maximization on Massive Datasets

Abstract

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. Unfortunately, the resulting submodular optimization problems are often too large to be solved on a single machine. We consider a distributed, greedy algorithm that combines previous approaches with randomization. The result is an algorithm that is embarrassingly parallel and achieves provable, constant-factor, worst-case approximation guarantees. In our experiments, we demonstrate its efficiency on large problems with different kinds of constraints, with objective values always close to what is achievable in the centralized setting.
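To make the abstract's algorithm concrete, here is a minimal Python sketch of the two-round, randomized distributed greedy pattern it describes, specialized to a cardinality constraint k. The function names, the sequential simulation of the parallel round, and the toy coverage objective are illustrative assumptions, not the authors' implementation.

import random

def greedy(f, ground_set, k):
    # Standard greedy for monotone submodular maximization under a
    # cardinality constraint: repeatedly add the element with the
    # largest marginal gain.
    solution, candidates = [], set(ground_set)
    for _ in range(k):
        best = max(candidates,
                   key=lambda e: f(solution + [e]) - f(solution),
                   default=None)
        if best is None:
            break
        solution.append(best)
        candidates.remove(best)
    return solution

def distributed_greedy(f, dataset, k, num_machines, seed=0):
    # Round 1: randomly partition the data and run greedy on each part.
    # The parts are independent, so this round is embarrassingly
    # parallel in a real deployment; it is run sequentially here.
    rng = random.Random(seed)
    parts = [[] for _ in range(num_machines)]
    for e in dataset:
        parts[rng.randrange(num_machines)].append(e)
    local_solutions = [greedy(f, part, k) for part in parts]

    # Round 2: pool the partial solutions on one machine, run greedy on
    # the pooled set, and return the best solution seen overall.
    pooled = [e for sol in local_solutions for e in sol]
    final = greedy(f, pooled, k)
    return max(local_solutions + [final], key=f)

# Hypothetical usage: maximize coverage of random subsets (a monotone
# submodular objective) with k = 3 across 4 simulated machines.
sets = {i: frozenset(random.sample(range(100), 10)) for i in range(200)}

def coverage(sol):
    return len(set().union(*(sets[i] for i in sol)))

best = distributed_greedy(coverage, list(sets), k=3, num_machines=4)

The random partition in the first round is the "randomization" of the title: it is what distinguishes this scheme from earlier deterministic partitioning approaches and underlies the constant-factor guarantee the abstract refers to.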

Cite

Text

Barbosa et al. "The Power of Randomization: Distributed Submodular Maximization on Massive Datasets." International Conference on Machine Learning, 2015.

Markdown

[Barbosa et al. "The Power of Randomization: Distributed Submodular Maximization on Massive Datasets." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/barbosa2015icml-power/)

BibTeX

@inproceedings{barbosa2015icml-power,
  title     = {{The Power of Randomization: Distributed Submodular Maximization on Massive Datasets}},
  author    = {Barbosa, Rafael and Ene, Alina and Nguyen, Huy and Ward, Justin},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {1236--1244},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/barbosa2015icml-power/}
}