Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference
Abstract
Mean field inference for discrete graphical models is generally a highly nonconvex problem, which also holds for the class of probabilistic log-submodular models. Existing optimization methods, e.g., coordinate ascent algorithms, typically only find local optima. In this work we propose provable mean field methods for probabilistic log-submodular models and their posterior agreement (PA), with strong approximation guarantees. The main algorithmic technique is a new Double Greedy scheme, termed DR-DoubleGreedy, for continuous DR-submodular maximization with box constraints. It is a one-pass algorithm with linear time complexity that achieves the optimal 1/2 approximation ratio, which may be of independent interest. We validate the superior performance of our algorithms against baselines on both synthetic and real-world datasets.
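To make the coordinate-wise double-greedy idea concrete, below is a minimal illustrative sketch of maximizing a DR-submodular function over a box [lo, hi]: a lower solution x and an upper solution y are swept over the coordinates once, each coordinate is fixed after solving two 1-D subproblems, and x and y coincide at the end. The function `f`, the grid-search 1-D solver, and the simple "keep the larger gain" rule are assumptions for illustration only; the paper's DR-DoubleGreedy combines the two 1-D solutions more carefully to obtain the optimal 1/2 guarantee.

```python
import numpy as np

def double_greedy_box(f, lo, hi, grid=50):
    """Illustrative one-pass double greedy for a DR-submodular f on [lo, hi].

    x starts at the lower corner, y at the upper corner; for each coordinate
    two 1-D subproblems are solved (raise x_i vs. lower y_i) and the
    coordinate is fixed in both solutions.
    """
    x, y = np.array(lo, dtype=float), np.array(hi, dtype=float)
    n = x.size
    for i in range(n):
        candidates = np.linspace(lo[i], hi[i], grid)
        # 1-D subproblem on the lower solution: choose coordinate i of x.
        vals_a = [f(np.concatenate([x[:i], [v], x[i + 1:]])) for v in candidates]
        ua, delta_a = candidates[int(np.argmax(vals_a))], max(vals_a) - f(x)
        # 1-D subproblem on the upper solution: choose coordinate i of y.
        vals_b = [f(np.concatenate([y[:i], [v], y[i + 1:]])) for v in candidates]
        ub, delta_b = candidates[int(np.argmax(vals_b))], max(vals_b) - f(y)
        # Fix coordinate i in both solutions. Here we simply keep the move
        # with the larger gain; the paper's rule mixes the two moves to
        # achieve the 1/2 approximation guarantee.
        x[i] = y[i] = ua if delta_a >= delta_b else ub
    return x  # x == y after the single pass
```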
Cite
Text
Bian et al. "Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference." International Conference on Machine Learning, 2019.
Markdown
[Bian et al. "Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/bian2019icml-optimal/)
BibTeX
@inproceedings{bian2019icml-optimal,
title = {{Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference}},
author = {Bian, Yatao and Buhmann, Joachim and Krause, Andreas},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {644--653},
volume = {97},
url = {https://mlanthology.org/icml/2019/bian2019icml-optimal/}
}