Stochastic $L^\natural$-Convex Function Minimization
Abstract
We study an extension of the stochastic submodular minimization problem, namely, the stochastic $L^\natural$-convex minimization problem. We develop the first polynomial-time algorithms that return a near-optimal solution with high probability. We design a novel truncation operation to further reduce the computational complexity of the proposed algorithms. When applied to a stochastic submodular function, the computational complexity of the proposed algorithms is lower than that of existing stochastic submodular minimization algorithms. In addition, we provide a strongly polynomial approximation algorithm whose execution requires no prior knowledge of the objective function beyond its $L^\natural$-convexity. We also derive a lower bound on the computational complexity required to achieve a high-probability error bound. Numerical experiments are conducted to corroborate our theoretical findings.
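As background not spelled out in the abstract itself, a standard way to state $L^\natural$-convexity (following Murota's discrete convex analysis) is via discrete midpoint convexity: a function $f \colon \mathbb{Z}^n \to \mathbb{R} \cup \{+\infty\}$ is $L^\natural$-convex if
$$f(x) + f(y) \;\ge\; f\!\left(\left\lceil \tfrac{x+y}{2} \right\rceil\right) + f\!\left(\left\lfloor \tfrac{x+y}{2} \right\rfloor\right) \qquad \text{for all } x, y \in \mathbb{Z}^n,$$
where $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$ are applied componentwise. Restricted to $\{0,1\}^n$, this condition recovers submodularity, which is why stochastic submodular minimization arises as the special case mentioned above.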
Cite
Text
Zhang et al. "Stochastic $L^\natural$-Convex Function Minimization." Neural Information Processing Systems, 2021.

Markdown

[Zhang et al. "Stochastic $L^\natural$-Convex Function Minimization." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zhang2021neurips-stochastic/)

BibTeX
@inproceedings{zhang2021neurips-stochastic,
title = {{Stochastic $L^\natural$-Convex Function Minimization}},
author = {Zhang, Haixiang and Zheng, Zeyu and Lavaei, Javad},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/zhang2021neurips-stochastic/}
}