Disciplined Convex Stochastic Programming: A New Framework for Stochastic Optimization
Abstract
We introduce disciplined convex stochastic programming (DCSP), a modeling framework that can significantly lower the barrier for modelers to specify and solve convex stochastic optimization problems, by allowing modelers to naturally express a wide variety of convex stochastic programs in a manner that reflects their underlying mathematical representation. DCSP allows modelers to express expectations of arbitrary expressions, partial optimizations, and chance constraints across a wide variety of convex optimization problem families (e.g., linear, quadratic, second-order cone, and semidefinite programs). We illustrate DCSP's expressivity through a number of sample implementations of problems drawn from the operations research, finance, and machine learning literatures.
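To make the idea of "expressing expectations of arbitrary expressions" concrete, the sketch below (not the DCSP library itself, and all names and parameter values are hypothetical) shows the sample average approximation (SAA) trick that frameworks in this space commonly use: the expectation E[f(x, ω)] in a stochastic program is replaced by an average over Monte Carlo samples, yielding a deterministic convex problem. Here this is applied to a classic newsvendor problem.

```python
import numpy as np

# Hypothetical illustration (not the DCSP API): sample average approximation
# for the newsvendor problem, a simple convex stochastic program.
rng = np.random.default_rng(0)
demand = rng.exponential(scale=100.0, size=10_000)  # random demand samples

c, p = 1.0, 4.0  # assumed unit order cost and unit selling price

def saa_objective(x):
    # Expected cost: order cost minus expected revenue, with the
    # expectation E[min(x, demand)] replaced by a sample average.
    return c * x - p * np.mean(np.minimum(x, demand))

# The SAA objective is convex (piecewise linear) in x, so a fine
# one-dimensional grid search suffices for this sketch.
grid = np.linspace(0.0, 500.0, 5001)
x_star = grid[np.argmin([saa_objective(x) for x in grid])]

# Classical theory: the optimal order quantity is the (p - c)/p
# quantile of the demand distribution; SAA recovers it approximately.
x_theory = np.quantile(demand, (p - c) / p)
```

A framework like DCSP would let the modeler write the expectation directly in the problem specification and verify convexity automatically, rather than hand-coding the sampling and minimization as above.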
Cite

Text

Ali et al. "Disciplined Convex Stochastic Programming: A New Framework for Stochastic Optimization." Conference on Uncertainty in Artificial Intelligence, 2015.

Markdown

[Ali et al. "Disciplined Convex Stochastic Programming: A New Framework for Stochastic Optimization." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/ali2015uai-disciplined/)

BibTeX
@inproceedings{ali2015uai-disciplined,
  title = {{Disciplined Convex Stochastic Programming: A New Framework for Stochastic Optimization}},
  author = {Ali, Alnur and Kolter, J. Zico and Diamond, Steven and Boyd, Stephen P.},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year = {2015},
  pages = {62--71},
  url = {https://mlanthology.org/uai/2015/ali2015uai-disciplined/}
}