A Stochastic Algorithm for Sinkhorn Distance-Regularized Distributionally Robust Optimization
Abstract
Distributionally Robust Optimization (DRO) is a powerful modeling technique for tackling the challenges caused by data distribution shifts. This paper focuses on Sinkhorn distance-regularized DRO. We generalize the Sinkhorn distance, allowing a broader choice of functions to model the ambiguity set, and derive the Lagrangian dual, which takes the form of a nested stochastic program. We also design an algorithm based on stochastic gradient descent with an easy-to-implement constant learning rate. Unlike previous work, which analyzes algorithms only for convex and bounded loss functions, our algorithm provides convergence guarantees for non-convex and possibly unbounded loss functions under a proper choice of sampling batch size. The resulting sample complexity for finding an $\epsilon$-stationary point is independent of the data size and parameter dimension, making our modeling and algorithms suitable for large-scale applications.
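To make the nested structure concrete, here is a minimal sketch of the kind of SGD loop the abstract describes, assuming a dual of the form $\min_\theta \mathbb{E}_{\xi \sim \hat{P}}\big[\lambda \log \mathbb{E}_{\zeta \sim K(\cdot\,|\,\xi)} \exp(\ell(\theta; \zeta)/\lambda)\big]$, the common shape of Sinkhorn-DRO duals. The quadratic loss, the Gaussian perturbation kernel, and all parameter values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical setup: least-squares loss ell(theta; (x, y)) = 0.5*(x @ theta - y)^2,
# with a Gaussian kernel around each data point standing in for the entropic
# (Sinkhorn) smoothing of the ambiguity set. All constants are assumptions.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

lam = 1.0            # dual / regularization parameter (assumed fixed)
sigma = 0.1          # kernel width for perturbing data points (assumed)
eta = 0.05           # constant learning rate, as in the abstract's setting
B_out, B_in = 32, 16 # outer/inner batch sizes; the inner size controls bias
T = 500

def loss_and_grad(theta, Xb, yb):
    """Per-sample losses and gradients of the quadratic loss."""
    r = Xb @ theta - yb
    return 0.5 * r**2, Xb * r[:, None]

theta = np.zeros(d)
for t in range(T):
    idx = rng.integers(n, size=B_out)
    g = np.zeros(d)
    for i in idx:
        # Inner samples: perturb the data point under the assumed kernel.
        Xi = X[i] + sigma * rng.normal(size=(B_in, d))
        yi = np.full(B_in, y[i])
        ell, dell = loss_and_grad(theta, Xi, yi)
        # Softmax weights give the gradient of lam * log E[exp(ell / lam)];
        # subtracting the max is only for numerical stability.
        w = np.exp((ell - ell.max()) / lam)
        w /= w.sum()
        g += w @ dell
    theta -= eta * g / B_out
```

The inner batch size `B_in` controls the bias of the plug-in gradient estimator for the inner log-expectation, which is why the abstract's convergence guarantee hinges on a proper choice of sampling batch size.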
Cite
Text
Yang et al. "A Stochastic Algorithm for Sinkhorn Distance-Regularized Distributionally Robust Optimization." NeurIPS 2024 Workshops: OPT, 2024.
Markdown
[Yang et al. "A Stochastic Algorithm for Sinkhorn Distance-Regularized Distributionally Robust Optimization." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/yang2024neuripsw-stochastic/)
BibTeX
@inproceedings{yang2024neuripsw-stochastic,
title = {{A Stochastic Algorithm for Sinkhorn Distance-Regularized Distributionally Robust Optimization}},
author = {Yang, Yufeng and Zhou, Yi and Lu, Zhaosong},
booktitle = {NeurIPS 2024 Workshops: OPT},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/yang2024neuripsw-stochastic/}
}