Solving Quadratic Programs via Deep Unrolled Douglas-Rachford Splitting
Abstract
Convex quadratic programs (QPs) are fundamental to numerous applications, including finance, engineering, and energy systems. Among the various methods for solving them, the Douglas-Rachford (DR) splitting algorithm is notable for its robust convergence properties. Concurrently, the emerging field of Learning-to-Optimize offers promising avenues for enhancing algorithmic performance, with algorithm unrolling receiving considerable attention for its computational efficiency and interpretability. In this work, we propose an approach that unrolls a modified DR splitting algorithm to efficiently learn solutions to convex QPs. Specifically, we introduce a tailored DR splitting algorithm that replaces the computationally expensive linear-system-solving step with a simplified gradient-based update while retaining convergence guarantees. We then unroll the resulting DR splitting method and present a carefully designed neural network architecture to predict QP solutions. Our method achieves up to 50% reductions in iteration counts and 40% reductions in solve time on benchmarks over both synthetic and real-world QP datasets, demonstrating its scalability and consistently improved computational efficiency across problem sizes.
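The abstract only sketches the modified iteration at a high level. Below is a minimal, illustrative NumPy sketch of what such a DR splitting loop for a box-constrained QP might look like, where the usual proximal step for the quadratic term (a linear-system solve with I + γQ) is replaced by a single gradient step. The function name, step size, iteration count, and the box-constrained problem form min ½xᵀQx + cᵀx s.t. l ≤ x ≤ u are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def dr_splitting_gradient(Q, c, lo, hi, gamma=0.1, max_iters=200):
    """Illustrative DR-style splitting for min 0.5 x'Qx + c'x  s.t. lo <= x <= hi.

    The standard DR prox step for the quadratic term would solve
    (I + gamma*Q) x = z - gamma*c; here it is replaced by a single
    gradient step, mimicking the simplification described in the abstract.
    This is a sketch, not the paper's exact update rule.
    """
    n = c.shape[0]
    z = np.zeros(n)                      # DR auxiliary iterate
    for _ in range(max_iters):
        # Gradient-based surrogate for the prox of the quadratic objective.
        x = z - gamma * (Q @ z + c)
        # Exact prox of the box indicator: projection (clipping).
        y = np.clip(2.0 * x - z, lo, hi)
        # DR correction step on the auxiliary variable.
        z = z + y - x
    return np.clip(x, lo, hi)

# Small usage example on a random convex QP.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M @ M.T + np.eye(5)                  # positive definite Hessian
c = rng.standard_normal(5)
lo, hi = -np.ones(5), np.ones(5)
x_hat = dr_splitting_gradient(Q, c, lo, hi)
print("approximate solution:", x_hat)
```

In an unrolled version along the lines of the paper, the loop would be truncated to a fixed number of layers and per-layer parameters such as the step size would presumably be learned from data rather than fixed.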
Cite
Text
Xiong et al. "Solving Quadratic Programs via Deep Unrolled Douglas-Rachford Splitting." Transactions on Machine Learning Research, 2025.
Markdown
[Xiong et al. "Solving Quadratic Programs via Deep Unrolled Douglas-Rachford Splitting." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/xiong2025tmlr-solving/)
BibTeX
@article{xiong2025tmlr-solving,
  title   = {{Solving Quadratic Programs via Deep Unrolled Douglas-Rachford Splitting}},
  author  = {Xiong, Jinxin and Gao, Xi and Yang, Linxin and Xue, Jiang and Luo, Xiaodong and Wang, Akang},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/xiong2025tmlr-solving/}
}