Differentiating Through Integer Linear Programs with Quadratic Regularization and Davis-Yin Splitting
Abstract
In many applications, a combinatorial problem must be repeatedly solved with similar, but distinct, parameters. Yet the parameters $w$ are not directly observed; only contextual data $d$ that correlates with $w$ is available. It is tempting to use a neural network to predict $w$ given $d$. However, training such a model requires reconciling the discrete nature of combinatorial optimization with the gradient-based frameworks used to train neural networks. We study the case where the problem in question is an Integer Linear Program (ILP). We propose applying a three-operator splitting technique, also known as Davis-Yin splitting (DYS), to the quadratically regularized continuous relaxation of the ILP. We prove that the resulting scheme is compatible with the recently introduced Jacobian-free backpropagation (JFB). Our experiments on two representative ILPs, the shortest path problem and the knapsack problem, demonstrate that this combination of DYS on the forward pass and JFB on the backward pass yields a scheme that scales more effectively to high-dimensional problems than existing approaches.
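As a minimal sketch of the scheme described above (an illustration only: the constraint data $A, b$, the regularization weight $\gamma$, the step size $\alpha$, and the ordering of the splitting are assumptions rather than the paper's exact formulation), consider an ILP with cost vector $w$ and feasible set $\{x \in \mathbb{Z}^n : Ax = b,\ x \ge 0\}$. Its quadratically regularized continuous relaxation is

$$\min_{x \in \mathbb{R}^n} \; w^\top x + \tfrac{\gamma}{2}\|x\|_2^2 \quad \text{subject to} \quad Ax = b,\ x \ge 0,$$

which has the three-term composite form $f(x) + g(x) + h(x)$ with $f = \delta_{\{Ax = b\}}$, $g = \delta_{\{x \ge 0\}}$, and smooth $h(x) = w^\top x + \tfrac{\gamma}{2}\|x\|_2^2$. One standard form of the Davis-Yin iteration with step size $\alpha > 0$ is then

$$x^k = \max(z^k, 0), \qquad y^k = P_{\{Ax = b\}}\bigl(2x^k - z^k - \alpha(w + \gamma x^k)\bigr), \qquad z^{k+1} = z^k + y^k - x^k,$$

where $\max(\cdot, 0)$ projects onto the nonnegative orthant and $P_{\{Ax = b\}}$ is the affine projection onto the equality constraints. On the backward pass, Jacobian-free backpropagation replaces the Jacobian of this fixed-point map with the identity, so gradients with respect to $w$ are obtained by differentiating through only the final update instead of solving the implicit linear system.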
Cite

McKenzie et al. "Differentiating Through Integer Linear Programs with Quadratic Regularization and Davis-Yin Splitting." Transactions on Machine Learning Research, 2024. https://mlanthology.org/tmlr/2024/mckenzie2024tmlr-differentiating/

BibTeX
@article{mckenzie2024tmlr-differentiating,
title = {{Differentiating Through Integer Linear Programs with Quadratic Regularization and Davis-Yin Splitting}},
author = {McKenzie, Daniel and Heaton, Howard and Fung, Samy Wu},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/mckenzie2024tmlr-differentiating/}
}