You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and Optimize for Convex Optimization

Abstract

In the predict-and-optimize setting, machine learning models are trained to predict the parameters of optimization problems, with downstream task performance as the training objective. A key challenge is computing the Jacobian of the problem's solution with respect to those parameters. For linear problems this Jacobian is zero or undefined, so approximations are typically used; for non-linear convex problems, the exact Jacobian is often used instead. This paper demonstrates that the zero-gradient issue also arises in the non-linear case and introduces a smoothing technique that, combined with quadratic approximation and projection distance regularization, solves the zero-gradient problem. Experiments on a portfolio optimization problem confirm the method's efficiency.
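To make the zero-gradient issue concrete: for a linear objective over a fixed feasible set, the maximizer is piecewise constant in the predicted cost vector, so its Jacobian is zero almost everywhere and no learning signal reaches the predictor. The sketch below is a generic illustration of this phenomenon, not the authors' code; the softmax smoothing it contrasts with is one standard stand-in for a differentiable surrogate, whereas the paper's own method combines smoothing with quadratic approximation and projection distance regularization.

import numpy as np

# Feasible region: vertices of the unit square.
vertices = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])

def solve_exact(c):
    # Exact solution of max_x c @ x over the polytope: an argmax over
    # vertices, which is piecewise constant in c.
    return vertices[np.argmax(vertices @ c)]

def solve_smoothed(c, tau=0.1):
    # Softmax-smoothed surrogate (illustrative, not the paper's exact
    # scheme): a differentiable convex combination of the vertices.
    w = np.exp(vertices @ c / tau)
    return (w / w.sum()) @ vertices

def fd_jacobian(f, c, eps=1e-5):
    # Central finite-difference Jacobian of f at c.
    return np.stack([(f(c + eps * e) - f(c - eps * e)) / (2 * eps)
                     for e in np.eye(len(c))], axis=1)

c = np.array([1.0, 0.5])
print(fd_jacobian(solve_exact, c))     # all zeros: no learning signal
print(fd_jacobian(solve_smoothed, c))  # nonzero: gradients can flow

With the exact solver, the Jacobian is identically zero almost everywhere in c, which is why gradients cannot "pass" through the optimization layer; any smoothing that makes the solution map continuous and non-constant restores a usable signal.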

Cite

Text

Veviurko et al. "You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and Optimize for Convex Optimization." ICML 2024 Workshops: Differentiable Almost Everything, 2024.

Markdown

[Veviurko et al. "You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and Optimize for Convex Optimization." ICML 2024 Workshops: Differentiable Almost Everything, 2024.](https://mlanthology.org/icmlw/2024/veviurko2024icmlw-you/)

BibTeX

@inproceedings{veviurko2024icmlw-you,
  title     = {{You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and Optimize for Convex Optimization}},
  author    = {Veviurko, Grigorii and Boehmer, Wendelin and de Weerdt, Mathijs},
  booktitle = {ICML 2024 Workshops: Differentiable Almost Everything},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/veviurko2024icmlw-you/}
}