Solving Hidden Monotone Variational Inequalities with Surrogate Losses
Abstract
Deep learning has proven effective in a wide variety of loss minimization problems. However, many applications of interest, such as minimizing projected Bellman error and min-max optimization, cannot be modelled as minimizing a scalar loss function but instead correspond to solving a variational inequality (VI) problem. This difference in setting has caused many practical challenges, as naive gradient-based approaches from supervised learning tend to diverge and cycle in the VI case. In this work, we propose a surrogate-based approach that is principled in the VI setting and compatible with deep learning. We show that our approach has three main benefits: (1) it guarantees linear convergence under sufficient descent in the surrogate when hidden monotone structure is present (e.g., convex-concave with respect to model predictions), (2) it provides a unifying perspective on existing methods, and (3) it is amenable to existing deep learning optimizers like ADAM.
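For context, the following is a minimal sketch of the standard variational inequality formulation the abstract refers to, together with the monotonicity condition behind the "hidden monotone" structure; the notation (operator $F$, feasible set $\mathcal{Z}$, model predictions $g_\theta$) is assumed here for illustration and is not quoted from the paper.

```latex
% Variational inequality VI(F, Z): find z* in the feasible set Z such that
% the operator F points "outward" at z* relative to every other feasible z.
\[
  \text{find } z^\star \in \mathcal{Z} \quad \text{such that} \quad
  \langle F(z^\star),\, z - z^\star \rangle \;\ge\; 0
  \quad \forall\, z \in \mathcal{Z}.
\]
% "Hidden" monotone structure: F is monotone in the model predictions
% z = g_theta(x), even though the induced problem in the network
% parameters theta is generally non-monotone.
\[
  \langle F(z) - F(z'),\, z - z' \rangle \;\ge\; 0
  \quad \forall\, z, z' \in \mathcal{Z},
  \qquad z = g_\theta(x).
\]
```

For example, a convex-concave min-max problem in the predictions of two networks satisfies this monotonicity condition even when it is non-convex-non-concave in the parameters.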
Cite
Text

D'Orazio et al. "Solving Hidden Monotone Variational Inequalities with Surrogate Losses." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[D'Orazio et al. "Solving Hidden Monotone Variational Inequalities with Surrogate Losses." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/dorazio2024neuripsw-solving/)

BibTeX
@inproceedings{dorazio2024neuripsw-solving,
title = {{Solving Hidden Monotone Variational Inequalities with Surrogate Losses}},
author = {D'Orazio, Ryan and Vucetic, Danilo and Liu, Zichu and Kim, Junhyung Lyle and Mitliagkas, Ioannis and Gidel, Gauthier},
booktitle = {NeurIPS 2024 Workshops: OPT},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/dorazio2024neuripsw-solving/}
}