Faster Optimization on Sparse Graphs via Neural Reparametrization
Abstract
Many scientific problems involve energy minimization on sparse graphs, such as heat diffusion in solids and the synchronization of oscillators. These problems often converge slowly, particularly when the graph is random, as in glassy systems. We show that graph neural networks (GNNs) can be used to significantly speed up such optimization problems. Our idea is to represent the state of each node as the output of a graph neural network. We show the benefit of this GNN reparametrization in experiments solving heat diffusion and the synchronization of nonlinear oscillators. When optimizing using gradient descent, we show that this GNN reparametrization has the effect of a quasi-Newton method.
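The core mechanism described above, optimizing the weights of a GNN whose output is the node state, rather than the node state directly, can be illustrated with a toy sketch. The example below is not the authors' implementation; it uses an assumed setup (a path graph, a quadratic Dirichlet "heat" energy with pinned endpoints, and a linear one-layer GNN with random node features) purely to show the reparametrized gradient-descent loop.

```python
import numpy as np

# Toy sketch of GNN reparametrization for graph energy minimization.
# Setup (all choices here are illustrative, not from the paper):
#   - path graph on n nodes, Laplacian L = D - A
#   - energy E(x) = 1/2 x^T L x + (mu/2)[(x_0 - 1)^2 + (x_{n-1} + 1)^2]
#   - linear one-layer "GNN": x = A_hat @ F @ w, with normalized
#     adjacency A_hat and random node features F
rng = np.random.default_rng(0)
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

mu = 10.0                      # penalty pinning the two endpoint "temperatures"
H = L.copy()                   # Hessian of the quadratic energy
H[0, 0] += mu
H[-1, -1] += mu

def energy(x):
    return 0.5 * x @ L @ x + 0.5 * mu * ((x[0] - 1.0) ** 2 + (x[-1] + 1.0) ** 2)

def grad(x):
    g = L @ x
    g[0] += mu * (x[0] - 1.0)
    g[-1] += mu * (x[-1] + 1.0)
    return g

# Normalized adjacency with self-loops, random features, and x = M @ w
A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))
k = 8
F = rng.standard_normal((n, k))
M = A_hat @ F

# Safe step sizes (1 / largest Hessian eigenvalue) for each parametrization
lr_x = 1.0 / np.linalg.eigvalsh(H).max()
lr_w = 1.0 / np.linalg.eigvalsh(M.T @ H @ M).max()

x = np.zeros(n)                # direct descent on node states
w = np.zeros(k)                # descent on GNN weights instead
for _ in range(200):
    x -= lr_x * grad(x)
    w -= lr_w * (M.T @ grad(M @ w))   # chain rule through x = M @ w

print(f"initial energy: {energy(np.zeros(n)):.3f}")
print(f"direct GD:      {energy(x):.3f}")
print(f"GNN reparam:    {energy(M @ w):.3f}")
```

The only change between the two loops is the chain-rule factor `M.T @ ... @ M` in the reparametrized update; effectively, gradient descent on the weights preconditions the descent on the states, which is the sense in which the paper relates the reparametrization to a quasi-Newton method.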
Cite
Text

Both et al. "Faster Optimization on Sparse Graphs via Neural Reparametrization." Proceedings of the Third Learning on Graphs Conference, 2025.

Markdown

[Both et al. "Faster Optimization on Sparse Graphs via Neural Reparametrization." Proceedings of the Third Learning on Graphs Conference, 2025.](https://mlanthology.org/log/2025/both2025log-faster/)

BibTeX
@inproceedings{both2025log-faster,
  title     = {{Faster Optimization on Sparse Graphs via Neural Reparametrization}},
  author    = {Both, Csaba and Dehmamy, Nima and Long, Jianzhi and Yu, Rose},
  booktitle = {Proceedings of the Third Learning on Graphs Conference},
  year      = {2025},
  pages     = {26:1-26:21},
  volume    = {269},
  url       = {https://mlanthology.org/log/2025/both2025log-faster/}
}