A Gradient Flow Modification to Improve Learning from Differentiable Quantum Simulators
Abstract
Propagating gradients through differentiable simulators can improve the training of deep learning architectures. We study an example from quantum physics that, at first glance, does not seem to benefit from such gradients. Our analysis shows that the problem is rooted in a mismatch between the loss functions commonly used in quantum physics and their gradients: the gradient can vanish even for non-equal states. We propose adding a scaling term that fixes this problematic gradient flow and regains the benefits of gradient-based optimization. We demonstrate the potential of our method in two experiments on the Schroedinger equation, a prediction task and a control task.
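The vanishing-gradient effect mentioned in the abstract can be illustrated with a toy example (not the paper's actual construction): for the standard infidelity loss 1 - |<phi|psi>|^2, the gradient is proportional to the overlap <phi|psi>, so it vanishes when the predicted state is orthogonal to the target, even though the loss is then maximal. A minimal sketch with a real two-level state, where the parameterization psi(theta) = (cos theta, sin theta) is a hypothetical choice for illustration:

```python
import numpy as np

def infidelity(theta, phi=np.array([1.0, 0.0])):
    """Loss 1 - |<phi|psi(theta)>|^2 for a toy real two-level state."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return 1.0 - np.dot(phi, psi) ** 2

def finite_diff_grad(f, x, eps=1e-6):
    """Central finite-difference estimate of df/dx."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Near the target (theta = 0.3) the gradient is informative...
g_near = finite_diff_grad(infidelity, 0.3)

# ...but for an orthogonal state (theta = pi/2) the loss is maximal (= 1),
# yet the gradient vanishes, so gradient descent stalls at a non-equal state.
g_orth = finite_diff_grad(infidelity, np.pi / 2)
```

Here the loss is sin^2(theta), so its derivative sin(2*theta) is zero at theta = pi/2: exactly the pathological case the paper's scaling term is designed to repair.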
Cite
Text
Schnell and Thuerey. "A Gradient Flow Modification to Improve Learning from Differentiable Quantum Simulators." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.
Markdown
[Schnell and Thuerey. "A Gradient Flow Modification to Improve Learning from Differentiable Quantum Simulators." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.](https://mlanthology.org/icmlw/2023/schnell2023icmlw-gradient/)
BibTeX
@inproceedings{schnell2023icmlw-gradient,
title = {{A Gradient Flow Modification to Improve Learning from Differentiable Quantum Simulators}},
author = {Schnell, Patrick and Thuerey, Nils},
booktitle = {ICML 2023 Workshops: Differentiable_Almost_Everything},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/schnell2023icmlw-gradient/}
}