KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss
Abstract
Many physical phenomena are described by Hamiltonian mechanics using an energy function called the Hamiltonian. Recently, the Hamiltonian neural network, which approximates the Hamiltonian with a neural network, and its extensions have attracted much attention. Although this approach is powerful, theoretical studies of it are limited. In this study, by combining statistical learning theory with KAM (Kolmogorov-Arnold-Moser) theory, we provide a theoretical analysis of the behavior of Hamiltonian neural networks when the training loss is not exactly zero. A Hamiltonian neural network with a non-zero error can be regarded as a perturbation of the true dynamics, and the perturbation theory of Hamiltonian systems is known as KAM theory. To apply KAM theory, we derive a generalization error bound for Hamiltonian neural networks by estimating the covering number of the gradient of the multi-layer perceptron, which is the key component of the model. This error bound yields the sup-norm bound on the Hamiltonian that is required to apply KAM theory.
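To make the perturbation viewpoint concrete, the display below sketches the setup in our own notation (the symbols H*, Ĥ, and ε are ours, not taken verbatim from the paper): a non-zero training loss turns the learned Hamiltonian flow into a small perturbation of the true flow, which is exactly the regime KAM theory addresses.

```latex
% Hamilton's equations generated by a Hamiltonian H(q, p):
\[
  \dot{q} = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial q}.
\]
% With H^* the true Hamiltonian and \hat{H} the learned one, write
\[
  \hat{H} = H^* + \bigl(\hat{H} - H^*\bigr), \qquad
  \varepsilon := \|\hat{H} - H^*\|_{\infty},
\]
% so the learned dynamics is an \varepsilon-perturbation of the true
% Hamiltonian system. For sufficiently small \varepsilon and a
% nondegenerate H^*, KAM theory asserts that most invariant tori of
% the true dynamics persist under the learned flow.
```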
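The abstract's central object, a neural network whose gradient generates the dynamics, can be illustrated with a minimal PyTorch sketch. This is an illustrative reconstruction under standard HNN assumptions (an MLP Hamiltonian trained to match observed time derivatives), not the authors' code; all names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Minimal Hamiltonian neural network: an MLP approximates the scalar
    Hamiltonian H(q, p); the vector field is obtained from its gradient."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.H = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x = (q, p); differentiate the learned Hamiltonian to get the flow.
        x = x.requires_grad_(True)
        grad = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        dH_dq, dH_dp = grad[..., : self.dim], grad[..., self.dim :]
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
        return torch.cat([dH_dp, -dH_dq], dim=-1)

# Training matches the predicted vector field to observed time derivatives;
# the residual of this fit is the "non-zero training loss" the paper analyzes.
model = HNN(dim=1)
x, dxdt = torch.randn(128, 2), torch.randn(128, 2)  # toy placeholder data
loss = torch.mean((model(x) - dxdt) ** 2)
loss.backward()
```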
Cite
Text
Chen et al. "KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I6.20582

Markdown

[Chen et al. "KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/chen2022aaai-kam/) doi:10.1609/AAAI.V36I6.20582

BibTeX
@inproceedings{chen2022aaai-kam,
title = {{KAM Theory Meets Statistical Learning Theory: Hamiltonian Neural Networks with Non-Zero Training Loss}},
author = {Chen, Yuhan and Matsubara, Takashi and Yaguchi, Takaharu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {6322--6332},
doi = {10.1609/AAAI.V36I6.20582},
url = {https://mlanthology.org/aaai/2022/chen2022aaai-kam/}
}