Spatial Attention Kinetic Networks with E(n)-Equivariance
Abstract
Neural networks that are equivariant to rotations, translations, reflections, and permutations on $n$-dimensional geometric space have shown promise in physical modeling for tasks ranging from accurately but inexpensively modeling complex potential energy surfaces to guiding the sampling of complex dynamical systems or forecasting their time evolution. Current state-of-the-art methods employ spherical harmonics, which are computationally expensive to evaluate, to encode higher-order interactions among particles. In this paper, we propose a simple alternative functional form that uses neurally parametrized linear combinations of edge vectors to achieve equivariance while still universally approximating node environments. Incorporating this insight, we design \emph{spatial attention kinetic networks} with E(n)-equivariance, or SAKE, which are competitive in many-body system modeling tasks while being significantly faster.
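To make the core idea concrete, the following is a minimal sketch (not the authors' reference implementation) of an equivariant operation built from neurally parametrized linear combinations of edge vectors: an MLP maps E(n)-invariant inputs (node features and pairwise distances) to mixing weights, and the weighted sums of edge vectors rotate, reflect, and translate consistently with the input coordinates. The module name EdgeVectorMixer, the hidden sizes, and the number of combinations are illustrative assumptions.

import torch
import torch.nn as nn

class EdgeVectorMixer(nn.Module):
    """Combine edge vectors with learned scalar weights (E(n)-equivariant)."""
    def __init__(self, in_features: int, num_combinations: int = 4):
        super().__init__()
        # Weights depend only on invariant quantities: node features and distances.
        self.weight_mlp = nn.Sequential(
            nn.Linear(2 * in_features + 1, 32),
            nn.SiLU(),
            nn.Linear(32, num_combinations),
        )

    def forward(self, h: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # h: (n, in_features) invariant node features; x: (n, 3) coordinates.
        n = x.shape[0]
        r = x.unsqueeze(1) - x.unsqueeze(0)           # (n, n, 3) edge vectors x_i - x_j
        d = r.norm(dim=-1, keepdim=True)              # (n, n, 1) pairwise distances (invariant)
        h_pair = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1), d],
            dim=-1,
        )
        w = self.weight_mlp(h_pair)                   # (n, n, k) invariant mixing weights
        # Each output channel is a learned linear combination of edge vectors,
        # v_i^(k) = sum_j w_ij^(k) (x_i - x_j), which transforms equivariantly.
        return torch.einsum("ijk,ijd->ikd", w, r)     # (n, k, 3)

# Usage check: rotating/reflecting the inputs transforms the outputs identically.
torch.manual_seed(0)
h, x = torch.randn(5, 8), torch.randn(5, 3)
mixer = EdgeVectorMixer(8)
Q, _ = torch.linalg.qr(torch.randn(3, 3))             # random orthogonal transform
assert torch.allclose(mixer(h, x @ Q.T), mixer(h, x) @ Q.T, atol=1e-5)

Because the mixing weights are functions of invariants only, this construction avoids spherical harmonics entirely, which is the source of the speedup the abstract refers to.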
Cite

Text
Wang and Chodera. "Spatial Attention Kinetic Networks with E(n)-Equivariance." International Conference on Learning Representations, 2023.

Markdown
[Wang and Chodera. "Spatial Attention Kinetic Networks with E(n)-Equivariance." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/wang2023iclr-spatial/)

BibTeX
@inproceedings{wang2023iclr-spatial,
  title = {{Spatial Attention Kinetic Networks with E(n)-Equivariance}},
  author = {Wang, Yuanqing and Chodera, John},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/wang2023iclr-spatial/}
}