Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant
Abstract
Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa-transformers and graph networks. We adapt a recent approach for geometric message passing to the non-compact Lorentz group, allowing propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models achieve competitive and state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $10\times$ fewer FLOPs.
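The core idea of local canonicalization, transforming inputs into an equivariantly determined reference frame before applying an arbitrary backbone, can be illustrated with a minimal sketch. Here the frame is the simplest equivariant choice, a pure boost into the event's rest frame; the paper's learned local frames are more general, and all function names below are illustrative, not the authors' implementation.

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_to_rest_frame(p_total):
    """Pure Lorentz boost that takes a timelike total momentum to rest.

    The resulting matrix L satisfies L @ p_total = (m, 0, 0, 0).
    """
    m = np.sqrt(p_total @ ETA @ p_total)   # invariant mass
    u = p_total / m                        # four-velocity (gamma, gamma*beta)
    gamma, gb = u[0], u[1:]
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gb
    L[1:, 1:] += np.outer(gb, gb) / (gamma + 1.0)
    return L

def canonicalize(momenta):
    """Map each particle four-momentum into the event's rest frame.

    Any backbone applied to the canonicalized momenta is exactly
    Lorentz-equivariant, because a boost of the input event is absorbed
    into the (equivariantly transforming) frame.
    """
    L = boost_to_rest_frame(momenta.sum(axis=0))
    return momenta @ L.T

# Toy event with two particles; after canonicalization the total
# spatial momentum vanishes and the energy equals the invariant mass.
event = np.array([[5.0,  1.0, 2.0, 0.5],
                  [4.0, -0.5, 1.0, 1.5]])
canon = canonicalize(event)
```

A non-equivariant network (an MLP, a transformer) applied to `canon` inherits exact invariance under boosts and rotations of `event`, since those act only on the discarded frame.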
Cite
Text
Spinner et al. "Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant." Advances in Neural Information Processing Systems, 2025.

Markdown
[Spinner et al. "Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/spinner2025neurips-lorentz/)

BibTeX
@inproceedings{spinner2025neurips-lorentz,
  title     = {{Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant}},
  author    = {Spinner, Jonas and Favaro, Luigi and Lippmann, Peter and Pitz, Sebastian and Gerhartz, Gerrit and Plehn, Tilman and Hamprecht, Fred A.},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/spinner2025neurips-lorentz/}
}