Training Dynamics of Learning 3D-Rotational Equivariance

Abstract

While data augmentation is widely used to train symmetry-agnostic models, it remains unclear how quickly and how effectively they learn to respect symmetries. We investigate this by deriving a principled measure of equivariance error that, for convex losses, calculates the percentage of total loss attributable to imperfections in learned symmetry. We focus our empirical investigation on 3D-rotational equivariance in high-dimensional molecular tasks (flow matching, force field prediction, denoising voxels) and find that models quickly reduce equivariance error to ≤2% of held-out loss within 1k-10k training steps, a result robust to model and dataset size. This happens because learning 3D-rotational equivariance is an easier task, with a smoother and better-conditioned loss landscape, than the main prediction task. For 3D rotations, the loss penalty for non-equivariant models is small throughout training, so they may achieve lower test loss per GPU-hour than equivariant models unless the equivariant "efficiency gap" is narrowed. We also investigate, experimentally and theoretically, the relationships between relative equivariance error, learning gradients, and model parameters.
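For intuition, 3D-rotational equivariance error can be probed empirically by comparing a model's output on rotated inputs, f(Rx), against the rotated output on the original inputs, R f(x), averaged over random rotations. The sketch below is a minimal illustration of that probe under stated assumptions, not the paper's derived loss-decomposition measure; the `model` interface, the `equivariance_error` helper, and the sample count are hypothetical.

```python
import numpy as np

def random_rotation_matrix(rng):
    """Draw an approximately uniform random 3D rotation via QR
    decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))   # fix column signs to make the factorization unique
    if np.linalg.det(q) < 0:   # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def equivariance_error(model, x, rng, n_samples=16):
    """Mean squared discrepancy between model(R x) and R model(x),
    averaged over random rotations R. Assumes `model` maps an (N, 3)
    array of points to an (N, 3) array (e.g., predicted forces)."""
    base_output = model(x)
    errs = []
    for _ in range(n_samples):
        rot = random_rotation_matrix(rng)
        out_on_rotated = model(x @ rot.T)   # f(R x): rotate inputs, then predict
        rotated_output = base_output @ rot.T  # R f(x): predict, then rotate outputs
        errs.append(np.mean((out_on_rotated - rotated_output) ** 2))
    return float(np.mean(errs))

# Example usage with a toy (exactly equivariant) linear model:
rng = np.random.default_rng(0)
points = rng.standard_normal((32, 3))
identity_model = lambda x: 2.0 * x  # scaling commutes with rotation
print(equivariance_error(identity_model, points, rng))  # ~0 up to float error
```

Note that this raw output discrepancy is only a proxy: the paper's measure instead expresses, for convex losses, the share of total loss attributable to imperfect symmetry, which requires the loss function rather than output differences alone.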

Cite

Text

Shen et al. "Training Dynamics of Learning 3D-Rotational Equivariance." Transactions on Machine Learning Research, 2025.

Markdown

[Shen et al. "Training Dynamics of Learning 3D-Rotational Equivariance." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/shen2025tmlr-training/)

BibTeX

@article{shen2025tmlr-training,
  title     = {{Training Dynamics of Learning 3D-Rotational Equivariance}},
  author    = {Shen, Max W and Nowara, Ewa and Maser, Michael and Cho, Kyunghyun},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/shen2025tmlr-training/}
}