O$n$ Learning Deep O($n$)-Equivariant Hyperspheres
Abstract
In this paper, we utilize hyperspheres and regular $n$-simplexes and propose an approach to learning deep features equivariant under the transformations of $n$D reflections and rotations, encompassed by the powerful group of O$(n)$. Namely, we propose O$(n)$-equivariant neurons with spherical decision surfaces that generalize to any dimension $n$, which we call Deep Equivariant Hyperspheres. We demonstrate how to combine them in a network that directly operates on the basis of the input points and propose an invariant operator based on the relation between two points and a sphere, which, as we show, turns out to be a Gram matrix. Using synthetic and real-world data in $n$D, we experimentally verify our theoretical contributions and find that our approach is superior to the competing methods for O$(n)$-equivariant benchmark datasets (classification and regression), demonstrating a favorable speed/performance trade-off. The code is available on GitHub.
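The invariant operator mentioned in the abstract reduces to a Gram matrix, and Gram matrices of point configurations are unchanged by $n$D rotations and reflections. The following is a minimal NumPy sketch, not the paper's implementation, that only illustrates this O$(n)$-invariance numerically; the variable names (`X`, `R`) and the random orthogonal matrix construction are illustrative assumptions.

```python
import numpy as np

# Minimal illustration (not the paper's code): the Gram matrix of a point set
# is invariant under O(n), i.e., under nD rotations and reflections.
rng = np.random.default_rng(0)

n, k = 5, 7                       # ambient dimension n, number of points k
X = rng.standard_normal((n, k))   # columns are points in R^n

# Sample a random orthogonal matrix R in O(n) via QR decomposition.
R, _ = np.linalg.qr(rng.standard_normal((n, n)))   # R^T R = I, det(R) = +/-1

gram_before = X.T @ X             # Gram matrix of the original points
gram_after = (R @ X).T @ (R @ X)  # Gram matrix after transforming the points

# (R X)^T (R X) = X^T R^T R X = X^T X, so the Gram matrix is unchanged.
print(np.allclose(gram_before, gram_after))  # True
```

Because $(RX)^\top (RX) = X^\top R^\top R X = X^\top X$ for any orthogonal $R$, any quantity computed from the Gram matrix is O$(n)$-invariant by construction.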
Cite
Text
Melnyk et al. "O$n$ Learning Deep O($n$)-Equivariant Hyperspheres." International Conference on Machine Learning, 2024.
Markdown
[Melnyk et al. "O$n$ Learning Deep O($n$)-Equivariant Hyperspheres." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/melnyk2024icml-learning/)
BibTeX
@inproceedings{melnyk2024icml-learning,
title = {{O$n$ Learning Deep O($n$)-Equivariant Hyperspheres}},
author = {Melnyk, Pavlo and Felsberg, Michael and Wadenbäck, Mårten and Robinson, Andreas and Le, Cuong},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {35324--35339},
volume = {235},
url = {https://mlanthology.org/icml/2024/melnyk2024icml-learning/}
}