Geometric Algebra Transformers for Large 3D Meshes via Cross-Attention
Abstract
Surface and volume meshes of 3D anatomical structures are widely used in biomedical engineering and medicine. The advent of machine learning has enabled viable applications that come with the unique challenge of applying deep neural networks to large 3D meshes. In this work, we scale the recently introduced geometric algebra transformer (GATr) to meshes with hundreds of thousands of vertices by projecting onto a coarser set of vertices via cross-attention. The resulting neural network inherits GATr's equivariance under rotation, translation, and reflection, which are desirable properties when dealing with 3D objects.
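The cross-attention projection described in the abstract can be pictured as a small set of coarse query tokens attending over all fine-mesh vertices. Below is a minimal sketch of that pattern in plain PyTorch; the module name, the use of scalar features, and the random vertex subsampling are illustrative assumptions, not the authors' implementation, since the actual GATr layers operate on geometric-algebra multivectors to retain the equivariance properties mentioned above.

```python
import torch
import torch.nn as nn

class CrossAttentionPooling(nn.Module):
    """Pool features from a fine vertex set onto a coarser vertex subset
    via multi-head cross-attention. Illustrative sketch only: it uses plain
    scalar features, whereas GATr acts on geometric-algebra multivectors
    to stay equivariant under rotation, translation, and reflection."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fine: torch.Tensor, coarse_idx: torch.Tensor) -> torch.Tensor:
        # fine: (batch, N, dim) features on all mesh vertices
        # coarse_idx: (M,) indices selecting the coarser vertex subset
        queries = fine[:, coarse_idx]               # (batch, M, dim)
        pooled, _ = self.attn(queries, fine, fine)  # coarse tokens attend to all vertices
        return self.norm(queries + pooled)          # residual connection + layer norm


# Toy usage: compress 10,000 vertex features into 256 coarse tokens.
feats = torch.randn(1, 10_000, 64)
idx = torch.randperm(10_000)[:256]  # random subset as a stand-in for a real mesh coarsening
coarse = CrossAttentionPooling(dim=64)(feats, idx)  # -> (1, 256, 64)
```

The key property of this pattern is that attention cost scales with M x N rather than N x N, which is what makes meshes with hundreds of thousands of vertices tractable.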
Cite
Text
Suk et al. "Geometric Algebra Transformers for Large 3D Meshes via Cross-Attention." ICML 2024 Workshops: GRaM, 2024.Markdown
[Suk et al. "Geometric Algebra Transformers for Large 3D Meshes via Cross-Attention." ICML 2024 Workshops: GRaM, 2024.](https://mlanthology.org/icmlw/2024/suk2024icmlw-geometric/)BibTeX
@inproceedings{suk2024icmlw-geometric,
title = {{Geometric Algebra Transformers for Large 3D Meshes via Cross-Attention}},
author = {Suk, Julian and De Haan, Pim and Imre, Baris and Wolterink, Jelmer M.},
booktitle = {ICML 2024 Workshops: GRaM},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/suk2024icmlw-geometric/}
}