Towards Flexible, Efficient, and Effective Tensor Product Networks
Abstract
Geometric graph neural networks have showcased exceptional performance in modelling geometric data. These models rely heavily on equivariant operations, encompassing vital techniques such as scalarization and the Clebsch-Gordan tensor product. However, tensor-product-based architectures face substantial computational costs as the representation order increases, which significantly limits their versatility. Moreover, the interactions between steerable components remain difficult to interpret. In contrast, scalarization methods benefit from cost-efficient invariant scalar operations yet can still outperform certain tensor-product-based models. To bridge the gap between these approaches, we introduce a conceptual framework that emphasizes the potential flexibility in designing tensor product networks. To guide efficient framework design and gain deeper insight into steerable components, we conduct a preliminary investigation that prunes tensor product interactions. Pruning lets us directly assess the redundancy and significance of individual steerable components, paving the way for efficient and effective designs.
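To make the pruning idea concrete, here is a minimal sketch of removing Clebsch-Gordan tensor product paths using the e3nn library. The specific irreps, the keep criterion (`i_in1 == 0`, i.e. paths whose first input is a scalar), and the shapes are illustrative assumptions for demonstration, not the configuration studied in the paper.

```python
# Minimal sketch: prune paths of a Clebsch-Gordan tensor product (e3nn).
# Irreps and the pruning criterion below are assumed for illustration only.
import torch
from e3nn import o3

irreps_in1 = o3.Irreps("2x0e + 1x1o")
irreps_in2 = o3.Irreps("1x0e + 1x1o")
irreps_out = o3.Irreps("2x0e + 2x1o")

# Full tensor product: every symmetry-allowed path between the input and
# output irreps gets its own learnable weights.
full_tp = o3.FullyConnectedTensorProduct(irreps_in1, irreps_in2, irreps_out)

# Prune: keep only a subset of paths (here, hypothetically, those whose
# first input irrep is the scalar block, i_in1 == 0) and rebuild a smaller
# TensorProduct from the surviving instructions.
pruned_instructions = [
    (ins.i_in1, ins.i_in2, ins.i_out, ins.connection_mode, ins.has_weight)
    for ins in full_tp.instructions
    if ins.i_in1 == 0  # assumed pruning criterion, for illustration
]
pruned_tp = o3.TensorProduct(irreps_in1, irreps_in2, irreps_out,
                             pruned_instructions)

x1 = irreps_in1.randn(16, -1)
x2 = irreps_in2.randn(16, -1)
# Both modules map to the same output irreps; the pruned one uses fewer
# weights, so comparing the two isolates the contribution of dropped paths.
print(full_tp(x1, x2).shape, pruned_tp(x1, x2).shape)
print(full_tp.weight_numel, pruned_tp.weight_numel)
```

Because the pruned module keeps the same output irreps, swapping it in for the full product gives a direct way to measure how much each family of steerable interactions contributes, which is the kind of redundancy probe the abstract describes.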
Cite
Text
Wang et al. "Towards Flexible, Efficient, and Effective Tensor Product Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.

Markdown
[Wang et al. "Towards Flexible, Efficient, and Effective Tensor Product Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/wang2023neuripsw-flexible/)

BibTeX
@inproceedings{wang2023neuripsw-flexible,
title = {{Towards Flexible, Efficient, and Effective Tensor Product Networks}},
author = {Wang, Nanxiang and Lin, Chen and Bronstein, Michael and Torr, Philip},
booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/wang2023neuripsw-flexible/}
}