Logic and the 2-Simplicial Transformer
Abstract
We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
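The mechanism described in the abstract can be sketched as follows. This is a minimal illustrative implementation under stated assumptions: it uses an elementwise trilinear form (via `einsum`) as the three-way attention logit and a flattened outer product of two value vectors as the pairwise value; the paper's actual trilinear form and projection details may differ, and all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def two_simplicial_attention(Q, K1, K2, V1, V2):
    """Sketch of 2-simplicial attention over n entities.

    Each query q_i attends to ordered PAIRS of entities (j, k),
    scored by a trilinear form of (q_i, k1_j, k2_k); the value for
    a pair is the (flattened) tensor product of v1_j and v2_k.
    Shapes: Q, K1, K2, V1, V2 are all (n, d); output is (n, d*d).
    """
    n, d = Q.shape
    # logits[i, j, k]: elementwise trilinear form, scaled as in
    # dot-product attention (an assumption, for illustration)
    logits = np.einsum('id,jd,kd->ijk', Q, K1, K2) / np.sqrt(d)
    # Normalise over all n*n pairs (j, k) for each query i
    A = softmax(logits.reshape(n, -1), axis=-1).reshape(n, n, n)
    # pair_vals[j, k]: flattened outer product of the two value vectors
    pair_vals = np.einsum('jd,ke->jkde', V1, V2).reshape(n, n, d * d)
    # Weighted sum of pair values, one output row per query
    return np.einsum('ijk,jkf->if', A, pair_vals)
```

Note how this generalises ordinary dot-product attention: replacing the pair index (j, k) with a single index j and the tensor product with a plain value vector recovers the standard softmax(QKᵀ)V update.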
Cite
Text
Clift et al. "Logic and the 2-Simplicial Transformer." International Conference on Learning Representations, 2020.
Markdown
[Clift et al. "Logic and the 2-Simplicial Transformer." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/clift2020iclr-logic/)
BibTeX
@inproceedings{clift2020iclr-logic,
  title = {{Logic and the 2-Simplicial Transformer}},
  author = {Clift, James and Doryn, Dmitry and Murfet, Daniel and Wallbridge, James},
  booktitle = {International Conference on Learning Representations},
  year = {2020},
  url = {https://mlanthology.org/iclr/2020/clift2020iclr-logic/}
}