Graph Convolutions Enrich the Self-Attention in Transformers!
Abstract
Transformers, renowned for their self-attention mechanism, have achieved state-of-the-art performance across various tasks in natural language processing, computer vision, time-series modeling, and more. However, one of the challenges with deep Transformer models is the oversmoothing problem, where representations across layers converge to indistinguishable values, leading to significant performance degradation. We interpret the original self-attention as a simple graph filter and redesign it from a graph signal processing (GSP) perspective. We propose graph-filter-based self-attention (GFSA), which learns a more general yet effective filter whose complexity is only slightly larger than that of the original self-attention mechanism. We demonstrate that GFSA improves the performance of Transformers in various fields, including computer vision, natural language processing, graph-level tasks, speech recognition, and code classification.
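To make the GSP view concrete, here is a minimal sketch (not the authors' code; the weights w0, w1, wK, the order K_order, and the function names are illustrative assumptions). It treats the row-stochastic softmax attention matrix as a normalized adjacency matrix: standard attention applies the first-order filter A̅V, while a polynomial graph filter such as (w0·I + w1·A̅ + wK·A̅^K)V generalizes it. The exact GFSA formulation in the paper differs in how it parameterizes and approximates the higher-order term.

```python
# Illustrative sketch of the graph-filter view of self-attention.
# Weights and filter order are arbitrary choices for demonstration.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    """Standard scaled dot-product attention: A_bar @ V (a first-order graph filter)."""
    d = Q.shape[-1]
    A_bar = softmax(Q @ K.T / np.sqrt(d))  # row-stochastic "adjacency" matrix
    return A_bar @ V

def graph_filter_attention(Q, K, V, w0=0.2, w1=1.0, wK=0.3, K_order=3):
    """Polynomial graph filter applied to the attention matrix (illustrative, not GFSA itself)."""
    d = Q.shape[-1]
    A_bar = softmax(Q @ K.T / np.sqrt(d))
    n = A_bar.shape[0]
    filt = w0 * np.eye(n) + w1 * A_bar + wK * np.linalg.matrix_power(A_bar, K_order)
    return filt @ V

rng = np.random.default_rng(0)
n, d = 5, 8
Q, K, V = rng.standard_normal((3, n, d))
print(self_attention(Q, K, V).shape)          # (5, 8)
print(graph_filter_attention(Q, K, V).shape)  # (5, 8)
```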
Cite
Text
Choi et al. "Graph Convolutions Enrich the Self-Attention in Transformers!" Neural Information Processing Systems, 2024. doi:10.52202/079017-1676
Markdown
[Choi et al. "Graph Convolutions Enrich the Self-Attention in Transformers!" Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/choi2024neurips-graph/) doi:10.52202/079017-1676
BibTeX
@inproceedings{choi2024neurips-graph,
title = {{Graph Convolutions Enrich the Self-Attention in Transformers!}},
author = {Choi, Jeongwhan and Wi, Hyowon and Kim, Jayoung and Shin, Yehjin and Lee, Kookjin and Trask, Nathaniel and Park, Noseong},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-1676},
url = {https://mlanthology.org/neurips/2024/choi2024neurips-graph/}
}