Mitigating Over-Smoothing in Transformers via Regularized Nonlocal Functionals

Abstract

Transformers have achieved remarkable success in a wide range of natural language processing and computer vision applications. However, the representation capacity of a deep transformer model degrades due to the over-smoothing issue, in which token representations become identical as the model's depth grows. In this work, we show that self-attention layers in transformers minimize a functional that promotes smoothness, thereby causing token uniformity. We then propose a novel regularizer that penalizes the norm of the difference between the smooth output tokens from self-attention and the input tokens, preserving the fidelity of the tokens. Minimizing the resulting regularized energy functional, we derive the Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO), a novel class of transformer models that can mitigate the over-smoothing issue. We empirically demonstrate the advantages of NeuTRENO over baseline transformers and state-of-the-art methods in reducing the over-smoothing of token representations on various practical tasks, including object classification, image segmentation, and language modeling.
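
The sketch below illustrates the idea described in the abstract: if the total energy is (roughly) a smoothness term minimized by softmax attention plus a fidelity term of the form λ/2 · Σ‖u_i − f_i‖², then a descent step on that energy looks like a standard attention update plus an additive λ·(reference − current) correction that pulls tokens back toward the input. This is a minimal, hedged reading of the abstract only; the class name, the choice of applying the correction to token representations rather than value vectors, and the placement of the correction before the output projection are assumptions, not the authors' implementation.

import torch
from torch import nn

class FidelityRegularizedAttention(nn.Module):
    # Hypothetical sketch of a self-attention layer with a fidelity
    # correction in the spirit of the abstract: the smooth output of
    # softmax attention is nudged back toward a reference token sequence
    # (e.g., the tokens fed to the first layer) via lambda * (ref - x).
    # Names and the exact tensors entering the correction are assumptions.
    def __init__(self, dim, num_heads=8, lam=0.5):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.lam = lam
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, x_ref):
        # x:     (batch, tokens, dim)  current token representations
        # x_ref: (batch, tokens, dim)  reference tokens (e.g., layer-0 input)
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        out = attn.softmax(dim=-1) @ v                  # smoothness-promoting attention output
        out = out.transpose(1, 2).reshape(B, N, D)
        # Fidelity correction (assumed form): counteract over-smoothing by
        # pulling the smooth output back toward the reference tokens.
        out = out + self.lam * (x_ref - x)
        return self.proj(out)

# Example usage (shapes only); at the first layer the reference is the input itself.
x = torch.randn(2, 16, 64)                              # (batch, tokens, dim)
layer = FidelityRegularizedAttention(dim=64, num_heads=4, lam=0.6)
y = layer(x, x)

In a deep stack, one would presumably pass the same reference tokens (e.g., the input to the first attention layer) to every layer, so that representations at every depth stay anchored to the input rather than collapsing to a uniform vector.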

Cite

Text

Nguyen et al. "Mitigating Over-Smoothing in Transformers via Regularized Nonlocal Functionals." Neural Information Processing Systems, 2023.

Markdown

[Nguyen et al. "Mitigating Over-Smoothing in Transformers via Regularized Nonlocal Functionals." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/nguyen2023neurips-mitigating/)

BibTeX

@inproceedings{nguyen2023neurips-mitigating,
  title     = {{Mitigating Over-Smoothing in Transformers via Regularized Nonlocal Functionals}},
  author    = {Nguyen, Tam and Nguyen, Tan and Baraniuk, Richard},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/nguyen2023neurips-mitigating/}
}