Benign Overfitting in Single-Head Attention

Abstract

The phenomenon of benign overfitting, in which a trained neural network perfectly fits noisy training data yet still achieves near-optimal test performance, has been studied extensively in recent years for linear models and fully-connected/convolutional networks. In this work, we study benign overfitting in a single-head softmax attention model, the fundamental building block of Transformers. We prove that under appropriate conditions, the model exhibits benign overfitting in a classification setting after just two steps of gradient descent. Moreover, we give conditions under which a minimum-norm/maximum-margin interpolator exhibits benign overfitting. We study how the overfitting behavior depends on the signal-to-noise ratio (SNR) of the data distribution, namely, the ratio between the norms of the signal and noise tokens, and prove that a sufficiently large SNR is both necessary and sufficient for benign overfitting.
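To make the setting concrete, below is a minimal sketch of a single-head softmax attention classifier of the form f(X) = <v, X^T softmax(Xp)>: attention scores over the tokens are computed from a parameter vector p, the tokens are averaged with the resulting softmax weights, and a linear readout v produces the logit. The parameter names, the toy signal-plus-noise data, and the choice of aligning p and v with the signal direction are illustrative assumptions for this sketch, not the paper's exact construction or training procedure.

import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_classifier(X, p, v):
    """Single-head softmax attention classifier (sketch).

    X: (T, d) matrix of T token embeddings.
    p: (d,) attention parameter; token scores are X @ p.
    v: (d,) linear readout applied to the attended token average.
    Returns a real-valued logit; sign(...) gives the predicted label.
    """
    attn = softmax(X @ p)   # (T,) softmax attention weights over tokens
    pooled = attn @ X       # (d,) attention-weighted average of tokens
    return v @ pooled       # scalar logit

# Toy example: one "signal" token plus noise tokens. The norm of the
# signal relative to the noise tokens plays the role of the SNR.
rng = np.random.default_rng(0)
d, T = 16, 5
mu = np.zeros(d); mu[0] = 3.0   # signal direction; its norm controls SNR
X = rng.normal(size=(T, d))     # noise tokens
X[0] += mu                      # first token carries the signal
p = mu.copy()                   # attention aligned with the signal (assumed)
v = mu / np.linalg.norm(mu)     # readout aligned with the signal (assumed)
print(np.sign(attention_classifier(X, p, v)))  # predicted label

In this sketch, a larger signal norm makes the softmax concentrate on the signal token and the logit track the clean label, loosely mirroring the paper's finding that a sufficiently large SNR is needed for benign overfitting.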

Cite

Text

Magen et al. "Benign Overfitting in Single-Head Attention." NeurIPS 2024 Workshops: M3L, 2024.

Markdown

[Magen et al. "Benign Overfitting in Single-Head Attention." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/magen2024neuripsw-benign/)

BibTeX

@inproceedings{magen2024neuripsw-benign,
  title     = {{Benign Overfitting in Single-Head Attention}},
  author    = {Magen, Roey and Shang, Shuning and Xu, Zhiwei and Frei, Spencer and Hu, Wei and Vardi, Gal},
  booktitle = {NeurIPS 2024 Workshops: M3L},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/magen2024neuripsw-benign/}
}