Learning and Transferring Sparse Contextual Bigrams with Linear Transformers

Abstract

Transformers have achieved significant success in natural language modeling owing to their exceptional ability to combine contextual information and global knowledge, yet their theoretical basis remains unclear. In this paper, we first propose the Sparse Contextual Bigram (SCB), a natural extension of the classical bigram model, in which the generation of the next token depends on a sparse set of earlier positions determined by the last token. We investigate the training dynamics and sample complexity of learning SCB using a one-layer linear transformer with a gradient-based algorithm. We show that when trained from scratch, the training process splits into an initial sample-intensive stage, in which the correlation is boosted from zero to a nontrivial value, followed by a more sample-efficient stage of further improvement. Additionally, we prove that, provided a nontrivial correlation between the downstream and pretraining tasks, finetuning from a pretrained model allows us to bypass the initial sample-intensive stage. We also empirically demonstrate that our algorithm can outperform SGD in our setting.
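
To make the SCB data model concrete, the following is a minimal, purely illustrative sampler sketched from the abstract's description (the next token depends on a sparse set of earlier positions selected by the last token, followed by a bigram-style transition). The vocabulary size, sequence length, sparsity level, position sets S, and transition matrix P below are hypothetical choices for this sketch, not the paper's exact construction.

    # Illustrative SCB-style sampler (assumed formulation, not the paper's exact model).
    import numpy as np

    rng = np.random.default_rng(0)

    V = 8    # vocabulary size (hypothetical)
    T = 32   # sequence length (hypothetical)
    k = 3    # sparsity: number of earlier positions each last token can select

    # For each possible last token v, a sparse set of relative earlier positions.
    S = {v: rng.choice(np.arange(1, T // 2), size=k, replace=False) for v in range(V)}

    # A row-stochastic bigram transition matrix.
    P = rng.random((V, V))
    P /= P.sum(axis=1, keepdims=True)

    def sample_sequence():
        seq = list(rng.integers(0, V, size=T // 2))  # random prefix
        for _ in range(T // 2, T):
            last = seq[-1]
            # The last token determines which sparse earlier positions are relevant.
            offsets = [o for o in S[last] if o <= len(seq)]
            src = seq[-rng.choice(offsets)]
            # The next token follows a bigram transition from the selected source token.
            seq.append(rng.choice(V, p=P[src]))
        return seq

    print(sample_sequence())

In this sketch, the "sparse contextual" part is the lookup S[last], and the "bigram" part is the transition P applied to the token found at the selected earlier position.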

Cite

Text

Ren et al. "Learning and Transferring Sparse Contextual Bigrams with Linear Transformers." Neural Information Processing Systems, 2024. doi:10.52202/079017-0642

Markdown

[Ren et al. "Learning and Transferring Sparse Contextual Bigrams with Linear Transformers." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ren2024neurips-learning/) doi:10.52202/079017-0642

BibTeX

@inproceedings{ren2024neurips-learning,
  title     = {{Learning and Transferring Sparse Contextual Bigrams with Linear Transformers}},
  author    = {Ren, Yunwei and Wang, Zixuan and Lee, Jason D.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0642},
  url       = {https://mlanthology.org/neurips/2024/ren2024neurips-learning/}
}