Do Efficient Transformers Really Save Computation?

Abstract

As transformer-based language models are trained on increasingly large datasets and with vast numbers of parameters, finding more efficient alternatives to the standard Transformer has become very valuable. While many efficient Transformers and Transformer alternatives have been proposed, none provide theoretical guarantees that they are a suitable replacement for the standard Transformer. This makes it challenging to identify when to use a specific model and what directions to prioritize for further investigation. In this paper, we aim to understand the capabilities and limitations of efficient Transformers, specifically the Sparse Transformer and the Linear Transformer. We focus on their reasoning capability as exhibited by Chain-of-Thought (CoT) prompts and follow previous works to model them as Dynamic Programming (DP) problems. Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size. Nonetheless, we identify a class of DP problems for which these models can be more efficient than the standard Transformer. We confirm our theoretical results through experiments on representative DP tasks, adding to the understanding of efficient Transformers’ practical strengths and weaknesses.
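For context on what a Dynamic Programming (DP) task means here, the sketch below shows a classic DP recurrence (longest increasing subsequence), in which each state depends only on earlier states, mirroring the step-by-step structure of a CoT derivation. This is a generic illustration only, not taken from the paper; the representative DP tasks evaluated in the experiments may differ.

```python
# Illustrative sketch of a generic DP task (longest increasing subsequence).
# Not from the paper; shown only to make "DP problem" concrete.

def longest_increasing_subsequence(xs: list[int]) -> int:
    """Return the length of the longest strictly increasing subsequence.

    dp[i] holds the length of the longest increasing subsequence ending at
    index i. States are filled left to right, so each step depends only on
    previously computed states, the kind of sequential dependency a
    chain-of-thought derivation can spell out step by step.
    """
    n = len(xs)
    if n == 0:
        return 0
    dp = [1] * n
    for i in range(1, n):
        for j in range(i):
            if xs[j] < xs[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)


if __name__ == "__main__":
    print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 4
```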

Cite

Text

Yang et al. "Do Efficient Transformers Really Save Computation?" International Conference on Machine Learning, 2024.

Markdown

[Yang et al. "Do Efficient Transformers Really Save Computation?" International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/yang2024icml-efficient/)

BibTeX

@inproceedings{yang2024icml-efficient,
  title     = {{Do Efficient Transformers Really Save Computation?}},
  author    = {Yang, Kai and Ackermann, Jan and He, Zhenyu and Feng, Guhao and Zhang, Bohang and Feng, Yunzhen and Ye, Qiwei and He, Di and Wang, Liwei},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {55928--55947},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/yang2024icml-efficient/}
}