Looped Transformers Are Better at Learning Learning Algorithms

Abstract

Transformers have demonstrated effectiveness at solving data-fitting problems in-context from various (latent) models, as reported by Garg et al. (2022). However, the absence of an inherent iterative structure in the transformer architecture makes it difficult to emulate the iterative algorithms commonly employed in traditional machine learning methods. To address this, we propose using a looped transformer architecture and an associated training methodology, with the aim of incorporating iterative characteristics into the transformer architecture. Experimental results suggest that the looped transformer achieves performance comparable to the standard transformer on various data-fitting problems, while using less than 10% of the parameter count.
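The core idea described in the abstract, reusing a single transformer block across iterations so that depth comes from looping rather than from stacking distinct layers, can be sketched roughly as follows. This is a minimal illustration under assumed module names, hyperparameters, and loop count, not the authors' implementation.

```python
# Minimal sketch of a looped transformer: one shared block applied
# repeatedly (weights tied across iterations), mimicking the steps of
# an iterative algorithm. All names and sizes here are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn as nn


class LoopedTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_loops=12):
        super().__init__()
        # A single block, reused at every iteration (weight tying).
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.n_loops = n_loops

    def forward(self, x):
        # Apply the same block repeatedly; each pass plays the role of
        # one step of an iterative solver.
        for _ in range(self.n_loops):
            x = self.block(x)
        return x


# Usage: a batch of 8 in-context prompts, each a sequence of 16 tokens.
model = LoopedTransformer()
tokens = torch.randn(8, 16, 64)
out = model(tokens)
print(out.shape)  # torch.Size([8, 16, 64])
```

Because the looped model stores only one block's weights regardless of how many iterations it runs, its parameter count stays fixed, which is consistent with the abstract's claim of using a small fraction of a standard transformer's parameters.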

Cite

Text

Yang et al. "Looped Transformers Are Better at Learning Learning Algorithms." International Conference on Learning Representations, 2024.

Markdown

[Yang et al. "Looped Transformers Are Better at Learning Learning Algorithms." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/yang2024iclr-looped/)

BibTeX

@inproceedings{yang2024iclr-looped,
  title     = {{Looped Transformers Are Better at Learning Learning Algorithms}},
  author    = {Yang, Liu and Lee, Kangwook and Nowak, Robert D. and Papailiopoulos, Dimitris},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/yang2024iclr-looped/}
}