Trainable Transformer in Transformer

Abstract

Recent works attribute the capability of in-context learning (ICL) in large pre-trained language models to implicitly simulating and fine-tuning an internal model (e.g., a linear model or a 2-layer MLP) during inference. However, such constructions incur a large memory overhead, which makes simulating more sophisticated internal models intractable. In this work, we propose a new efficient construction, Transformer in Transformer (in short, TINT), that allows a transformer to simulate and fine-tune more complex models during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TINT model with fewer than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TINT accommodates many common transformer variants, and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TINT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe that TINT for an OPT-125M model improves performance by 4–16% absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines. To facilitate further work, a modular and extensible codebase for TINT is included.
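To make the abstract's core idea concrete, below is a minimal conceptual sketch, assuming PyTorch, of what "fine-tuning an internal model within a single forward pass" means: one inference call computes a loss on in-context demonstrations, takes a single gradient step on a small inner model, and then answers the query with the updated weights. This is not the TINT construction itself (TINT realizes the update using the outer transformer's attention and MLP layers rather than autograd), and all names here are illustrative, not from the paper's codebase.

```python
# Conceptual sketch only: one "inference call" that internally performs a
# single fine-tuning step on an inner model before answering a query.
# The inner model here is a toy linear model; the paper simulates a full
# pre-trained transformer (e.g., OPT-125M) inside a larger TINT model.
import torch
import torch.nn.functional as F


class InnerLinearModel(torch.nn.Module):
    """Stand-in for the simulated internal model."""

    def __init__(self, dim: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight


def in_context_finetune_and_predict(
    inner: InnerLinearModel,
    demo_x: torch.Tensor,   # in-context demonstration inputs
    demo_y: torch.Tensor,   # in-context demonstration targets
    query_x: torch.Tensor,  # query input to answer after the update
    lr: float = 0.1,
) -> torch.Tensor:
    """Single forward call that includes one internal SGD step."""
    # Loss on the demonstrations provides the inner model's training signal.
    loss = F.mse_loss(inner(demo_x), demo_y)
    # One explicit gradient step (the "limited one-step budget" setting).
    (grad,) = torch.autograd.grad(loss, inner.weight)
    updated_weight = inner.weight - lr * grad
    # Answer the query with the internally fine-tuned weights.
    return query_x @ updated_weight


if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 4
    true_w = torch.randn(dim)
    demo_x = torch.randn(8, dim)
    demo_y = demo_x @ true_w
    query_x = torch.randn(dim)

    inner = InnerLinearModel(dim)
    pred = in_context_finetune_and_predict(inner, demo_x, demo_y, query_x)
    print("prediction after one internal step:", pred.item())
```

The point of the sketch is the control flow: the gradient step happens inside a single call, which is the behavior TINT emulates end-to-end with transformer layers instead of an explicit optimizer.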

Cite

Text

Panigrahi et al. "Trainable Transformer in Transformer." International Conference on Machine Learning, 2024.

Markdown

[Panigrahi et al. "Trainable Transformer in Transformer." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/panigrahi2024icml-trainable/)

BibTeX

@inproceedings{panigrahi2024icml-trainable,
  title     = {{Trainable Transformer in Transformer}},
  author    = {Panigrahi, Abhishek and Malladi, Sadhika and Xia, Mengzhou and Arora, Sanjeev},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {39448--39492},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/panigrahi2024icml-trainable/}
}