The Remarkable Robustness of LLMs: Stages of Inference?

Abstract

We demonstrate and investigate the remarkable robustness of Large Language Models by deleting and swapping adjacent layers. We find that deleting and swapping interventions retain 72-95% of the original model's prediction accuracy without fine-tuning, and that models with more layers exhibit more robustness. Based on the results of the layer-wise interventions and further experiments, we hypothesize the existence of four universal stages of inference across eight different models: detokenization, feature engineering, prediction ensembling, and residual sharpening. The first stage integrates local information, lifting raw token representations into higher-level contextual representations. Next is the iterative refinement of task- and entity-specific features. Then, the second half of the model begins with a phase transition, where hidden representations align more closely with the vocabulary space due to specialized model components. Finally, the last layer sharpens the next-token distribution by eliminating obsolete features that add noise to the prediction.
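A minimal sketch of the two interventions the abstract describes, layer deletion and adjacent-layer swapping, is shown below. It uses Hugging Face's GPT-2 as a stand-in model; the layer index, prompt, and helper names are illustrative assumptions, not the authors' evaluation code, which measures accuracy over a full dataset rather than a single prompt.

```python
# Sketch (not the paper's code): delete one transformer block, or swap two
# adjacent blocks, in GPT-2 and compare next-token predictions. Layer index
# 6 and the prompt are arbitrary illustrative choices.
import copy
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
base = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def delete_layer(model, i):
    """Return a copy of the model with transformer block i removed."""
    m = copy.deepcopy(model)
    blocks = list(m.transformer.h)
    del blocks[i]
    m.transformer.h = torch.nn.ModuleList(blocks)
    m.config.n_layer = len(blocks)
    return m

def swap_adjacent_layers(model, i):
    """Return a copy of the model with blocks i and i+1 exchanged."""
    m = copy.deepcopy(model)
    blocks = list(m.transformer.h)
    blocks[i], blocks[i + 1] = blocks[i + 1], blocks[i]
    m.transformer.h = torch.nn.ModuleList(blocks)
    return m

prompt = "The Eiffel Tower is located in the city of"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for name, m in [("original", base),
                ("delete layer 6", delete_layer(base, 6)),
                ("swap layers 6/7", swap_adjacent_layers(base, 6))]:
    with torch.no_grad():
        # use_cache=False: swapped/deleted blocks keep stale layer indices,
        # which only matter for KV caching.
        logits = m(ids, use_cache=False).logits[0, -1]
    top = tokenizer.decode(logits.argmax().item())
    print(f"{name:>16}: next token = {top!r}")
```

On a robust model, all three variants often agree on the next token, which is the phenomenon the paper quantifies across layers and model families.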

Cite

Text

Lad et al. "The Remarkable Robustness of LLMs: Stages of Inference?" ICML 2024 Workshops: MI, 2024.

Markdown

[Lad et al. "The Remarkable Robustness of LLMs: Stages of Inference?" ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/lad2024icmlw-remarkable/)

BibTeX

@inproceedings{lad2024icmlw-remarkable,
  title     = {{The Remarkable Robustness of LLMs: Stages of Inference?}},
  author    = {Lad, Vedang and Gurnee, Wes and Tegmark, Max},
  booktitle = {ICML 2024 Workshops: MI},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/lad2024icmlw-remarkable/}
}