N-Gram Induction Heads for In-Context RL: Improving Stability and Reducing Data Needs
Abstract
In-context learning allows models like transformers to adapt to new tasks from a few examples without updating their weights, a desirable trait for reinforcement learning (RL). However, existing in-context RL methods, such as Algorithm Distillation (AD), demand large, carefully curated datasets and can be unstable and costly to train due to the transient nature of in-context learning abilities. In this work, we integrate n-gram induction heads into transformers for in-context RL. By incorporating these n-gram attention patterns, we significantly reduce the data required for generalization, needing up to 27 times fewer transitions in the Key-to-Door environment, and ease the training process by making models less sensitive to hyperparameters. Our approach not only matches but often surpasses the performance of AD, demonstrating the potential of n-gram induction heads to enhance the efficiency of in-context RL.
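For intuition, the sketch below illustrates the kind of n-gram induction pattern the abstract refers to: a hard attention rule in which position t attends to earlier positions whose preceding n tokens match the n most recent tokens at t, so the attended values are the tokens that previously followed the same n-gram. The function name, the uniform weighting, and the numpy-only setup are illustrative assumptions, not the paper's implementation.

```python
# Minimal illustrative sketch (not the paper's implementation): a hard
# n-gram induction attention pattern over a 1-D token sequence.
import numpy as np

def ngram_induction_attention(tokens: np.ndarray, n: int) -> np.ndarray:
    """Return a (T, T) causal attention matrix where position t attends
    uniformly to positions s <= t whose preceding n tokens equal the
    n most recent tokens at t."""
    T = len(tokens)
    attn = np.zeros((T, T))
    for t in range(n - 1, T):
        context = tuple(tokens[t - n + 1 : t + 1])       # n most recent tokens at t
        matches = [s for s in range(n, t + 1)
                   if tuple(tokens[s - n : s]) == context]
        for s in matches:
            attn[t, s] = 1.0 / len(matches)              # uniform weight over matches
    return attn

if __name__ == "__main__":
    # The 2-gram (1, 2) was previously followed by 3, so the final position
    # attends to the positions holding those continuations.
    seq = np.array([1, 2, 3, 1, 2, 3, 1, 2])
    print(ngram_induction_attention(seq, n=2))
```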
Cite
Text
Zisman et al. "N-Gram Induction Heads for In-Context RL: Improving Stability and Reducing Data Needs." NeurIPS 2024 Workshops: AFM, 2024.
Markdown
[Zisman et al. "N-Gram Induction Heads for In-Context RL: Improving Stability and Reducing Data Needs." NeurIPS 2024 Workshops: AFM, 2024.](https://mlanthology.org/neuripsw/2024/zisman2024neuripsw-ngram/)
BibTeX
@inproceedings{zisman2024neuripsw-ngram,
  title = {{N-Gram Induction Heads for In-Context RL: Improving Stability and Reducing Data Needs}},
  author = {Zisman, Ilya and Nikulin, Alexander and Polubarov, Andrei and Lyubaykin, Nikita and Kurenkov, Vladislav},
  booktitle = {NeurIPS 2024 Workshops: AFM},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/zisman2024neuripsw-ngram/}
}