Pay Attention to What Matters
Abstract
Despite the remarkable success of Large Language Models (LLMs), they still exhibit a limited ability to align their outputs with user instructions. In this work, we introduce a simple and effective method, which we name GUIDE, that mechanistically increases attention scores on instruction tokens. To support this operation, we present Influence, a novel metric that highlights how the user's instructions propagate through transformer layers and impact the LLM output. Our results show that GUIDE improves the accuracy of following certain instructions from 29.4% to 60.4%, outperforming natural prompting alternatives.
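The abstract describes GUIDE as mechanistically increasing attention scores on instruction tokens. As a rough illustration of that idea (not the paper's actual implementation; the bias value `delta`, the function name, and the masking scheme are assumptions for this sketch), one can add a constant bias to the attention logits of columns corresponding to instruction tokens before the softmax:

```python
import numpy as np

def biased_attention(scores, instruction_mask, delta=2.0):
    """Sketch of attention biasing toward instruction tokens.

    scores: (seq, seq) array of raw attention logits.
    instruction_mask: boolean (seq,) array, True where the key token
        belongs to the user instruction.
    delta: hypothetical additive bias applied to instruction columns.
    """
    # Add the bias to every row's logits at instruction-token positions.
    biased = scores + delta * instruction_mask[None, :]
    # Standard row-wise softmax with max-subtraction for stability.
    biased = biased - biased.max(axis=-1, keepdims=True)
    w = np.exp(biased)
    return w / w.sum(axis=-1, keepdims=True)
```

With uniform logits, the biased softmax shifts probability mass toward the masked (instruction) positions while each row still sums to one.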
Cite
Text
Silva et al. "Pay Attention to What Matters." NeurIPS 2024 Workshops: MINT, 2024.
Markdown
[Silva et al. "Pay Attention to What Matters." NeurIPS 2024 Workshops: MINT, 2024.](https://mlanthology.org/neuripsw/2024/silva2024neuripsw-pay/)
BibTeX
@inproceedings{silva2024neuripsw-pay,
title = {{Pay Attention to What Matters}},
author = {Silva, Pedro Luiz and Ayed, Fadhel and De Domenico, Antonio and Maatouk, Ali},
booktitle = {NeurIPS 2024 Workshops: MINT},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/silva2024neuripsw-pay/}
}