How Do Llamas Process Multilingual Text? A Latent Exploration Through Activation Patching
Abstract
A central question in multilingual language modeling is whether large language models (LLMs) develop a universal concept representation, disentangled from specific languages. In this paper, we address this question by analyzing Llama-2's forward pass during a word translation task. We strategically extract latents from a source translation prompt and insert them into the forward pass on a target translation prompt. By doing so, we find that the output language is encoded in the latent at an earlier layer than the concept to be translated. Building on this insight, we conduct two key experiments. First, we demonstrate that we can change the concept without changing the language and vice versa through activation patching alone. Second, we show that patching with the mean over latents across different language pairs does not impair the model's performance in translating the concept. Our results provide evidence for the existence of language-agnostic concept representations within the model.
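As a rough illustration of the kind of activation patching the abstract describes, the sketch below captures a residual-stream latent from a source translation prompt and overwrites the corresponding latent during the forward pass on a target translation prompt. It assumes Llama-2 loaded through Hugging Face transformers and standard PyTorch forward hooks; the layer index, token position, and prompts are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal activation-patching sketch (assumptions: Llama-2 via transformers,
# a hypothetical patching layer, and illustrative translation prompts).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 15      # hypothetical layer at which to patch
POSITION = -1   # patch the latent at the last prompt token

source_prompt = 'Français: "fleur" - Deutsch: "'   # source translation prompt (illustrative)
target_prompt = 'Italiano: "montagna" - English: "'  # target translation prompt (illustrative)

captured = {}

def capture_hook(module, inputs, output):
    # Decoder layers return a tuple; hidden states are the first element.
    hidden = output[0] if isinstance(output, tuple) else output
    captured["latent"] = hidden[:, POSITION, :].detach().clone()

def patch_hook(module, inputs, output):
    # Overwrite the target prompt's latent with the stored source latent.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden.clone()
    hidden[:, POSITION, :] = captured["latent"]
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

layer = model.model.layers[LAYER]

# 1) Run the source prompt and record the latent at the chosen layer/position.
handle = layer.register_forward_hook(capture_hook)
with torch.no_grad():
    model(**tok(source_prompt, return_tensors="pt"))
handle.remove()

# 2) Run the target prompt with the captured latent patched in.
handle = layer.register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(**tok(target_prompt, return_tensors="pt")).logits
handle.remove()

print(tok.decode(logits[0, -1].argmax().item()))  # model's next-token prediction
```

Inspecting the patched prediction across layers is what lets one ask whether the output language or the translated concept is carried by the latent at that depth.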
Cite
Text
Dumas et al. "How Do Llamas Process Multilingual Text? A Latent Exploration Through Activation Patching." ICML 2024 Workshops: MI, 2024.
Markdown
[Dumas et al. "How Do Llamas Process Multilingual Text? A Latent Exploration Through Activation Patching." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/dumas2024icmlw-llamas/)
BibTeX
@inproceedings{dumas2024icmlw-llamas,
  title = {{How Do Llamas Process Multilingual Text? A Latent Exploration Through Activation Patching}},
  author = {Dumas, Clément and Veselovsky, Veniamin and Monea, Giovanni and West, Robert and Wendler, Chris},
  booktitle = {ICML 2024 Workshops: MI},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/dumas2024icmlw-llamas/}
}