Human Alignment: How Much Do We Adapt to LLMs?

Abstract

Large Language Models (LLMs) are becoming a common part of our daily communication, yet most studies focus on improving these models, with fewer examining how they influence our behavior. Using a cooperative word game in which players aim to agree on a shared word, we investigate how people adapt their linguistic strategies when paired with either an LLM or another human. Our findings show that interactions with LLMs lead to more self-referential language and distinct alignment patterns, with users’ beliefs about their partners further modulating these effects. These findings highlight the reciprocal influence of human–AI dialogue and raise important questions about the long-term implications of embedding LLMs in everyday communication.

Cite

Text

Cazalets et al. "Human Alignment: How Much Do We Adapt to LLMs?" ICLR 2025 Workshops: Bi-Align, 2025.

Markdown

[Cazalets et al. "Human Alignment: How Much Do We Adapt to LLMs?" ICLR 2025 Workshops: Bi-Align, 2025.](https://mlanthology.org/iclrw/2025/tanguy2025iclrw-human/)

BibTeX

@inproceedings{tanguy2025iclrw-human,
  title     = {{Human Alignment: How Much Do We Adapt to LLMs?}},
  author    = {Cazalets, Tanguy and Janssens, Ruben and Belpaeme, Tony and Dambre, Joni},
  booktitle = {ICLR 2025 Workshops: Bi-Align},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/tanguy2025iclrw-human/}
}