Humans Linguistically Align to Their Conversational Partners, and Language Models Should Too

Abstract

Humankind has honed its language system over thousands of years to engage in statistical learning and form predictions about upcoming input, often based on the properties of, or prior conversational experience with, a specific conversational partner. Large language models, however, do not adapt their language in a user-specific manner. We argue that AI and ML researchers and developers should not ignore this critical component of human language processing but should instead incorporate it into LLM development, and that doing so will improve LLM conversational performance as well as users’ perceptions of models on dimensions such as accuracy and task success.

Cite

Text

Ostrand and Berger. "Humans Linguistically Align to Their Conversational Partners, and Language Models Should Too." ICML 2024 Workshops: LLMs_and_Cognition, 2024.

Markdown

[Ostrand and Berger. "Humans Linguistically Align to Their Conversational Partners, and Language Models Should Too." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/ostrand2024icmlw-humans/)

BibTeX

@inproceedings{ostrand2024icmlw-humans,
  title     = {{Humans Linguistically Align to Their Conversational Partners, and Language Models Should Too}},
  author    = {Ostrand, Rachel and Berger, Sara E},
  booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/ostrand2024icmlw-humans/}
}