Resolving Ambiguity Through Personalization in LLM Chat Systems
Abstract
This paper explores LLMs' ability to perform consistent personalized generation that incorporates user feedback. We first show that LLMs struggle to (1) utilize feedback consistently in long conversations, (2) reason about multiple pieces of partial or conflicting feedback, and (3) adapt to changing preferences within a conversation. These challenges show that selecting the right input information is crucial for improving multi-turn LLM performance. We propose a novel solution: building a **CoreSet** of past conversations, a principled approach to personalization. In addition to addressing the long-history, conflict, and preference-change challenges, coresets are an effective way to reduce input tokens, making these services more cost-effective. We show that our coreset algorithm outperforms state-of-the-art memory and personalization baselines on both synthetic and real-world ambiguity datasets.
Cite

Text

Sun et al. "Resolving Ambiguity Through Personalization in LLM Chat Systems." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.

Markdown

[Sun et al. "Resolving Ambiguity Through Personalization in LLM Chat Systems." ICLR 2025 Workshops: LLM_Reason_and_Plan, 2025.](https://mlanthology.org/iclrw/2025/sun2025iclrw-resolving/)

BibTeX
@inproceedings{sun2025iclrw-resolving,
  title     = {{Resolving Ambiguity Through Personalization in LLM Chat Systems}},
  author    = {Sun, Sophia Huiwen and Sankararaman, Abishek and Narayanaswamy, Balakrishnan Murali},
  booktitle = {ICLR 2025 Workshops: LLM_Reason_and_Plan},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/sun2025iclrw-resolving/}
}