ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models
Abstract
Research on Large Language Models (LLMs) has recently witnessed an increasing interest in extending models' context size to better capture dependencies within long documents. While benchmarks have been proposed to assess long-range abilities, existing efforts have primarily considered generic tasks that are not necessarily aligned with real-world applications. In contrast, our work proposes a new benchmark for long-context LLMs focused on a practical meeting assistant scenario. In this scenario, the long contexts consist of transcripts obtained by automatic speech recognition, which present unique challenges for LLMs due to the inherent noisiness and oral nature of such data. Our benchmark, named ELITR-Bench, augments the transcripts of the existing ELITR corpus with 271 manually crafted questions and their ground-truth answers. Our experiments with recent long-context LLMs on ELITR-Bench highlight a gap between open-source and proprietary models, especially when questions are asked sequentially within a conversation.
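To make the conversational evaluation setting concrete, below is a minimal sketch of how questions might be asked sequentially over a single meeting transcript, as the abstract describes. This is an illustrative assumption, not the paper's actual evaluation code: `query_llm` is a hypothetical stand-in for any long-context chat model API, and the prompt wording is invented for the example.

```python
def query_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to a long-context chat LLM.

    Replace with a real chat-completion API of your choice; the
    message format here mirrors the common role/content convention.
    """
    return "(model answer)"


def answer_questions_sequentially(transcript: str, questions: list[str]) -> list[str]:
    # The full (noisy, oral-style) ASR transcript is placed in the
    # context once, and each question is appended to the running
    # conversation so later questions can depend on earlier turns.
    messages = [{
        "role": "system",
        "content": "You are a meeting assistant. Answer questions about "
                   "the following meeting transcript.\n\n" + transcript,
    }]
    answers = []
    for question in questions:
        messages.append({"role": "user", "content": question})
        answer = query_llm(messages)
        messages.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers
```

Keeping earlier question-answer turns in the dialogue history is what distinguishes this conversational setting from asking each question independently, and it is the setting where the abstract reports the largest gap between open-source and proprietary models.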
Cite
Text
Thonet et al. "ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models." ICML 2024 Workshops: LCFM, 2024.
Markdown
[Thonet et al. "ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models." ICML 2024 Workshops: LCFM, 2024.](https://mlanthology.org/icmlw/2024/thonet2024icmlw-elitrbench/)
BibTeX
@inproceedings{thonet2024icmlw-elitrbench,
  title     = {{ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models}},
  author    = {Thonet, Thibaut and Rozen, Jos and Besacier, Laurent},
  booktitle = {ICML 2024 Workshops: LCFM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/thonet2024icmlw-elitrbench/}
}