Benchmarking Table Comprehension in the Wild

Abstract

Large Language Models (LLMs), despite their growing dominance across a myriad of knowledge-intensive tasks, have had only limited success in understanding lengthy table-text mixtures such as academic papers and financial reports. Recent advances in long-context LLMs have opened up new possibilities for this field. Nonetheless, we identify two roadblocks: (1) prior benchmarks for table question answering (TableQA) have focused on isolated tables without context, making it hard to evaluate models in real-world scenarios; and (2) prior benchmarks have targeted narrow skill sets of table comprehension, such as table recognition, data manipulation/calculation, and table summarization, whereas a skilled human employs these skills collectively. In this work, we introduce TableQuest, a new benchmark designed to evaluate the holistic table comprehension capabilities of LLMs in the natural table-rich context of financial reports. We employ a rigorous data processing and filtering procedure to ensure that the question-answer pairs are logical, reasonable, and diverse. We experiment with 7 state-of-the-art models and find that, despite reasonable accuracy in locating facts, they often falter when required to execute more sophisticated reasoning or multi-step calculations. We conclude with a qualitative study of the failure modes and discuss the challenges of constructing a challenging benchmark. We make the evaluation data and results of this study publicly available to facilitate research in this field.

Cite

Text

Pan et al. "Benchmarking Table Comprehension in the Wild." NeurIPS 2024 Workshops: TRL, 2024.

Markdown

[Pan et al. "Benchmarking Table Comprehension in the Wild." NeurIPS 2024 Workshops: TRL, 2024.](https://mlanthology.org/neuripsw/2024/pan2024neuripsw-benchmarking/)

BibTeX

@inproceedings{pan2024neuripsw-benchmarking,
  title     = {{Benchmarking Table Comprehension in the Wild}},
  author    = {Pan, Yikang and Zhu, Yi and Xie, Rand and Liu, Yizhi},
  booktitle = {NeurIPS 2024 Workshops: TRL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/pan2024neuripsw-benchmarking/}
}