Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data

Abstract

Large Language Models (LLMs) achieve competitive performance on a wide range of downstream tasks, yet existing work shows that inference on structured data remains challenging for them. This is because LLMs must either understand long structured inputs or select the most relevant evidence before inference, and neither approach is trivial. This paper proposes a framework, Learning to Reduce, that fine-tunes a language model with On-Policy Learning to generate a reduced version of the input structured data. Compared to state-of-the-art LLMs like GPT-4, Learning to Reduce not only achieves strong performance in reducing the input but also generalizes across datasets. We further show that a model fine-tuned with our framework helps LLMs perform better on table QA tasks, especially when the context is long.
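As the abstract describes it, the pipeline has two stages: a fine-tuned reducer first produces a compact version of the structured input (e.g., only the relevant rows of a table), and only that reduced context is handed to the downstream LLM for question answering. The sketch below is a minimal illustration of that two-stage flow, not the authors' implementation: the `reduce_table` keyword-overlap heuristic stands in for the fine-tuned reducer, and `answer_with_llm` is a hypothetical stub for whatever LLM backend is used.

```python
# Illustrative "reduce, then answer" flow for table QA.
# reduce_table is a toy heuristic standing in for the paper's fine-tuned
# reducer model; answer_with_llm is a hypothetical placeholder for an LLM call.

from typing import Dict, List


def reduce_table(table: List[Dict[str, str]], question: str, top_k: int = 3) -> List[Dict[str, str]]:
    """Keep only the rows sharing the most tokens with the question.

    A fine-tuned reducer (as proposed in the paper) would replace this heuristic.
    """
    q_tokens = set(question.lower().split())

    def overlap(row: Dict[str, str]) -> int:
        row_tokens = set(" ".join(row.values()).lower().split())
        return len(q_tokens & row_tokens)

    return sorted(table, key=overlap, reverse=True)[:top_k]


def answer_with_llm(context: str, question: str) -> str:
    """Hypothetical stub: send the reduced context plus question to an LLM."""
    prompt = f"Table:\n{context}\n\nQuestion: {question}\nAnswer:"
    return f"<LLM response to a prompt of {len(prompt)} characters>"


if __name__ == "__main__":
    table = [
        {"city": "Paris", "country": "France", "population": "2.1M"},
        {"city": "Lyon", "country": "France", "population": "0.5M"},
        {"city": "Berlin", "country": "Germany", "population": "3.6M"},
    ]
    question = "What is the population of Berlin?"

    reduced = reduce_table(table, question, top_k=1)
    context = "\n".join(", ".join(f"{k}: {v}" for k, v in row.items()) for row in reduced)
    print(answer_with_llm(context, question))
```

The point of the reduction step is that the downstream LLM sees a much shorter prompt, which is where the abstract reports the largest gains (long-context table QA).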

Cite

Text

Lee et al. "Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data." ICML 2024 Workshops: LCFM, 2024.

Markdown

[Lee et al. "Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data." ICML 2024 Workshops: LCFM, 2024.](https://mlanthology.org/icmlw/2024/lee2024icmlw-learning/)

BibTeX

@inproceedings{lee2024icmlw-learning,
  title     = {{Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data}},
  author    = {Lee, Younghun and Kim, Sungchul and Rossi, Ryan A. and Yu, Tong and Chen, Xiang},
  booktitle = {ICML 2024 Workshops: LCFM},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/lee2024icmlw-learning/}
}