Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs

Abstract

Large language models (LLMs) are increasingly applied to tabular tasks using in-context learning. The prompt representation for a table may play a role in the LLM's ability to process the table. Inspired by prior work, we generate a collection of self-supervised structural tasks (e.g., navigate to a cell and row; transpose the table) and evaluate the performance differences when using 8 formats. In contrast to past work, we introduce 8 noise operations inspired by real-world messy data and adversarial inputs, and show that such operations can impact LLM performance across formats for different structural understanding tasks.
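As a rough illustration of the setup the abstract describes, the sketch below serializes a small table into two candidate prompt formats and applies one plausible noise operation (shuffling column names so headers no longer align with their values). This is a minimal sketch assuming pandas; the helper name and the specific operator are illustrative assumptions, not the authors' code or the paper's exact operator set.

import random

import pandas as pd

df = pd.DataFrame({"city": ["Paris", "Oslo"], "pop_millions": [2.1, 0.7]})

# Two example serializations a prompt might use.
markdown_table = df.to_markdown(index=False)   # pipe-delimited markdown table
json_table = df.to_json(orient="records")      # list of row objects

def shuffle_column_names(table: pd.DataFrame) -> pd.DataFrame:
    """Illustrative noise operator: permute the header row."""
    names = list(table.columns)
    random.shuffle(names)
    return table.set_axis(names, axis=1)

# A structural-understanding prompt over the noisy table.
noisy = shuffle_column_names(df)
prompt = f"Transpose the following table:\n{noisy.to_markdown(index=False)}"
print(prompt)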

Cite

Text

Singha et al. "Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs." NeurIPS 2023 Workshops: TRL, 2023.

Markdown

[Singha et al. "Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs." NeurIPS 2023 Workshops: TRL, 2023.](https://mlanthology.org/neuripsw/2023/singha2023neuripsw-tabular/)

BibTeX

@inproceedings{singha2023neuripsw-tabular,
  title     = {{Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs}},
  author    = {Singha, Ananya and Cambronero, José and Gulwani, Sumit and Le, Vu and Parnin, Chris},
  booktitle = {NeurIPS 2023 Workshops: TRL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/singha2023neuripsw-tabular/}
}