HeGTa: Leveraging Heterogeneous Graph-Enhanced Large Language Models for Few-Shot Complex Table Understanding

Abstract

Table Understanding (TU) has achieved promising advancements, but it still faces two challenges: the scarcity of manually labeled tables and the prevalence of complex table structures. To address them, we propose HeGTa, a heterogeneous graph (HG)-enhanced large language model (LLM) designed for few-shot TU tasks. The framework aligns structural table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning. It further handles complex tables with a multi-task pre-training scheme that incorporates three novel multi-granularity self-supervised HG pretext tasks. We empirically demonstrate the effectiveness of HeGTa, showing that it outperforms state-of-the-art methods on several few-shot complex TU benchmarks.
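As a rough illustration of the core idea (not the authors' implementation), the sketch below shows how a table could be encoded as a heterogeneous graph of cell/row/column nodes and how the resulting node states could be projected into soft-prompt vectors prepended to an LLM's input embeddings. All module names, dimensions, the adjacency construction, and the chunk-mean pooling are illustrative assumptions.

```python
# Minimal sketch, assuming a table is represented as cell/row/column nodes
# with a row-normalised adjacency matrix. Names and sizes are hypothetical.
import torch
import torch.nn as nn


class TableGraphEncoder(nn.Module):
    """One round of mean-aggregation message passing over table-graph nodes."""

    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim], adj: [num_nodes, num_nodes] (row-normalised)
        neigh = adj @ x  # aggregate neighbour states
        return torch.relu(self.update(torch.cat([x, neigh], dim=-1)))


class SoftPromptProjector(nn.Module):
    """Map graph node states into k soft-prompt vectors in the LLM embedding space."""

    def __init__(self, graph_dim: int, llm_dim: int, num_prompts: int = 8):
        super().__init__()
        self.num_prompts = num_prompts
        self.proj = nn.Linear(graph_dim, llm_dim)

    def forward(self, node_states: torch.Tensor) -> torch.Tensor:
        # Pool node states into a fixed number of prompt slots (naive chunk mean).
        chunks = node_states.chunk(self.num_prompts, dim=0)
        pooled = torch.stack([c.mean(dim=0) for c in chunks])
        return self.proj(pooled)  # [num_prompts, llm_dim]


# Toy usage: a 3x3 table -> 9 cell nodes + 3 row nodes + 3 column nodes.
num_nodes, graph_dim, llm_dim = 15, 64, 4096
x = torch.randn(num_nodes, graph_dim)   # e.g. cell-text embeddings from a small encoder
adj = torch.eye(num_nodes)              # placeholder; real edges would link cells to rows/columns
node_states = TableGraphEncoder(graph_dim)(x, adj)
soft_prompts = SoftPromptProjector(graph_dim, llm_dim)(node_states)
print(soft_prompts.shape)               # torch.Size([8, 4096]), ready to prepend to token embeddings
```

In practice the prompt vectors would be concatenated with the instruction's token embeddings before the LLM forward pass, and the graph encoder and projector would be trained with the pre-training and instruction-tuning objectives described in the paper.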

Cite

Text

Jin et al. "HeGTa: Leveraging Heterogeneous Graph-Enhanced Large Language Models for Few-Shot Complex Table Understanding." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34606

Markdown

[Jin et al. "HeGTa: Leveraging Heterogeneous Graph-Enhanced Large Language Models for Few-Shot Complex Table Understanding." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/jin2025aaai-hegta/) doi:10.1609/AAAI.V39I23.34606

BibTeX

@inproceedings{jin2025aaai-hegta,
  title     = {{HeGTa: Leveraging Heterogeneous Graph-Enhanced Large Language Models for Few-Shot Complex Table Understanding}},
  author    = {Jin, Rihui and Li, Yu and Qi, Guilin and Hu, Nan and Li, Yuan-Fang and Chen, Jiaoyan and Wang, Jianan and Chen, Yongrui and Min, Dehai and Bi, Sheng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {24294--24302},
  doi       = {10.1609/AAAI.V39I23.34606},
  url       = {https://mlanthology.org/aaai/2025/jin2025aaai-hegta/}
}