HySem: A Context Length Optimized LLM Pipeline for Unstructured Tabular Extraction

Abstract

Regulatory compliance reporting in the pharmaceutical industry relies on detailed tables, but these are often under-utilized beyond compliance due to their unstructured format and arbitrary content. Extracting and semantically representing tabular data is challenging due to diverse table presentations. Large Language Models (LLMs) demonstrate substantial potential for semantic representation, yet they encounter challenges related to accuracy and context size limitations, which are crucial considerations for industry applications. We introduce HySem, a pipeline that employs a novel context length optimization technique to generate accurate semantic JSON representations from HTML tables. The approach uses a custom fine-tuned model designed for cost- and privacy-sensitive small and medium pharmaceutical enterprises. Running on commodity hardware with open-source models, HySem surpasses peer open-source models in accuracy, delivers competitive performance when benchmarked against OpenAI GPT-4o, and effectively addresses context length limitations, a crucial factor for supporting larger tables.
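To make the task concrete, the following is a minimal illustrative sketch of the HTML-table-to-JSON mapping the abstract describes. It is not HySem's actual pipeline (which applies an LLM and a context length optimization technique); it only shows, with a hypothetical pharmaceutical-style table, what a semantic JSON representation of a simple HTML table can look like, using Python's standard library.

```python
import json
from html.parser import HTMLParser


class TableExtractor(HTMLParser):
    """Collects cell text from a simple HTML table into rows."""

    def __init__(self):
        super().__init__()
        self.rows = []      # list of rows, each a list of cell strings
        self._row = None    # cells of the row being parsed, or None
        self._cell = None   # text fragments of the cell being parsed, or None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._cell is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)


def table_to_json(html: str) -> str:
    """Treat the first row as headers; emit one JSON object per data row."""
    parser = TableExtractor()
    parser.feed(html)
    header, *body = parser.rows
    records = [dict(zip(header, row)) for row in body]
    return json.dumps(records, indent=2)


# Hypothetical compliance-style table for illustration only.
html = """<table>
  <tr><th>Batch</th><th>Impurity (%)</th></tr>
  <tr><td>B-101</td><td>0.12</td></tr>
  <tr><td>B-102</td><td>0.08</td></tr>
</table>"""
print(table_to_json(html))
```

Real regulatory tables with merged cells, nested headers, and arbitrary content defeat such rule-based parsing, which is the gap the paper's LLM-based approach targets.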

Cite

Text

Pp and Iyer. "HySem: A Context Length Optimized LLM Pipeline for Unstructured Tabular Extraction." NeurIPS 2024 Workshops: TRL, 2024.

Markdown

[Pp and Iyer. "HySem: A Context Length Optimized LLM Pipeline for Unstructured Tabular Extraction." NeurIPS 2024 Workshops: TRL, 2024.](https://mlanthology.org/neuripsw/2024/pp2024neuripsw-hysem/)

BibTeX

@inproceedings{pp2024neuripsw-hysem,
  title     = {{HySem: A Context Length Optimized LLM Pipeline for Unstructured Tabular Extraction}},
  author    = {Pp, Narayanan and Iyer, Anantharaman Palacode Narayana},
  booktitle = {NeurIPS 2024 Workshops: TRL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/pp2024neuripsw-hysem/}
}