StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs

Abstract

As Large Language Models (LLMs) become integral to software development workflows, their ability to generate structured outputs has become critically important. We introduce $\textbf{StructEval}$, a comprehensive benchmark for evaluating LLMs' capabilities in producing both non-renderable (JSON, YAML, CSV) and renderable (HTML, React, SVG) structured formats. Unlike prior benchmarks, StructEval systematically evaluates structural fidelity across diverse formats through two paradigms: $\textbf{1)}$ generation tasks, which produce structured output from natural language prompts, and $\textbf{2)}$ conversion tasks, which translate between structured formats. Our benchmark encompasses 18 formats and 44 task types, with novel metrics for format adherence and structural correctness. Results reveal significant performance gaps: even state-of-the-art models like o1-mini achieve an average score of only $75.58$, with open-source alternatives lagging approximately $10$ points behind. We find generation tasks more challenging than conversion tasks, and producing correct visual content more difficult than generating text-only structures.
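To illustrate the format-adherence idea, the sketch below checks whether a model's raw output parses as the requested non-renderable format. This is a hypothetical helper using only standard-library parsers, not StructEval's actual metric code; the function name and the column-consistency rule for CSV are assumptions for illustration.

```python
import csv
import io
import json

def format_adherence(output: str, fmt: str) -> bool:
    """Return True if `output` parses as the requested format.

    Hypothetical helper illustrating a format-adherence check;
    it is not the benchmark's actual implementation.
    """
    try:
        if fmt == "json":
            json.loads(output)
        elif fmt == "csv":
            rows = list(csv.reader(io.StringIO(output)))
            # Require every non-empty row to have the same column count.
            widths = {len(row) for row in rows if row}
            if len(widths) != 1:
                return False
        else:
            raise ValueError(f"unsupported format: {fmt}")
    except (json.JSONDecodeError, csv.Error):
        return False
    return True
```

A structural-correctness metric would go further, comparing the parsed structure (keys, nesting, cell values) against a task-specific reference rather than merely checking parseability.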

Cite

Text

Yang et al. "StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs." Transactions on Machine Learning Research, 2026.

Markdown

[Yang et al. "StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/yang2026tmlr-structeval/)

BibTeX

@article{yang2026tmlr-structeval,
  title     = {{StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs}},
  author    = {Yang, Jialin and Jiang, Dongfu and He, Tony and Siu, Sherman and Zhang, Yuxuan and Liao, Disen and Li, Zhuofeng and Zeng, Huaye and Jia, Yiming and Wang, Haozhe and Schneider, Benjamin and Ruan, Chi and Ma, Wentao and Lyu, Zhiheng and Wang, Yifei and Lu, Yi and Do, Quy Duc and Jiang, Ziyan and Nie, Ping and Chen, Wenhu},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/yang2026tmlr-structeval/}
}