Report Cards: Qualitative Evaluation of LLMs Using Natural Language Summaries

Abstract

The generality and dynamic nature of large language models (LLMs) make it difficult for conventional quantitative benchmarks to accurately assess their capabilities. We propose Report Cards, which are human-interpretable, natural language summaries of model behavior for specific skills or topics. We develop a framework to evaluate Report Cards based on three criteria: specificity (ability to distinguish between models), faithfulness (accurate representation of model capabilities), and interpretability (clarity and relevance to humans). We also propose an iterative algorithm for generating Report Cards without human supervision. Through experimentation with popular LLMs, we demonstrate that Report Cards provide insights beyond traditional benchmarks and can help address the need for a more interpretable and holistic evaluation of LLMs.
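As a rough illustration of the specificity criterion mentioned in the abstract (a Report Card's ability to distinguish between models), the sketch below scores how often a judge can attribute an unlabeled excerpt to the model that produced it, given only the cards. The judge callable, prompt wording, and data layout are assumptions for illustration, not the paper's exact protocol.

from typing import Callable, Dict, List


def specificity_accuracy(
    report_cards: Dict[str, str],    # model name -> natural-language Report Card
    excerpts: List[Dict[str, str]],  # each item: {"model": name, "text": transcript excerpt}
    judge: Callable[[str], str],     # LLM call: prompt string -> predicted model name
) -> float:
    """Fraction of excerpts the judge attributes to the correct model using only the cards."""
    correct = 0
    for item in excerpts:
        card_block = "\n\n".join(f"[{name}]\n{card}" for name, card in report_cards.items())
        prompt = (
            "Below are Report Cards for several models, followed by an excerpt "
            "produced by one of them. Reply with the bracketed name of the most "
            f"likely source model.\n\n{card_block}\n\nExcerpt:\n{item['text']}"
        )
        if judge(prompt).strip() == item["model"]:
            correct += 1
    return correct / len(excerpts) if excerpts else 0.0

A higher accuracy under this kind of contrastive matching would indicate more specific cards; the other two criteria (faithfulness and interpretability) are evaluated separately in the paper.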

Cite

Text

Yang et al. "Report Cards: Qualitative Evaluation of LLMs Using Natural Language Summaries." NeurIPS 2024 Workshops: SoLaR, 2024.

Markdown

[Yang et al. "Report Cards: Qualitative Evaluation of LLMs Using Natural Language Summaries." NeurIPS 2024 Workshops: SoLaR, 2024.](https://mlanthology.org/neuripsw/2024/yang2024neuripsw-report/)

BibTeX

@inproceedings{yang2024neuripsw-report,
  title     = {{Report Cards: Qualitative Evaluation of LLMs Using Natural Language Summaries}},
  author    = {Yang, Blair and Cui, Fuyang and Paster, Keiran and Ba, Jimmy and Vaezipoor, Pashootan and Pitis, Silviu and Zhang, Michael R.},
  booktitle = {NeurIPS 2024 Workshops: SoLaR},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/yang2024neuripsw-report/}
}