SEAL: Suite for Evaluating API-Use of LLMs

Abstract

Large language models (LLMs) have limitations in handling tasks that require real-time access to external APIs. While several benchmarks like ToolBench and APIGen have been developed to assess LLMs' API-use capabilities, they often suffer from issues such as lack of generalizability, limited multi-step reasoning coverage, and instability due to real-time API fluctuations. In this paper, we introduce SEAL, an end-to-end testbed designed to evaluate LLMs in real-world API usage. SEAL standardizes existing benchmarks, integrates an agent system for testing API retrieval and planning, and addresses the instability of real-time APIs by introducing a GPT-4-powered API simulator with caching for deterministic evaluations. Our testbed provides a comprehensive evaluation pipeline that covers API retrieval, API calls, and final responses, offering a reliable framework for structured performance comparison in diverse real-world scenarios. SEAL is publicly available at https://github.com/EmergenceAI/seal-api-llms, with ongoing updates for new benchmarks.
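The deterministic-evaluation idea described in the abstract, caching each simulated API response so that repeated evaluation runs see identical outputs, can be sketched roughly as below. This is a minimal illustration, not SEAL's actual implementation; the `CachedAPISimulator` class and the injected `llm_simulate` callable are assumptions introduced for exposition.

```python
import json
from typing import Callable, Dict

class CachedAPISimulator:
    """Sketch: wrap an LLM-backed API simulator with a cache so repeated
    evaluation runs receive identical responses (deterministic evals).
    `llm_simulate` is an illustrative placeholder (e.g. a GPT-4-powered
    response generator), not part of SEAL's code."""

    def __init__(self, llm_simulate: Callable[[str, dict], str]):
        self.llm_simulate = llm_simulate       # generates a plausible API response
        self.cache: Dict[str, str] = {}        # keyed by the exact API call

    def call(self, api_name: str, arguments: dict) -> str:
        # Key on the full call signature so identical calls always replay
        # the same response, regardless of upstream API or model drift.
        key = json.dumps({"api": api_name, "args": arguments}, sort_keys=True)
        if key not in self.cache:               # first occurrence: query the simulator
            self.cache[key] = self.llm_simulate(api_name, arguments)
        return self.cache[key]                  # later occurrences: cached, deterministic
```

Keying the cache on the serialized call rather than on wall-clock state is what removes the real-time API fluctuations the abstract mentions from the evaluation loop.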

Cite

Text

Kim et al. "SEAL: Suite for Evaluating API-Use of LLMs." NeurIPS 2024 Workshops: OWA, 2024.

Markdown

[Kim et al. "SEAL: Suite for Evaluating API-Use of LLMs." NeurIPS 2024 Workshops: OWA, 2024.](https://mlanthology.org/neuripsw/2024/kim2024neuripsw-seal/)

BibTeX

@inproceedings{kim2024neuripsw-seal,
  title     = {{SEAL: Suite for Evaluating API-Use of LLMs}},
  author    = {Kim, Woojeong and Jagmohan, Ashish and Vempaty, Aditya},
  booktitle = {NeurIPS 2024 Workshops: OWA},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/kim2024neuripsw-seal/}
}