Model Provenance Testing for Large Language Models

Abstract

Large language models are increasingly customized through fine-tuning and other adaptations, creating challenges in enforcing licensing terms and managing downstream impacts: tracking model origins matters both for protecting intellectual property and for identifying derived models when vulnerabilities are discovered. We address this challenge by developing a framework for testing model provenance, i.e., whether one model is derived from another. Our approach is based on the key observation that real-world model derivations preserve significant similarities in model outputs that can be detected through statistical analysis. Using only black-box access to models, we employ multiple hypothesis testing to compare model similarities against a baseline established by unrelated models. On two comprehensive real-world benchmarks spanning models from 30M to 4B parameters and comprising over 600 models, our tester achieves 90-95% precision and 80-90% recall in identifying derived models. These results demonstrate the viability of systematic provenance verification in production environments even when only API access is available.
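
The idea can be pictured as a simple black-box comparison: query the candidate model and the claimed base model on a shared prompt set, measure how often their outputs agree, and ask whether that agreement is significantly higher than the agreement between the base model and models known to be unrelated. The Python sketch below illustrates this; it is not the authors' implementation, and the query helper, the prompt set, the reference-model pool, and the one-sided z-test with a fixed significance threshold are all illustrative assumptions.

# Illustrative sketch only -- not the paper's implementation.
# Assumes a hypothetical black-box helper query(model, prompt) -> next-token id,
# and a pool of reference models assumed to be unrelated to the base model.
import numpy as np
from scipy import stats

def agreement(model_a, model_b, prompts, query):
    """Fraction of prompts on which the two models emit the same next token."""
    return np.mean([query(model_a, p) == query(model_b, p) for p in prompts])

def provenance_test(candidate, base, reference_models, prompts, query, alpha=0.05):
    """Test whether `candidate` is plausibly derived from `base`.

    The observed candidate/base similarity is compared against a null baseline
    built from the similarities between `base` and unrelated reference models.
    """
    observed = agreement(candidate, base, prompts, query)
    baseline = np.array([agreement(m, base, prompts, query) for m in reference_models])

    # One-sided z-score against the unrelated-model baseline; when many
    # candidate/base pairs are tested, a multiple-testing correction
    # (e.g. Bonferroni on alpha) would be applied.
    z = (observed - baseline.mean()) / (baseline.std(ddof=1) + 1e-12)
    p_value = 1.0 - stats.norm.cdf(z)
    return p_value < alpha, p_value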

Cite

Text

Nikolic et al. "Model Provenance Testing for Large Language Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Nikolic et al. "Model Provenance Testing for Large Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/nikolic2025neurips-model/)

BibTeX

@inproceedings{nikolic2025neurips-model,
  title     = {{Model Provenance Testing for Large Language Models}},
  author    = {Nikolic, Ivica and Baluta, Teodora and Saxena, Prateek},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/nikolic2025neurips-model/}
}