Automated Detection of Pre-Training Text in Black-Box LLMs

Abstract

Detecting whether a given text is a member of the pre-training data of Large Language Models (LLMs) is crucial for ensuring data privacy and copyright protection. Most existing methods rely on the LLM's hidden information (e.g., model parameters or token probabilities), making them ineffective in the black-box setting, where only input and output texts are accessible. Although some methods have been proposed for the black-box setting, they rely on extensive manual effort, such as designing complicated questions or instructions. To address these issues, we propose VeilProbe, the first framework for automatically detecting LLMs' pre-training texts in a black-box setting without human intervention. VeilProbe utilizes a sequence-to-sequence mapping model to infer the latent mapping feature between the input text and the corresponding output suffix generated by the LLM. It then performs key token perturbations to obtain more distinguishable membership features. Additionally, considering real-world scenarios where ground-truth training text samples are limited, a prototype-based membership classifier is introduced to alleviate the overfitting issue. Extensive evaluations on three widely used datasets demonstrate that our framework is effective and superior in the black-box setting.
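To make the prototype-based classification idea concrete, here is a minimal illustrative sketch (not the authors' implementation; the feature representation and all names are assumptions). Each text is represented by a small membership-feature vector, and with few labeled samples, classifying by distance to per-class prototypes (mean vectors) is less prone to overfitting than fitting a high-capacity classifier.

```python
import numpy as np

def fit_prototypes(features, labels):
    """Compute one prototype (mean feature vector) per class label."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(prototypes, x):
    """Assign x to the class whose prototype is nearest in Euclidean distance."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# Toy data: label 1 = member (pre-training text), 0 = non-member.
# Features stand in for perturbation-induced membership signals.
rng = np.random.default_rng(0)
members = rng.normal(loc=1.0, scale=0.3, size=(5, 4))      # few labeled members
non_members = rng.normal(loc=0.0, scale=0.3, size=(5, 4))  # few labeled non-members
X = np.vstack([members, non_members])
y = np.array([1] * 5 + [0] * 5)

protos = fit_prototypes(X, y)
print(predict(protos, np.full(4, 0.9)))  # a query near the member prototype
```

In this low-data regime, the only parameters "learned" are the two class means, which is what makes the approach robust when ground-truth training samples are scarce.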

Cite

Text

Hu et al. "Automated Detection of Pre-Training Text in Black-Box LLMs." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/44

Markdown

[Hu et al. "Automated Detection of Pre-Training Text in Black-Box LLMs." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/hu2025ijcai-automated/) doi:10.24963/IJCAI.2025/44

BibTeX

@inproceedings{hu2025ijcai-automated,
  title     = {{Automated Detection of Pre-Training Text in Black-Box LLMs}},
  author    = {Hu, Ruihan and Shang, Yu-Ming and Peng, Jiankun and Luo, Wei and Wang, Yazhe and Zhang, Xi},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {385--393},
  doi       = {10.24963/IJCAI.2025/44},
  url       = {https://mlanthology.org/ijcai/2025/hu2025ijcai-automated/}
}