Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory

Abstract

Synthetic data has gained attention for training large language models, but poor-quality data can harm performance (see, e.g., Shumailov et al. (2023); Seddik et al. (2024)). A potential solution is data pruning, which retains only high-quality data based on a score function (human or machine feedback). Previous work (Feng et al., 2024) analyzed models trained on synthetic data as the sample size increases. We extend this by using random matrix theory to derive the performance of a binary classifier trained on a mix of real and pruned synthetic data in a high-dimensional setting. Our findings identify conditions under which synthetic data can improve performance, focusing on the quality of the generative model and the verification strategy. We also show a smooth phase transition in the synthetic label noise, in contrast to the sharp behavior reported in prior work for the infinite-sample limit. Experiments with toy models and large language models validate our theoretical results.
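To make the setup in the abstract concrete, the following is a minimal, hypothetical Python sketch of the pipeline it describes: synthetic samples from a weaker generator with label noise are pruned by a score function (a verifier), mixed with real data, and used to train a binary classifier. The Gaussian data model, the verifier, the noise level, and the threshold are illustrative assumptions, not the authors' exact protocol.

# Illustrative sketch only: real data plus score-pruned synthetic data,
# then a binary classifier trained on the mixture. All modeling choices
# below (Gaussian classes, verifier, noise level) are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_real, n_synth = 50, 500, 2000

# Real data: two Gaussian classes with means +/- mu.
mu = np.ones(d) / np.sqrt(d)
y_real = rng.choice([-1, 1], size=n_real)
X_real = y_real[:, None] * mu + rng.normal(size=(n_real, d))

# Synthetic data: a weaker generator (shrunken mean) with label noise eps.
eps, mu_synth = 0.2, 0.8 * mu
y_synth = rng.choice([-1, 1], size=n_synth)
X_synth = y_synth[:, None] * mu_synth + rng.normal(size=(n_synth, d))
y_synth_noisy = np.where(rng.random(n_synth) < eps, -y_synth, y_synth)

# Pruning: keep synthetic points whose (hypothetical) verifier score --
# here, agreement with a classifier fit on the real data -- is positive.
verifier = LogisticRegression().fit(X_real, y_real)
scores = verifier.decision_function(X_synth) * y_synth_noisy
keep = scores > 0.0

# Train the final binary classifier on the real + pruned synthetic mix.
X_mix = np.vstack([X_real, X_synth[keep]])
y_mix = np.concatenate([y_real, y_synth_noisy[keep]])
clf = LogisticRegression().fit(X_mix, y_mix)

# Evaluate on fresh real test data.
y_test = rng.choice([-1, 1], size=2000)
X_test = y_test[:, None] * mu + rng.normal(size=(2000, d))
print("test accuracy:", clf.score(X_test, y_test))

Varying eps, the generator quality (the shrinkage of mu_synth), and the pruning threshold in this sketch mirrors the quantities the paper's theory tracks: label noise, generative-model quality, and verification strategy.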

Cite

Text

El Firdoussi et al. "Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory." International Conference on Learning Representations, 2025.

Markdown

[El Firdoussi et al. "Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/firdoussi2025iclr-maximizing/)

BibTeX

@inproceedings{firdoussi2025iclr-maximizing,
  title     = {{Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory}},
  author    = {El Firdoussi, Aymane and Seddik, Mohamed El Amine and Hayou, Soufiane and Alami, Reda and Alzubaidi, Ahmed and Hacid, Hakim},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/firdoussi2025iclr-maximizing/}
}