MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?

Abstract

Large foundation models face challenges in acquiring transferable, structured thinking abilities, especially when supervised with rigid templates or crowd-annotated instruction datasets. Unlike prior approaches, we focus on a thinking-centric data synthesis paradigm that enables models to evolve through self-generated, cognitively guided data. We propose MindGYM, a structured and scalable framework for question synthesis, composed of: (1) Cognitive Thinking Process Injection, which infuses high-level reasoning objectives to shape the model’s synthesis behavior; (2) Seed Single-Hop Question Synthesis, which generates atomic questions from diverse semantic types to encourage broader thinking; and (3) Challenging Multi-Hop QA Synthesis, which composes more complex multi-hop questions from the QA seeds for deeper reasoning. Detailed analysis shows that synthetic data generated by our method achieves 16.7% higher average quality and 67.91% lower quality variance than baseline sources, highlighting that both high-quality and self-contained data are essential for effective, thinking-oriented fine-tuning. MindGYM improves performance on six reasoning benchmarks, achieving gains of up to 16% on MathVision with only 400 data samples, and yields generalizable improvements across model sizes and architectures. MindGYM underscores the viability of self-challenging mechanisms in refining large model capabilities while minimizing human intervention and resource demands. Code and data are released to promote data-centric research into self-evolving foundation models driven by their internal reasoning capabilities.

Cite

Text

Xu et al. "MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?" Advances in Neural Information Processing Systems, 2025.

Markdown

[Xu et al. "MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?" Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xu2025neurips-mindgym/)

BibTeX

@inproceedings{xu2025neurips-mindgym,
  title     = {{MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?}},
  author    = {Xu, Zhe and Chen, Daoyuan and Ling, Zhenqing and Li, Yaliang and Shen, Ying},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xu2025neurips-mindgym/}
}