Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

Abstract

Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar feedback, for example, on math problems where one can verify correctness. To do so, we investigate a simple self-training method based on expectation-maximization, which we call ReST^EM, where we (1) generate samples from the model and filter them using binary feedback, (2) fine-tune the model on these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS coding benchmarks using PaLM-2 models, we find that ReST^EM scales favorably with model size and significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with feedback can reduce dependence on human-generated data.
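The three-step loop in the abstract (sample from the model, filter with binary feedback, fine-tune on what passes, repeat) can be sketched as a toy in Python. Everything below is illustrative: the `generate`, `verify`, and `finetune` callables and the weighted-answer "model" are stand-ins invented for this sketch, not the paper's actual implementation, which fine-tunes PaLM-2 models on verified samples.

```python
import random

def rest_em(generate, verify, finetune, model, num_iters=3, num_samples=32):
    """Toy sketch of the expectation-maximization self-training loop.

    generate(model)       -> one candidate solution (E-step sampling)
    verify(sample)        -> bool, binary feedback (e.g. answer checking)
    finetune(model, data) -> new model trained on filtered samples (M-step)
    All three callables are hypothetical placeholders.
    """
    for _ in range(num_iters):
        # E-step: sample candidate solutions from the current model,
        # keep only those that pass the binary verifier.
        samples = [generate(model) for _ in range(num_samples)]
        correct = [s for s in samples if verify(s)]
        if correct:
            # M-step: fine-tune on the verified samples only.
            model = finetune(model, correct)
    return model

# Toy instantiation: the "model" is an unnormalized sampling
# distribution over candidate answers to a single problem.
random.seed(0)
target = "42"
model = {"42": 1.0, "7": 3.0}

def generate(m):
    answers, weights = zip(*m.items())
    return random.choices(answers, weights=weights)[0]

def verify(sample):
    return sample == target  # binary feedback: answer checking

def finetune(m, data):
    new = dict(m)
    for s in data:
        new[s] += 1.0  # crude stand-in for a gradient update
    return new

trained = rest_em(generate, verify, finetune, model)
```

Each iteration shifts probability mass toward verified answers, so `trained["42"]` ends up larger than its initial weight while the wrong answer's weight is untouched; this mirrors (in miniature) why repeating the generate-filter-finetune cycle improves the model without any new human data.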

Cite

Text

Singh et al. "Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models." Transactions on Machine Learning Research, 2024.

Markdown

[Singh et al. "Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/singh2024tmlr-beyond/)

BibTeX

@article{singh2024tmlr-beyond,
  title     = {{Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models}},
  author    = {Singh, Avi and Co-Reyes, John D and Agarwal, Rishabh and Anand, Ankesh and Patil, Piyush and Garcia, Xavier and Liu, Peter J and Harrison, James and Lee, Jaehoon and Xu, Kelvin and Parisi, Aaron T and Kumar, Abhishek and Alemi, Alexander A and Rizkowsky, Alex and Nova, Azade and Adlam, Ben and Bohnet, Bernd and Elsayed, Gamaleldin Fathy and Sedghi, Hanie and Mordatch, Igor and Simpson, Isabelle and Gur, Izzeddin and Snoek, Jasper and Pennington, Jeffrey and Hron, Jiri and Kenealy, Kathleen and Swersky, Kevin and Mahajan, Kshiteej and Culp, Laura A and Xiao, Lechao and Bileschi, Maxwell and Constant, Noah and Novak, Roman and Liu, Rosanne and Warkentin, Tris and Bansal, Yamini and Dyer, Ethan and Neyshabur, Behnam and Sohl-Dickstein, Jascha and Fiedel, Noah},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/singh2024tmlr-beyond/}
}