Self-Improvement in Language Models: The Sharpening Mechanism

Abstract

Recent work in language modeling has raised the possibility of “self-improvement,” where an LLM evaluates and refines its own generations to achieve higher performance without external feedback. It is impossible for this self-improvement to create information that is not already in the model, so why should we expect that it will lead to improved capabilities? We offer a new theoretical perspective on the capabilities of self-improvement through a lens we refer to as “sharpening.” Motivated by the observation that language models are often better at verifying response quality than they are at generating correct responses, we formalize self-improvement as using the model itself as a verifier during post-training in order to ‘sharpen’ the model to one that places large mass on high-quality sequences, thereby amortizing the expensive inference-time computation of generating good sequences. We begin by introducing a new statistical framework for sharpening in which the learner has sample access to a pre-trained base policy. Then, we analyze two natural families of self-improvement algorithms based on SFT and RLHF. We find that (i) the SFT-based approach is minimax optimal whenever the initial model has sufficient coverage, but (ii) the RLHF-based approach can improve over SFT-based self-improvement by leveraging online exploration, bypassing the need for coverage. We view these findings as a starting point toward a foundational understanding that can guide the design and evaluation of self-improvement algorithms.
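The sharpening mechanism the abstract describes can be illustrated with a minimal toy sketch: draw several samples from a base policy, use the model's own verification signal to pick the best one, and collect those picks into a fine-tuning dataset. All names here (`base_policy`, `self_reward`, the three canned responses) are hypothetical stand-ins, not the paper's algorithm; real SFT-based sharpening would fine-tune an actual LLM on the selected sequences.

```python
import random

def base_policy(prompt, rng):
    # Toy stand-in for an LLM's stochastic decoder:
    # samples responses of varying quality.
    return rng.choice(["poor answer", "okay answer", "great answer"])

def self_reward(prompt, response):
    # Toy self-verifier: the model scores its own response.
    # The premise is that verification is easier than generation.
    return {"poor answer": 0.1, "okay answer": 0.5, "great answer": 0.9}[response]

def best_of_n(prompt, n, rng):
    # Inference-time sharpening: draw n samples from the base
    # policy and keep the one the model itself scores highest.
    samples = [base_policy(prompt, rng) for _ in range(n)]
    return max(samples, key=lambda r: self_reward(prompt, r))

def sft_sharpening_dataset(prompts, n, seed=0):
    # SFT-style self-improvement: distill best-of-n outputs into
    # a fine-tuning set, amortizing the inference-time search so
    # the sharpened model places more mass on high-quality sequences.
    rng = random.Random(seed)
    return [(p, best_of_n(p, n, rng)) for p in prompts]
```

As `n` grows, the distilled dataset concentrates on the highest-scoring responses the base policy can produce, which is exactly why the SFT-based approach needs the base policy to have sufficient coverage of good sequences in the first place.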

Cite

Text

Huang et al. "Self-Improvement in Language Models: The Sharpening Mechanism." NeurIPS 2024 Workshops: M3L, 2024.

Markdown

[Huang et al. "Self-Improvement in Language Models: The Sharpening Mechanism." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/huang2024neuripsw-selfimprovement/)

BibTeX

@inproceedings{huang2024neuripsw-selfimprovement,
  title     = {{Self-Improvement in Language Models: The Sharpening Mechanism}},
  author    = {Huang, Audrey and Block, Adam and Foster, Dylan J. and Rohatgi, Dhruv and Zhang, Cyril and Simchowitz, Max and Ash, Jordan T. and Krishnamurthy, Akshay},
  booktitle = {NeurIPS 2024 Workshops: M3L},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/huang2024neuripsw-selfimprovement/}
}