Efficient Knowledge Injection in LLMs via Self-Distillation

Abstract

In many practical applications, large language models (LLMs) need to acquire new knowledge not present in their pre-training data. Efficiently leveraging this knowledge usually relies on supervised fine-tuning or retrieval-augmented generation (RAG). Although RAG has emerged as the industry standard for knowledge injection, fine-tuning has not yet achieved comparable success. This paper proposes utilizing prompt distillation, a self-distillation-based method previously explored primarily for style alignment and instruction tuning, to internalize new factual knowledge from free-form documents. Unlike prior methods, our approach requires neither larger teacher models nor structured knowledge formats. Across multiple LLM sizes and model families, we show that prompt distillation outperforms standard supervised fine-tuning and can even surpass RAG. We analyze the key factors contributing to prompt distillation's effectiveness and examine how it scales.
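
The abstract does not spell out the training objective, but prompt distillation in this setting is commonly implemented by letting the same model play both roles: a frozen teacher that sees the new document in its prompt and a student that does not, with the student trained to match the teacher's next-token distributions on question-answer data about the document. The sketch below illustrates that idea only; it is not the paper's exact recipe, and the model name, prompt templates, data source, and hyperparameters are placeholders (in practice the question-answer pairs about the document would themselves have to come from somewhere, e.g. be generated by the model, but here they are assumed given).

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative prompt-distillation step (a sketch, not the paper's exact method).
model_name = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder; any causal LM
tok = AutoTokenizer.from_pretrained(model_name)
student = AutoModelForCausalLM.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name)  # frozen copy of the same model
teacher.eval()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def distill_step(document: str, question: str, answer: str) -> float:
    # Teacher is conditioned on the new document; the student is not.
    t_prompt = tok(f"{document}\n\nQ: {question}\nA:", return_tensors="pt").input_ids
    s_prompt = tok(f"Q: {question}\nA:", return_tensors="pt").input_ids
    ans = tok(answer, add_special_tokens=False, return_tensors="pt").input_ids
    n_ans = ans.shape[1]

    t_ids = torch.cat([t_prompt, ans], dim=1)
    s_ids = torch.cat([s_prompt, ans], dim=1)

    with torch.no_grad():
        # Logits at position i predict token i + 1, so the predictions for the
        # answer tokens occupy the n_ans positions just before the final one.
        t_logits = teacher(t_ids).logits[:, -n_ans - 1:-1, :]
    s_logits = student(s_ids).logits[:, -n_ans - 1:-1, :]

    # Train the student (no document in context) to match the teacher's
    # token distributions over the answer span (forward KL).
    loss = F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.log_softmax(t_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Consistent with the abstract, no larger teacher model or structured knowledge format appears in this formulation: the teacher is simply the same model given the free-form document in its prompt.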

Cite

Text

Kujanpää et al. "Efficient Knowledge Injection in LLMs via Self-Distillation." Transactions on Machine Learning Research, 2025.

Markdown

[Kujanpää et al. "Efficient Knowledge Injection in LLMs via Self-Distillation." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kujanpaa2025tmlr-efficient/)

BibTeX

@article{kujanpaa2025tmlr-efficient,
  title     = {{Efficient Knowledge Injection in LLMs via Self-Distillation}},
  author    = {Kujanpää, Kalle and Marttinen, Pekka and Valpola, Harri and Ilin, Alexander},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/kujanpaa2025tmlr-efficient/}
}