Towards Understanding Distilled Reasoning Models: A Representational Approach
Abstract
In this paper, we investigate how model distillation impacts the development of reasoning features in large language models (LLMs). To explore this, we train a crosscoder on Qwen-series models and their fine-tuned variants. Our results suggest that the crosscoder learns features corresponding to various types of reasoning, including self-reflection and computation verification. Moreover, we observe that distilled models contain unique reasoning feature directions, which can be used to steer the model into over-thinking or incisive-thinking modes. In particular, we analyze four specific reasoning categories: (a) self-reflection, (b) deductive reasoning, (c) alternative reasoning, and (d) contrastive reasoning. Finally, we examine the changes in feature geometry resulting from the distillation process and find indications that larger distilled models may develop more structured representations, which correlate with enhanced distillation performance. By providing insights into how distillation modifies the model, our study contributes to enhancing the transparency and reliability of AI systems.
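To make the crosscoder setup concrete, below is a minimal sketch of the kind of model the abstract describes: a shared sparse dictionary trained to jointly reconstruct paired activations from a base model and its distilled variant. This is not the authors' released code; the architecture follows the standard crosscoder formulation (per-model encoders and decoders over a shared latent space), and all dimensions and the L1 coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Crosscoder(nn.Module):
    """Minimal crosscoder sketch: per-model encoders/decoders over a
    shared sparse latent space, trained to reconstruct paired
    activations from a base model and its distilled variant.
    Dimensions here are illustrative, not the paper's settings."""

    def __init__(self, d_model: int = 1536, d_latent: int = 16384):
        super().__init__()
        self.enc_base = nn.Linear(d_model, d_latent)
        self.enc_dist = nn.Linear(d_model, d_latent)
        self.dec_base = nn.Linear(d_latent, d_model)
        self.dec_dist = nn.Linear(d_latent, d_model)

    def forward(self, act_base: torch.Tensor, act_dist: torch.Tensor):
        # Shared feature activations: the two encoders' pre-activations
        # are summed before the nonlinearity, so each latent indexes one
        # feature across both models.
        f = torch.relu(self.enc_base(act_base) + self.enc_dist(act_dist))
        return self.dec_base(f), self.dec_dist(f), f

def crosscoder_loss(model: Crosscoder,
                    act_base: torch.Tensor,
                    act_dist: torch.Tensor,
                    l1_coeff: float = 3e-4) -> torch.Tensor:
    """Reconstruction error on both models plus an L1 sparsity penalty,
    following the usual sparse-autoencoder objective. l1_coeff is an
    assumed hyperparameter, not taken from the paper."""
    rec_base, rec_dist, f = model(act_base, act_dist)
    mse = ((rec_base - act_base) ** 2).sum(-1) + ((rec_dist - act_dist) ** 2).sum(-1)
    return (mse + l1_coeff * f.abs().sum(-1)).mean()
```

Under this formulation, a column of `dec_dist.weight` gives a feature direction in the distilled model's activation space; the steering experiments the abstract mentions would, on this reading, amount to adding a scaled copy of such a direction to the residual stream at inference time (e.g. `h + alpha * model.dec_dist.weight[:, j]` for a reasoning feature `j`). The specific features and scales used are the paper's, not reproduced here.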
Cite
Text
Baek and Tegmark. "Towards Understanding Distilled Reasoning Models: A Representational Approach." ICLR 2025 Workshops: BuildingTrust, 2025.
Markdown
[Baek and Tegmark. "Towards Understanding Distilled Reasoning Models: A Representational Approach." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/baek2025iclrw-understanding/)
BibTeX
@inproceedings{baek2025iclrw-understanding,
title = {{Towards Understanding Distilled Reasoning Models: A Representational Approach}},
author = {Baek, David D. and Tegmark, Max},
booktitle = {ICLR 2025 Workshops: BuildingTrust},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/baek2025iclrw-understanding/}
}