Antidistillation Sampling

Abstract

Frontier models that generate extended reasoning traces inadvertently produce token sequences that can facilitate model distillation. Recognizing this vulnerability, model owners may seek sampling strategies that limit the effectiveness of distillation without compromising model performance. *Antidistillation sampling* provides exactly this capability. By strategically modifying a model's next-token probability distribution, antidistillation sampling poisons reasoning traces, rendering them significantly less effective for distillation while preserving the model's utility.
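The abstract describes the mechanism only at the level of adjusting the next-token distribution before sampling. The sketch below illustrates that general idea with a hypothetical per-token penalty; the `distillability_score` estimator, the weight `lam`, and the toy vocabulary are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of logits.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def antidistillation_step(logits, distillability_score, lam=1.0, rng=None):
    """Sample one token from a poisoned next-token distribution.

    logits: the teacher's original next-token logits, shape (vocab,)
    distillability_score: hypothetical per-token estimate of how much
        emitting that token would help a distilling student (a stand-in
        for whatever estimator the method actually uses)
    lam: trade-off weight; lam=0 recovers ordinary sampling, larger
        values poison the trace more aggressively
    """
    rng = rng or np.random.default_rng()
    # Down-weight tokens judged most useful to a would-be distiller.
    adjusted = logits - lam * distillability_score
    probs = softmax(adjusted)
    return rng.choice(len(logits), p=probs), probs

# Toy usage with a 5-token vocabulary.
rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.5, 0.0, -1.0])   # teacher's preferences
score = np.array([0.1, 2.0, 0.0, 0.3, 0.0])     # hypothetical distillability
token, probs = antidistillation_step(logits, score, lam=1.0, rng=rng)
print("sampled token:", token)
print("poisoned distribution:", np.round(probs, 3))
```

Because the adjustment reduces to the teacher's original distribution when `lam=0`, the trade-off the abstract mentions, between preserving utility and degrading distillation, is controlled explicitly by that single weight in this sketch.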

Cite

Text

Savani et al. "Antidistillation Sampling." Advances in Neural Information Processing Systems, 2025.

Markdown

[Savani et al. "Antidistillation Sampling." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/savani2025neurips-antidistillation/)

BibTeX

@inproceedings{savani2025neurips-antidistillation,
  title     = {{Antidistillation Sampling}},
  author    = {Savani, Yash and Trockman, Asher and Feng, Zhili and Xu, Yixuan Even and Schwarzschild, Avi and Robey, Alexander and Finzi, Marc Anton and Kolter, J Zico},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/savani2025neurips-antidistillation/}
}