Distilling Datasets into Less than One Image
Abstract
Dataset distillation aims to compress a dataset into a much smaller one so that a model trained on the distilled dataset achieves high accuracy. Current methods frame this as maximizing the distilled classification accuracy for a budget of K distilled images-per-class, where K is a positive integer. In this paper, we push the boundaries of dataset distillation, compressing the dataset into less than an image-per-class. It is important to realize that the meaningful quantity is not the number of distilled images-per-class but the number of distilled pixels-per-dataset. We therefore propose Poster Dataset Distillation (PoDD), a new approach that distills the entire original dataset into a single poster. The poster approach motivates new technical solutions for creating training images and learnable labels. Our method can achieve comparable or better performance with less than an image-per-class compared to existing methods that use one image-per-class. Specifically, our method establishes a new state-of-the-art performance on CIFAR-10, CIFAR-100, and CUB200 on the well-established 1 image-per-class (IPC) benchmark, while using as little as 0.3 images-per-class.
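To make the pixels-per-dataset framing concrete, below is a minimal, hypothetical sketch (not the paper's implementation): a single learnable poster whose total pixel count corresponds to 0.3 images-per-class, from which overlapping crops with learnable soft labels are taken as training images. All names, shapes, and the crop stride are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' code.
# Shows how a single learnable "poster" can hold fewer pixels than one
# image-per-class while still yielding many per-class training crops.
import torch

NUM_CLASSES = 10   # e.g. CIFAR-10
IMG_HW = 32        # height/width of a standard training image
IPC_BUDGET = 0.3   # target budget of images-per-class (< 1)

# Pixel budget for the whole dataset: classes * budget * pixels-per-image.
total_pixels = int(NUM_CLASSES * IPC_BUDGET * IMG_HW * IMG_HW)
poster_hw = int(total_pixels ** 0.5)  # square poster with ~that many pixels

# The poster and the per-crop soft labels are the only learnable parameters.
poster = torch.randn(3, poster_hw, poster_hw, requires_grad=True)

def extract_crops(poster, crop_hw=IMG_HW, stride=16):
    """Slide a window over the poster to produce overlapping training crops."""
    crops = poster.unfold(1, crop_hw, stride).unfold(2, crop_hw, stride)
    # (C, nH, nW, crop, crop) -> (nH * nW, C, crop, crop)
    return crops.permute(1, 2, 0, 3, 4).reshape(-1, 3, crop_hw, crop_hw)

crops = extract_crops(poster)
# One learnable soft-label vector per crop, optimized jointly with the poster.
soft_labels = torch.randn(crops.shape[0], NUM_CLASSES, requires_grad=True)

print(f"poster: {poster_hw}x{poster_hw} px "
      f"(~{IPC_BUDGET} images-per-class), {crops.shape[0]} overlapping crops")
```

The point of the sketch is only the accounting: the poster's pixel count, not the number of crops, is what the distillation budget constrains, so overlapping crops let the same pixels serve several classes.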
Cite
Text
Shul et al. "Distilling Datasets into Less than One Image." Transactions on Machine Learning Research, 2025.
Markdown
[Shul et al. "Distilling Datasets into Less than One Image." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/shul2025tmlr-distilling/)
BibTeX
@article{shul2025tmlr-distilling,
title = {{Distilling Datasets into Less than One Image}},
author = {Shul, Asaf and Horwitz, Eliahu and Hoshen, Yedid},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/shul2025tmlr-distilling/}
}