Scaling up Dataset Distillation to ImageNet-1k with Constant Memory

Abstract

Dataset distillation is a newly emerging area that aims to distill large datasets into much smaller, highly informative synthetic ones to accelerate training and reduce storage. Among various dataset distillation methods, trajectory-matching-based methods (MTT) have achieved SOTA performance on many tasks, e.g., on CIFAR-10/100. However, due to the exorbitant memory consumption of unrolling optimization through SGD steps, MTT fails to scale to large datasets such as ImageNet-1K. Can we scale this SOTA method to ImageNet-1K, and does its effectiveness on CIFAR transfer to ImageNet-1K? To answer these questions, we first propose a procedure to exactly compute the unrolled gradient with constant memory complexity, which allows us to scale MTT to ImageNet-1K seamlessly with a $\sim 6$x reduction in memory footprint. We further discover that it is challenging for MTT to handle datasets with a large number of classes, and we propose a novel soft label assignment that drastically improves its convergence. The resulting algorithm sets a new SOTA on ImageNet-1K: we can scale up to 50 IPCs (Images Per Class) on ImageNet-1K on a single GPU (all previous methods can only scale to 2 IPCs on ImageNet-1K), yielding the best accuracy (only a 5.9% accuracy drop against full-dataset training) while using only 4.2% of the data points, an 18.2% absolute gain over the prior SOTA.
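
To make the memory bottleneck concrete, below is a minimal PyTorch sketch of the standard trajectory-matching (MTT) objective that the paper builds on: unroll SGD on the synthetic data from an expert checkpoint and match the resulting parameters against a later expert checkpoint. Naive backpropagation through this loss retains the graph of every inner step, which is the memory cost that grows with the number of unrolled steps and that the paper's exact constant-memory gradient computation removes. All names here (`student_net`, `x_syn`, `syn_lr`, `num_inner_steps`, the functional parameter interface) are illustrative assumptions, not identifiers from the paper's code.

```python
import torch
import torch.nn.functional as F

def trajectory_matching_loss(student_net, x_syn, y_syn,
                             theta_start, theta_target,
                             syn_lr, num_inner_steps):
    """Standard MTT objective (sketch): train on synthetic data from an
    expert checkpoint and match the unrolled parameters to a later one."""
    theta = theta_start.clone()
    for _ in range(num_inner_steps):
        logits = student_net(x_syn, theta)          # assumed functional forward pass
        inner_loss = F.cross_entropy(logits, y_syn)
        # create_graph=True keeps each inner step's graph alive so the outer
        # loss can be backpropagated to x_syn; memory therefore grows linearly
        # with num_inner_steps, which is what the paper's constant-memory
        # exact gradient computation avoids.
        (grad,) = torch.autograd.grad(inner_loss, theta, create_graph=True)
        theta = theta - syn_lr * grad
    # Parameter-matching loss, normalized by the expert's own displacement.
    return ((theta - theta_target).pow(2).sum()
            / (theta_start - theta_target).pow(2).sum())
```

Per the abstract, the paper's contribution is to compute the gradient of this objective with respect to the synthetic data exactly while keeping memory constant in the number of unrolled steps, and to assign soft labels to the synthetic images instead of hard one-hot targets; the exact form of both is given in the paper rather than in this sketch.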

Cite

Text

Cui et al. "Scaling up Dataset Distillation to ImageNet-1k with Constant Memory." International Conference on Machine Learning, 2023.

Markdown

[Cui et al. "Scaling up Dataset Distillation to ImageNet-1k with Constant Memory." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/cui2023icml-scaling/)

BibTeX

@inproceedings{cui2023icml-scaling,
  title     = {{Scaling up Dataset Distillation to ImageNet-1k with Constant Memory}},
  author    = {Cui, Justin and Wang, Ruochen and Si, Si and Hsieh, Cho-Jui},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {6565--6590},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/cui2023icml-scaling/}
}