Dataset Distillation with Convexified Implicit Gradients

Abstract

We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG) that substantially improves the state-of-the-art. To this end, we first formulate dataset distillation as a bi-level optimization problem. Then, we show how implicit gradients can be effectively used to compute meta-gradient updates. We further equip the algorithm with a convexified approximation that corresponds to learning on top of a frozen finite-width neural tangent kernel. Finally, we reduce the bias in implicit gradients by parameterizing the neural network so that the final-layer parameters can be computed analytically given the body parameters. RCIG establishes the new state-of-the-art on a diverse series of dataset distillation tasks. Notably, with one distilled image per class, RCIG achieves on average a 108% improvement over the previous state-of-the-art distillation algorithm on resized ImageNet. Similarly, we observed a 66% gain over SOTA on Tiny-ImageNet and 37% on CIFAR-100.
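To make the analytical final-layer idea concrete, below is a minimal JAX sketch, not the authors' implementation: the network body is treated as a frozen feature map, the final linear layer is fit on the distilled set in closed form (ridge regression), and the real-data loss is differentiated through that closed-form solution to obtain a meta-gradient for the distilled images. Names such as feature_fn and lam, and the toy shapes, are assumptions for illustration; the full method additionally uses implicit gradients and the frozen finite-width NTK approximation described above.

import jax
import jax.numpy as jnp

def feature_fn(params, x):
    # Stand-in for a frozen network body: a fixed projection + ReLU.
    return jax.nn.relu(x.reshape(x.shape[0], -1) @ params["W"])

def fit_head(params, x_syn, y_syn, lam=1e-3):
    # Analytical final-layer weights given the body: ridge regression
    # on the distilled (synthetic) set.
    phi = feature_fn(params, x_syn)                   # (n_syn, d)
    gram = phi.T @ phi + lam * jnp.eye(phi.shape[1])  # (d, d)
    return jnp.linalg.solve(gram, phi.T @ y_syn)      # (d, n_classes)

def outer_loss(x_syn, y_syn, params, x_real, y_real):
    # Outer objective: loss of the analytically fitted head on real data.
    head = fit_head(params, x_syn, y_syn)
    logits = feature_fn(params, x_real) @ head
    return -jnp.mean(jnp.sum(y_real * jax.nn.log_softmax(logits), axis=-1))

# Toy data; in practice x_syn would be the learnable distilled images.
key = jax.random.PRNGKey(0)
params = {"W": jax.random.normal(key, (64, 128)) / 8.0}
x_syn = jax.random.normal(key, (10, 64))
y_syn = jnp.eye(10)
x_real = jax.random.normal(key, (32, 64))
y_real = jax.nn.one_hot(jax.random.randint(key, (32,), 0, 10), 10)

# Meta-gradient w.r.t. the distilled images, obtained by differentiating
# through the closed-form inner solution.
meta_grad = jax.grad(outer_loss, argnums=0)(x_syn, y_syn, params, x_real, y_real)
print(meta_grad.shape)  # (10, 64)

Because the inner problem here is solved in closed form, the meta-gradient comes from ordinary automatic differentiation; in the general bi-level setting the paper instead uses implicit differentiation of the inner optimality conditions.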

Cite

Text

Loo et al. "Dataset Distillation with Convexified Implicit Gradients." International Conference on Machine Learning, 2023.

Markdown

[Loo et al. "Dataset Distillation with Convexified Implicit Gradients." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/loo2023icml-dataset/)

BibTeX

@inproceedings{loo2023icml-dataset,
  title     = {{Dataset Distillation with Convexified Implicit Gradients}},
  author    = {Loo, Noel and Hasani, Ramin and Lechner, Mathias and Rus, Daniela},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {22649--22674},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/loo2023icml-dataset/}
}