Optimizing Dataset Distillation Using DATM: Adjusting Learning Rate and Upper Bound

Abstract

In this technical report, we describe our solution for dataset distillation, which placed 4th in the ECCV 2024 First Dataset Distillation Workshop. Dataset distillation involves creating a small set of synthetic images that can achieve performance comparable to training on the entire dataset. The challenge requires us to generate 10 synthetic images per class for each dataset, using 50K CIFAR-100 images and 100K Tiny ImageNet images as training data. We adopt the DATM [4] method, a trajectory matching-based technique, and make several modifications and fine-tuning adjustments to ensure it performs properly within our constrained GPU environment. The synthetic images generated from CIFAR-100 achieve a test accuracy of 38.72%, while those from Tiny ImageNet achieve a test accuracy of 15.79%.
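For readers unfamiliar with trajectory matching, the objective that DATM builds on can be sketched as follows. This is a toy illustration with a linear model standing in for a real network; all names and hyperparameters here are illustrative assumptions, not the authors' code. A student is trained on the synthetic set for a few steps starting from an expert checkpoint, and the resulting parameters are compared against a later expert checkpoint via a normalized squared distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: a linear model w on 8-dim inputs; "expert" checkpoints
# come from training on the full (here, synthetic-regression) dataset.
def grad(w, X, y):
    return X.T @ (X @ w - y) / len(X)  # MSE gradient

X_full = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y_full = X_full @ w_true

# Record an expert trajectory theta*_0, theta*_1, ... on the full data.
expert = [np.zeros(8)]
for _ in range(20):
    expert.append(expert[-1] - 0.1 * grad(expert[-1], X_full, y_full))

# The distilled set (10 "images"); in DATM these would be optimized,
# here they are fixed random samples for illustration.
X_syn = rng.normal(size=(10, 8))
y_syn = X_syn @ w_true

def matching_loss(t, m, n):
    """Train a student on the synthetic set for n steps starting from
    expert checkpoint t, then measure the normalized squared distance
    to the expert checkpoint at step t + m (trajectory-matching style)."""
    w = expert[t].copy()
    for _ in range(n):
        w = w - 0.1 * grad(w, X_syn, y_syn)
    num = np.sum((w - expert[t + m]) ** 2)
    den = np.sum((expert[t] - expert[t + m]) ** 2) + 1e-12
    return num / den

loss = matching_loss(t=0, m=10, n=5)
```

In the full method this loss is backpropagated through the student's update steps to optimize the synthetic images themselves; DATM additionally controls which segment of the expert trajectory (the "upper bound" on the start epoch) is matched, which is one of the knobs tuned in this report.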

Cite

Text

Kim et al. "Optimizing Dataset Distillation Using DATM: Adjusting Learning Rate and Upper Bound." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-93806-1_8

Markdown

[Kim et al. "Optimizing Dataset Distillation Using DATM: Adjusting Learning Rate and Upper Bound." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/kim2024eccvw-optimizing/) doi:10.1007/978-3-031-93806-1_8

BibTeX

@inproceedings{kim2024eccvw-optimizing,
  title     = {{Optimizing Dataset Distillation Using DATM: Adjusting Learning Rate and Upper Bound}},
  author    = {Kim, Minjun and Cho, Junhee and Kwon, Junseok},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {95--101},
  doi       = {10.1007/978-3-031-93806-1_8},
  url       = {https://mlanthology.org/eccvw/2024/kim2024eccvw-optimizing/}
}