Effective Data Distillation for Tabular Datasets (Student Abstract)

Abstract

Data distillation is a technique for reducing a large dataset into a much smaller one, such that a model trained on the distilled dataset performs comparably to a model trained on the full dataset. Past work has examined this approach for image datasets, focusing on neural networks (NNs) as the target models. However, tabular datasets pose challenges not seen in images: a sample in a tabular dataset is a one-dimensional vector, unlike the two- (or three-) dimensional pixel grid of an image, and non-NN models such as XGBoost can often outperform NN-based models. Our contribution is two-fold: 1) we show that data distillation methods designed for images do not translate directly to tabular data; 2) we propose a new distillation method that consistently outperforms the baseline across multiple models, including non-NN models such as XGBoost.
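The sketch below illustrates the generic evaluation protocol the abstract describes (distill, train on the small set, compare against training on the full set) on an example tabular task. It is only a minimal illustration under stated assumptions: the per-class k-means "distillation" is a naive stand-in baseline, not the method proposed in the paper, and the dataset, model, and sizes are placeholder choices.

# Minimal sketch of the data-distillation evaluation protocol: build a small
# synthetic training set, fit the same model on it and on the full data, and
# compare test accuracy. The per-class k-means step is a naive stand-in for a
# distillation method, NOT the approach proposed in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # any tabular model (NN or non-NN) could be used here

def kmeans_distill(X, y, per_class=10, seed=0):
    """Replace each class with `per_class` k-means centroids (hypothetical baseline)."""
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xc)
        Xs.append(km.cluster_centers_)
        ys.append(np.full(k, c))
    return np.vstack(Xs), np.concatenate(ys)

X, y = load_breast_cancer(return_X_y=True)  # placeholder tabular dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Model trained on the full training data.
full_model = XGBClassifier(n_estimators=200).fit(X_tr, y_tr)

# Model trained on the much smaller distilled set.
X_d, y_d = kmeans_distill(X_tr, y_tr, per_class=10)
distilled_model = XGBClassifier(n_estimators=200).fit(X_d, y_d)

print("full-data accuracy:     ", accuracy_score(y_te, full_model.predict(X_te)))
print("distilled-data accuracy:", accuracy_score(y_te, distilled_model.predict(X_te)))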

Cite

Text

Kang et al. "Effective Data Distillation for Tabular Datasets (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30460

Markdown

[Kang et al. "Effective Data Distillation for Tabular Datasets (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kang2024aaai-effective/) doi:10.1609/AAAI.V38I21.30460

BibTeX

@inproceedings{kang2024aaai-effective,
  title     = {{Effective Data Distillation for Tabular Datasets (Student Abstract)}},
  author    = {Kang, Inwon and Ram, Parikshit and Zhou, Yi and Samulowitz, Horst and Seneviratne, Oshani},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23533--23534},
  doi       = {10.1609/AAAI.V38I21.30460},
  url       = {https://mlanthology.org/aaai/2024/kang2024aaai-effective/}
}