InfoBatch: Lossless Training Speed up by Unbiased Dynamic Data Pruning

Abstract

Data pruning aims to obtain lossless performance with less overall cost. A common approach is to filter out samples that contribute less to training. This can bias the gradient expectation relative to training on the original data. To solve this problem, we propose InfoBatch, a novel framework that achieves lossless training acceleration by unbiased dynamic data pruning. Specifically, InfoBatch randomly prunes a portion of less informative samples based on the loss distribution and rescales the gradients of the remaining samples to approximate the original gradient. As a plug-and-play, architecture-agnostic framework, InfoBatch consistently obtains lossless training results on classification, semantic segmentation, vision pretraining, and instruction fine-tuning tasks. On CIFAR10/100, ImageNet-1K, and ADE20K, InfoBatch losslessly saves 40% of the overall cost. For pretraining MAE and a diffusion model, InfoBatch saves 24.8% and 27% of the cost, respectively. For LLaMA instruction fine-tuning, combining InfoBatch with the recent coreset selection method DQ achieves a 10× speedup. Our results encourage further exploration of the data-efficiency aspect of large-model training. Code is publicly available at NUS-HPC-AI-Lab/InfoBatch.
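To make the prune-and-rescale step concrete, here is a minimal NumPy sketch of the sampling rule the abstract describes: samples whose loss falls below the mean are dropped with some probability, and the surviving low-loss samples are up-weighted so the expected gradient matches full-data training. The names `select_and_rescale` and `prune_prob` are illustrative assumptions, not the repository's API; see NUS-HPC-AI-Lab/InfoBatch for the actual PyTorch implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_and_rescale(scores, prune_prob=0.5):
    """Return kept sample indices and per-sample loss weights for one epoch.

    scores: each sample's loss from the previous epoch (lower = less informative).
    Samples scoring below the mean are pruned with probability `prune_prob`;
    survivors of that group are weighted by 1 / (1 - prune_prob), so each
    pruned-candidate sample still contributes its gradient in expectation:
    (1 - prune_prob) * (1 / (1 - prune_prob)) * g = g.
    """
    scores = np.asarray(scores, dtype=np.float64)
    below_mean = scores < scores.mean()
    # Randomly drop a fraction of the low-loss (well-learned) samples.
    dropped = below_mean & (rng.random(scores.shape) < prune_prob)
    kept = ~dropped
    weights = np.ones_like(scores)
    weights[below_mean] = 1.0 / (1.0 - prune_prob)  # rescale the survivors
    return np.flatnonzero(kept), weights[kept]

# Usage sketch: weight each kept sample's loss before backpropagation,
# e.g. loss = (batch_weights * per_sample_loss).mean() in a training loop.
last_epoch_losses = rng.random(10)
idx, w = select_and_rescale(last_epoch_losses)
print(idx, w)
```

The rescaling is what makes the pruning unbiased: without it, down-sampling low-loss examples would systematically shrink their share of the gradient. (The paper also anneals back to the full dataset near the end of training, which this sketch omits.)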

Cite

Text

Qin et al. "InfoBatch: Lossless Training Speed up by Unbiased Dynamic Data Pruning." International Conference on Learning Representations, 2024.

Markdown

[Qin et al. "InfoBatch: Lossless Training Speed up by Unbiased Dynamic Data Pruning." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/qin2024iclr-infobatch/)

BibTeX

@inproceedings{qin2024iclr-infobatch,
  title     = {{InfoBatch: Lossless Training Speed up by Unbiased Dynamic Data Pruning}},
  author    = {Qin, Ziheng and Wang, Kai and Zheng, Zangwei and Gu, Jianyang and Peng, Xiangyu and Xu, Zhaopan and Zhou, Daquan and Shang, Lei and Sun, Baigui and Xie, Xuansong and You, Yang},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/qin2024iclr-infobatch/}
}