Efficient NLP Model Finetuning via Multistage Data Filtering

Abstract

As model finetuning is central to modern NLP, we set out to maximize its efficiency. Motivated by redundancy in training examples and the sheer sizes of pretrained models, we exploit a key opportunity: training only on important data. To this end, we filter training examples in a streaming fashion, in tandem with training the target model. Our key techniques are two: (1) automatically determining a training loss threshold for skipping backward training passes; (2) running a meta predictor for further skipping forward training passes. We integrate the above techniques in a holistic, three-stage training process. On a diverse set of benchmarks, our method reduces the required training examples by up to 5.3× and training time by up to 6.8×, while seeing only minor accuracy degradation. Our method is effective even when training for a single epoch, where each training example is encountered only once. It is simple to implement and compatible with existing finetuning techniques. Code is available at: https://github.com/xo28/efficient-NLP-multistage-training
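To make the two filtering techniques concrete, below is a minimal, hypothetical sketch of how loss-threshold backward-pass skipping and a meta predictor for forward-pass skipping could be combined in a streaming training loop. The threshold update rule (a running quantile), the meta-predictor interface, and the names `skip_prob_cutoff`, `threshold_quantile`, and `window` are illustrative assumptions, not the paper's exact algorithm; see the linked repository for the authors' implementation.

```python
import torch


def finetune_with_filtering(model, meta_predictor, optimizer, loss_fn,
                            dataloader, skip_prob_cutoff=0.9,
                            threshold_quantile=0.5, window=512):
    recent_losses = []      # sliding window of observed training losses
    loss_threshold = 0.0    # examples below this loss skip the backward pass

    for batch, labels in dataloader:
        # Stage 1 (assumed): a cheap meta predictor estimates how likely the
        # batch is to be skipped; confidently unimportant batches also skip
        # the expensive forward pass of the target model.
        with torch.no_grad():
            skip_prob = meta_predictor(batch)
        if skip_prob.mean().item() > skip_prob_cutoff:
            continue

        # Stage 2: forward pass on the target model.
        logits = model(batch)
        loss = loss_fn(logits, labels)

        # Automatically adjust the loss threshold from recent losses
        # (here: a running quantile over a sliding window, an assumption).
        recent_losses.append(loss.item())
        recent_losses = recent_losses[-window:]
        loss_threshold = torch.tensor(recent_losses).quantile(threshold_quantile).item()

        # Stage 3: skip the backward pass for low-loss (already well-learned) batches.
        if loss.item() < loss_threshold:
            continue
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```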

Cite

Text

Ouyang et al. "Efficient NLP Model Finetuning via Multistage Data Filtering." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/455

Markdown

[Ouyang et al. "Efficient NLP Model Finetuning via Multistage Data Filtering." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/ouyang2023ijcai-efficient/) doi:10.24963/IJCAI.2023/455

BibTeX

@inproceedings{ouyang2023ijcai-efficient,
  title     = {{Efficient NLP Model Finetuning via Multistage Data Filtering}},
  author    = {Ouyang, Xu and Ansari, Shahina Mohd Azam and Lin, Felix Xiaozhu and Ji, Yangfeng},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {4091--4099},
  doi       = {10.24963/IJCAI.2023/455},
  url       = {https://mlanthology.org/ijcai/2023/ouyang2023ijcai-efficient/}
}