Prototypical Fine-Tuning: Towards Robust Performance Under Varying Data Sizes

Abstract

In this paper, we move towards combining large parametric models with non-parametric prototypical networks. We propose prototypical fine-tuning, a novel prototypical framework for fine-tuning pretrained language models (LMs), which automatically learns a bias to improve predictive performance for varying data sizes, especially in low-resource settings. Our prototypical fine-tuning approach can automatically adjust the model capacity according to the number of data points and the model's inherent attributes. Moreover, we propose four principles for effective prototypical fine-tuning towards the optimal solution. Experimental results across various datasets show that our work achieves significant performance improvements under various low-resource settings, as well as comparable and usually better performance in high-resource scenarios.
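For readers unfamiliar with the non-parametric side of the combination, the sketch below illustrates a generic prototypical classification head over LM embeddings: class prototypes are the mean embeddings of each class, and queries are scored by distance to the prototypes. This is only an illustrative sketch under assumed choices (mean pooling, squared Euclidean distance), not the authors' prototypical fine-tuning method or its learned bias.

```python
# Minimal sketch of a prototypical classification head over LM embeddings.
# Illustrative only -- encoder, pooling, and distance metric are assumptions,
# not the paper's implementation.
import torch


def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Average the embeddings of each class to obtain one prototype per class.

    Assumes every class in [0, num_classes) appears at least once in `labels`.
    """
    dim = embeddings.size(-1)
    prototypes = torch.zeros(num_classes, dim, device=embeddings.device)
    for c in range(num_classes):
        prototypes[c] = embeddings[labels == c].mean(dim=0)
    return prototypes


def prototype_logits(queries: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Score each query by negative squared Euclidean distance to every prototype."""
    return -torch.cdist(queries, prototypes).pow(2)


# Usage: encode texts with a pretrained LM (e.g. via the transformers library),
# pool to one vector per example, then classify each query by nearest prototype:
#   protos = class_prototypes(support_emb, support_labels, num_classes)
#   logits = prototype_logits(query_emb, protos)
```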

Cite

Text

Jin et al. "Prototypical Fine-Tuning: Towards Robust Performance Under Varying Data Sizes." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I11.26524

Markdown

[Jin et al. "Prototypical Fine-Tuning: Towards Robust Performance Under Varying Data Sizes." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/jin2023aaai-prototypical/) doi:10.1609/AAAI.V37I11.26524

BibTeX

@inproceedings{jin2023aaai-prototypical,
  title     = {{Prototypical Fine-Tuning: Towards Robust Performance Under Varying Data Sizes}},
  author    = {Jin, Yiqiao and Wang, Xiting and Hao, Yaru and Sun, Yizhou and Xie, Xing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {12968--12976},
  doi       = {10.1609/AAAI.V37I11.26524},
  url       = {https://mlanthology.org/aaai/2023/jin2023aaai-prototypical/}
}