Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation
Abstract
Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, generation performance is largely restricted by the amount of labeled data in downstream tasks, particularly in data-to-text generation. Existing works mostly utilize abundant unlabeled structured data to conduct unsupervised pre-training for task adaptation, but this fails to model the complex relationship between source structured data and target texts. Thus, we introduce self-training as a better few-shot learner than task-adaptive pre-training: it explicitly captures this relationship via pseudo-labeled data generated by the pre-trained model. To alleviate the side effects of low-quality pseudo-labeled data during self-training, we propose a novel method called Curriculum-Based Self-Training (CBST) that effectively leverages unlabeled data in a rearranged order determined by the difficulty of text generation. Experimental results show that our method outperforms fine-tuning and task-adaptive pre-training methods, achieving state-of-the-art performance in the few-shot setting of data-to-text generation.
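The curriculum-based self-training loop sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical sketch, not the paper's implementation: `generate`, `difficulty`, and `fine_tune` are stand-ins for the pre-trained generator, the difficulty scorer, and a training step, and the curriculum simply grows the pseudo-labeled pool from easy to hard across stages.

```python
def generate(model, source):
    # Stand-in: a real system would decode target text from structured data.
    return f"{model}:{source}"

def difficulty(source, text):
    # Stand-in difficulty measure; the paper orders unlabeled examples
    # by the difficulty of generating their text.
    return len(source)

def fine_tune(model, batch):
    # Stand-in training step: returns an "updated" model identifier.
    return f"{model}+{len(batch)}"

def cbst(model, labeled, unlabeled, n_stages=2):
    """Self-train in curriculum order: pseudo-label the unlabeled data,
    sort by difficulty, and feed it to the model easy-to-hard."""
    pseudo = [(x, generate(model, x)) for x in unlabeled]
    pseudo.sort(key=lambda pair: difficulty(*pair))
    stage_size = max(1, len(pseudo) // n_stages)
    for stage in range(n_stages):
        # Each stage adds the next slice of harder pseudo-labeled examples.
        batch = labeled + pseudo[: (stage + 1) * stage_size]
        model = fine_tune(model, batch)
        # Re-label with the improved model before the next stage.
        pseudo = [(x, generate(model, x)) for x, _ in pseudo]
        pseudo.sort(key=lambda pair: difficulty(*pair))
    return model

final = cbst("m0", [("a", "A")], ["bb", "c", "dddd"])
```

Re-generating pseudo-labels after each stage lets later (harder) examples benefit from the model already trained on easier ones, which is the intuition behind ordering by generation difficulty.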
Cite
Text
Ke et al. "Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/580
Markdown
[Ke et al. "Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/ke2022ijcai-curriculum/) doi:10.24963/IJCAI.2022/580
BibTeX
@inproceedings{ke2022ijcai-curriculum,
title = {{Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation}},
author = {Ke, Pei and Ji, Haozhe and Yang, Zhenyu and Huang, Yi and Feng, Junlan and Zhu, Xiaoyan and Huang, Minlie},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {4178--4184},
doi = {10.24963/IJCAI.2022/580},
url = {https://mlanthology.org/ijcai/2022/ke2022ijcai-curriculum/}
}