Neighborhood-Regularized Self-Training for Learning with Few Labels
Abstract
Training deep neural networks (DNNs) with limited supervision has been a popular research topic, as it can significantly alleviate the annotation burden. Self-training has been successfully applied to semi-supervised learning tasks, but one drawback is that it is vulnerable to label noise from incorrect pseudo labels. Inspired by the fact that samples with similar labels tend to share similar representations, we develop a neighborhood-based sample selection approach to tackle the issue of noisy pseudo labels. We further stabilize self-training by aggregating the predictions from different rounds during sample selection. Experiments on eight tasks show that our proposed method outperforms the strongest self-training baseline with average performance gains of 1.83% on text datasets and 2.51% on graph datasets. Further analysis demonstrates that our data selection strategy reduces the noise of pseudo labels by 36.8% and saves 57.3% of the time compared with the best baseline. Our code and appendices will be uploaded to: https://github.com/ritaranx/NeST.
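To make the idea concrete, here is a minimal sketch of neighborhood-based pseudo-label selection in the spirit the abstract describes: a pseudo-labeled sample is kept only when the labels of its nearest labeled neighbors in embedding space largely agree with its pseudo label. All function names, the cosine-similarity neighborhood, and the agreement threshold are illustrative assumptions, not the paper's exact NeST algorithm.

```python
import numpy as np

def select_pseudo_labels(unlabeled_emb, pseudo_labels,
                         labeled_emb, labels, k=5, threshold=0.8):
    """Illustrative sketch (not the paper's exact method): keep each
    pseudo-labeled sample whose k nearest labeled neighbors, by cosine
    similarity, mostly agree with its pseudo label."""
    # Normalize rows so dot products equal cosine similarities.
    u = unlabeled_emb / np.linalg.norm(unlabeled_emb, axis=1, keepdims=True)
    l = labeled_emb / np.linalg.norm(labeled_emb, axis=1, keepdims=True)
    sims = u @ l.T                          # (num_unlabeled, num_labeled)
    nn = np.argsort(-sims, axis=1)[:, :k]   # k nearest labeled neighbors

    selected = []
    for i, neighbors in enumerate(nn):
        # Fraction of neighbors whose gold label matches the pseudo label.
        agreement = np.mean(labels[neighbors] == pseudo_labels[i])
        if agreement >= threshold:          # neighborhood supports the label
            selected.append(i)
    return selected
```

Samples whose neighborhoods disagree with their pseudo labels are filtered out before the next self-training round, which is how this family of methods reduces pseudo-label noise.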
Cite
Text

Xu et al. "Neighborhood-Regularized Self-Training for Learning with Few Labels." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I9.26260

Markdown

[Xu et al. "Neighborhood-Regularized Self-Training for Learning with Few Labels." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/xu2023aaai-neighborhood/) doi:10.1609/AAAI.V37I9.26260

BibTeX
@inproceedings{xu2023aaai-neighborhood,
title = {{Neighborhood-Regularized Self-Training for Learning with Few Labels}},
author = {Xu, Ran and Yu, Yue and Cui, Hejie and Kan, Xuan and Zhu, Yanqiao and Ho, Joyce C. and Zhang, Chao and Yang, Carl},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
  pages = {10611--10619},
doi = {10.1609/AAAI.V37I9.26260},
url = {https://mlanthology.org/aaai/2023/xu2023aaai-neighborhood/}
}