How Unlabeled Data Improve Generalization in Self-Training? A One-Hidden-Layer Theoretical Analysis

Abstract

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited. Despite empirical successes, its theoretical characterization remains elusive. To the best of our knowledge, this work establishes the first theoretical analysis for the known iterative self-training paradigm and formally proves the benefits of unlabeled data in both training convergence and generalization ability. To make our theoretical analysis feasible, we focus on the case of one-hidden-layer neural networks. Even so, the theoretical understanding of iterative self-training is non-trivial for a shallow neural network. One of the key challenges is that existing neural network landscape analysis built upon supervised learning no longer holds in the (semi-supervised) self-training paradigm. We address this challenge and prove that iterative self-training converges linearly, with both the convergence rate and the generalization accuracy improved on the order of $1/\sqrt{M}$, where $M$ is the number of unlabeled samples. Extensive experiments, ranging from shallow to deep neural networks, are also provided to justify the correctness of our established theoretical insights on self-training.
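As a rough illustration of the iterative self-training paradigm the abstract analyzes, the sketch below alternates between pseudo-labeling a large unlabeled pool with the current one-hidden-layer network and retraining on the labeled plus pseudo-labeled data. All data sizes, function names, and hyperparameters here are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Problem sizes (illustrative): input dimension d, N labeled samples,
# M unlabeled samples, K hidden neurons.
d, N, M, K = 10, 50, 2000, 16

W_star = rng.normal(size=(d, K))          # ground-truth teacher weights

def net(W, X):
    # One-hidden-layer ReLU network with average pooling over hidden units.
    return np.maximum(X @ W, 0.0).mean(axis=1)

def grad(W, X, y):
    # Gradient of 0.5 * mean squared error with respect to W.
    pre = X @ W
    err = (net(W, X) - y) / (len(X) * W.shape[1])
    return X.T @ (err[:, None] * (pre > 0))

X_lab = rng.normal(size=(N, d)); y_lab = net(W_star, X_lab)   # small labeled set
X_unl = rng.normal(size=(M, d))                               # large unlabeled pool
X_test = rng.normal(size=(500, d)); y_test = net(W_star, X_test)

W = 0.1 * rng.normal(size=(d, K))         # student initialization
for t in range(10):                       # outer self-training iterations
    y_pseudo = net(W, X_unl)              # pseudo-label the unlabeled pool
    for _ in range(200):                  # retrain on labeled + pseudo-labeled data
        W -= 0.5 * (grad(W, X_lab, y_lab) + grad(W, X_unl, y_pseudo))
    print(f"iteration {t}: test MSE = {np.mean((net(W, X_test) - y_test) ** 2):.4f}")

In this toy setup, enlarging the unlabeled pool M tends to stabilize the pseudo-labels and the retraining step, which is the mechanism the paper quantifies with its $1/\sqrt{M}$ improvement.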

Cite

Text

Zhang et al. "How Unlabeled Data Improve Generalization in Self-Training? A One-Hidden-Layer Theoretical Analysis." International Conference on Learning Representations, 2022.

Markdown

[Zhang et al. "How Unlabeled Data Improve Generalization in Self-Training? A One-Hidden-Layer Theoretical Analysis." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/zhang2022iclr-unlabeled/)

BibTeX

@inproceedings{zhang2022iclr-unlabeled,
  title     = {{How Unlabeled Data Improve Generalization in Self-Training? A One-Hidden-Layer Theoretical Analysis}},
  author    = {Zhang, Shuai and Wang, Meng and Liu, Sijia and Chen, Pin-Yu and Xiong, Jinjun},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/zhang2022iclr-unlabeled/}
}