Co-Training with Insufficient Views

Abstract

Co-training is a well-known semi-supervised learning paradigm that exploits unlabeled data with two views. Most previous theoretical analyses of co-training rest on the assumption that each view is sufficient to correctly predict the label. This assumption can hardly be met in real applications, however, due to feature corruption or feature noise. In this paper, we present a theoretical analysis of co-training when neither view is sufficient. We define the diversity between the two views with respect to the confidence of prediction, and we prove that if the two views have large diversity, co-training can improve learning performance by exploiting unlabeled data even with insufficient views. We also discuss the relationship between view insufficiency and diversity, and offer some implications for understanding the difference between co-training and co-regularization.
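
For intuition, the sketch below shows the standard co-training loop that the paper analyzes: two classifiers, one per feature view, each pseudo-labels the unlabeled examples it predicts most confidently and adds them to the shared labeled pool. This is a minimal illustration of the paradigm, not the paper's algorithm or proof construction; the function `co_train` and parameters such as `per_round` are hypothetical names, and logistic regression is an arbitrary choice of base learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1, X2, y, labeled_idx, rounds=10, per_round=5):
    """Minimal co-training sketch: X1, X2 are the two views of the
    same examples; y holds true labels on labeled_idx (the labeled
    set must contain at least two classes) and is overwritten with
    pseudo-labels elsewhere as training proceeds."""
    labeled = set(labeled_idx)
    unlabeled = set(range(len(y))) - labeled
    y_work = np.array(y).copy()
    clf1, clf2 = LogisticRegression(), LogisticRegression()
    for _ in range(rounds):
        if not unlabeled:
            break
        idx = sorted(labeled)
        clf1.fit(X1[idx], y_work[idx])
        clf2.fit(X2[idx], y_work[idx])
        pool = sorted(unlabeled)
        for clf, X in ((clf1, X1), (clf2, X2)):
            proba = clf.predict_proba(X[pool])
            conf = proba.max(axis=1)
            # Promote the most confidently predicted examples from the
            # unlabeled pool; the other view trains on them next round.
            for i in np.argsort(-conf)[:per_round]:
                j = pool[i]
                if j in unlabeled:
                    y_work[j] = clf.classes_[proba[i].argmax()]
                    unlabeled.discard(j)
                    labeled.add(j)
    return clf1, clf2
```

Under the classical sufficient-view assumption, either classifier could in principle learn the target on its own; the paper's contribution is showing that when neither view suffices, it is large diversity between the views (measured via prediction confidence) that makes this exchange of confident pseudo-labels beneficial.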

Cite

Text

Wang and Zhou. "Co-Training with Insufficient Views." Proceedings of the 5th Asian Conference on Machine Learning, 2013.

Markdown

[Wang and Zhou. "Co-Training with Insufficient Views." Proceedings of the 5th Asian Conference on Machine Learning, 2013.](https://mlanthology.org/acml/2013/wang2013acml-cotraining/)

BibTeX

@inproceedings{wang2013acml-cotraining,
  title     = {{Co-Training with Insufficient Views}},
  author    = {Wang, Wei and Zhou, Zhi-Hua},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  year      = {2013},
  pages     = {467--482},
  volume    = {29},
  url       = {https://mlanthology.org/acml/2013/wang2013acml-cotraining/}
}