Multi-View Active Learning in the Non-Realizable Case

Abstract

The sample complexity of active learning under the realizability assumption has been well-studied. The realizability assumption, however, rarely holds in practice. In this paper, we theoretically characterize the sample complexity of active learning in the non-realizable case under the multi-view setting. We prove that, with unbounded Tsybakov noise, the sample complexity of multi-view active learning can be $\widetilde{O}(\log \frac{1}{\epsilon})$, in contrast to the single-view setting, where polynomial improvement is the best possible achievement. We also prove that, in the general multi-view setting, the sample complexity of active learning with unbounded Tsybakov noise is $\widetilde{O}(\frac{1}{\epsilon})$, where the order of $1/\epsilon$ is independent of the parameter in the Tsybakov noise; this contrasts with previous polynomial bounds, in which the order of $1/\epsilon$ depends on that parameter.
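For context, the following is a rough sketch of the standard Tsybakov noise condition underlying these bounds; the constants $C > 0$ and $\kappa > 1$ are generic noise parameters, and the display follows common usage rather than the paper's exact formulation. Writing $\eta(x) = \Pr(y = 1 \mid x)$ for the regression function, the condition requires

$$\Pr_X\!\left(\left|\eta(X) - \tfrac{1}{2}\right| \le t\right) \;\le\; C\, t^{\frac{1}{\kappa - 1}} \qquad \text{for all } t > 0.$$

Under such a condition, prior single-view active learning analyses typically give label complexities of the form $\widetilde{O}(\epsilon^{\frac{2}{\kappa} - 2})$, where the exponent of $1/\epsilon$ depends on the noise parameter $\kappa$; the multi-view bounds stated above, $\widetilde{O}(\log \frac{1}{\epsilon})$ and $\widetilde{O}(\frac{1}{\epsilon})$, remove this dependence on $\kappa$ from the exponent.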

Cite

Text

Wang and Zhou. "Multi-View Active Learning in the Non-Realizable Case." Neural Information Processing Systems, 2010.

Markdown

[Wang and Zhou. "Multi-View Active Learning in the Non-Realizable Case." Neural Information Processing Systems, 2010.](https://mlanthology.org/neurips/2010/wang2010neurips-multiview/)

BibTeX

@inproceedings{wang2010neurips-multiview,
  title     = {{Multi-View Active Learning in the Non-Realizable Case}},
  author    = {Wang, Wei and Zhou, Zhi-Hua},
  booktitle = {Neural Information Processing Systems},
  year      = {2010},
  pages     = {2388--2396},
  url       = {https://mlanthology.org/neurips/2010/wang2010neurips-multiview/}
}