Near-Optimal Adaptive Pool-Based Active Learning with General Loss

Abstract

We consider adaptive pool-based active learning in a Bayesian setting. We first analyze two commonly used greedy active learning criteria: the maximum entropy criterion, which selects the example with the highest entropy, and the least confidence criterion, which selects the example whose most probable label has the least probability value. We show that unlike the non-adaptive case, the maximum entropy criterion is not able to achieve an approximation that is within a constant factor of the optimal policy entropy. For the least confidence criterion, we show that it is able to achieve a constant factor approximation to the optimal version space reduction in a worst-case setting, where the probability of labelings that have not been eliminated is considered as the version space. We consider a third greedy active learning criterion, the Gibbs error criterion, and generalize it to handle arbitrary loss functions between labelings. We analyze the properties of the generalization and its variants, and show that they perform well in practice.
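The three greedy criteria described above can be sketched as scoring rules over a pool of unlabeled examples. The following is a minimal illustration, not the paper's implementation: it assumes each pool example comes with a predicted label distribution (one row of `probs`), and the per-example Gibbs error shown here, 1 − Σ_y p(y)², is the standard expected error of a Gibbs classifier, used as a simple stand-in for the paper's posterior-based criterion. All function names are hypothetical.

```python
import numpy as np


def max_entropy_query(probs):
    """Pick the example with the highest label entropy.

    probs: (n_examples, n_labels) array; each row is a predicted
    label distribution for one unlabeled pool example.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return int(np.argmax(entropy))


def least_confidence_query(probs):
    """Pick the example whose most probable label has the least probability."""
    return int(np.argmin(np.max(probs, axis=1)))


def gibbs_error_query(probs):
    """Pick the example with the largest per-example Gibbs error, 1 - sum_y p(y)^2.

    Illustrative analogue only; the paper defines Gibbs error on the
    posterior over labelings, not per example.
    """
    return int(np.argmax(1.0 - np.sum(probs ** 2, axis=1)))


# With three labels the criteria can disagree: the first row has higher
# entropy, but the second row's most probable label is less probable.
probs = np.array([[0.45, 0.275, 0.275],
                  [0.40, 0.400, 0.200]])
print(max_entropy_query(probs))      # 0
print(least_confidence_query(probs)) # 1
```

Note that for binary labels all three scores are monotone in each other, so the criteria only differ with three or more labels, as in the example above.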

Cite

Text

Cuong et al. "Near-Optimal Adaptive Pool-Based Active Learning with General Loss." Conference on Uncertainty in Artificial Intelligence, 2014.

Markdown

[Cuong et al. "Near-Optimal Adaptive Pool-Based Active Learning with General Loss." Conference on Uncertainty in Artificial Intelligence, 2014.](https://mlanthology.org/uai/2014/cuong2014uai-near/)

BibTeX

@inproceedings{cuong2014uai-near,
  title     = {{Near-Optimal Adaptive Pool-Based Active Learning with General Loss}},
  author    = {Cuong, Nguyen Viet and Lee, Wee Sun and Ye, Nan},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2014},
  pages     = {122--131},
  url       = {https://mlanthology.org/uai/2014/cuong2014uai-near/}
}