Efficient Active Learning with Abstention

Abstract

The goal of active learning is to match the accuracy achievable by passive learning while using far fewer labels. Exponential savings in label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative. In this setting, the active learning algorithm may abstain from prediction and incur an error marginally smaller than that of random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves $\mathsf{polylog}(\frac{1}{\varepsilon})$ label complexity without any low-noise conditions. This guarantee reduces the label complexity by an exponential factor relative to passive learning and to active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to abstain only on hard examples (where the true label distribution is close to a fair coin), a novel property we term \emph{proper abstention} that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting and avoiding the undesirable ``noise-seeking'' behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve \emph{constant} label complexity and handle model misspecification.
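To make the abstention objective concrete, here is a minimal sketch of a Chow-style error with abstention, under the assumption that abstaining incurs a fixed cost of $\frac{1}{2} - \gamma$ for some margin parameter $\gamma > 0$; the notation $\eta(x)$ and the exact form of the objective are illustrative assumptions rather than definitions quoted from the paper. A classifier with abstention maps $h : \mathcal{X} \to \{-1, +1, \bot\}$, where $\bot$ denotes abstention, and its error under this sketch is

\[
\mathrm{err}_{\gamma}(h) \;=\; \Pr\bigl[h(X) \neq Y,\ h(X) \neq \bot\bigr] \;+\; \Bigl(\tfrac{1}{2} - \gamma\Bigr)\Pr\bigl[h(X) = \bot\bigr].
\]

In these terms, \emph{proper abstention} corresponds to abstaining only where the regression function $\eta(x) = \Pr[Y = 1 \mid X = x]$ is close to $\frac{1}{2}$, i.e., only on examples whose labels are nearly fair coin flips.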

Cite

Text

Zhu and Nowak. "Efficient Active Learning with Abstention." Neural Information Processing Systems, 2022.

Markdown

[Zhu and Nowak. "Efficient Active Learning with Abstention." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhu2022neurips-efficient/)

BibTeX

@inproceedings{zhu2022neurips-efficient,
  title     = {{Efficient Active Learning with Abstention}},
  author    = {Zhu, Yinglun and Nowak, Robert},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhu2022neurips-efficient/}
}