Querying Easily Flip-Flopped Samples for Deep Active Learning
Abstract
Active learning, a paradigm within machine learning, strategically selects and queries unlabeled data to enhance model performance. A crucial selection strategy leverages the model's predictive uncertainty, which reflects the informativeness of a data point. While a sample's distance to the decision boundary is an intuitive measure of predictive uncertainty, its computation becomes intractable for the complex decision boundaries formed in multiclass classification tasks. This paper introduces the *least disagree metric* (LDM), the smallest probability of predicted-label disagreement, and proposes an asymptotically consistent estimator for it under mild assumptions. The estimator is computationally efficient and straightforward to implement for deep learning models via parameter perturbation. The LDM-based active learning algorithm queries the unlabeled data with the smallest LDM and, as the experiments demonstrate, achieves state-of-the-art *overall* performance across various datasets and deep architectures.
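As a rough illustration of the idea in the abstract, the sketch below estimates an LDM-style score for a toy multiclass linear classifier: it samples Gaussian parameter perturbations at several scales and, among the perturbed models that flip the sample's predicted label, keeps the smallest empirical disagreement rate with the original model on an unlabeled pool. The linear model, function names, and perturbation schedule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def predict(W, X):
    # Multiclass linear classifier: label = argmax of class scores.
    return np.argmax(X @ W.T, axis=-1)

def estimate_ldm(W, x, pool, sigmas, n_samples=50, rng=None):
    """Sketch of an LDM-style estimate (one plausible reading of the paper).

    LDM(x) is taken as the smallest disagreement mass rho(g, f) = P(g(X) != f(X))
    over hypotheses g that flip x's predicted label. Hypotheses are drawn by
    Gaussian parameter perturbation at increasing scales `sigmas`.
    """
    rng = np.random.default_rng(rng)
    f_x = predict(W, x[None])[0]      # current model's prediction on x
    f_pool = predict(W, pool)         # current model's predictions on the pool
    ldm = 1.0                         # disagreement mass is at most 1
    for sigma in sigmas:
        for _ in range(n_samples):
            G = W + sigma * rng.standard_normal(W.shape)  # perturbed hypothesis
            if predict(G, x[None])[0] != f_x:             # g flips x's label
                rho = np.mean(predict(G, pool) != f_pool) # empirical disagreement
                ldm = min(ldm, rho)
    return ldm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = np.eye(2)                          # toy 2-class, 2-feature model
    pool = rng.standard_normal((500, 2))   # stand-in unlabeled pool
    sigmas = [0.05, 0.1, 0.2, 0.5, 1.0]
    near = estimate_ldm(W, np.array([1.0, 1.0]), pool, sigmas, rng=1)
    far = estimate_ldm(W, np.array([4.0, -4.0]), pool, sigmas, rng=2)
    # A point near the decision boundary should receive the smaller LDM,
    # so the query rule (pick the smallest LDM) selects it first.
```

In this toy setting, a sample close to the boundary is flipped by a perturbed model that otherwise agrees with the original almost everywhere, yielding a small LDM, while a confidently classified sample is flipped only by heavily perturbed models that disagree broadly.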
Cite
Text
Cho et al. "Querying Easily Flip-Flopped Samples for Deep Active Learning." International Conference on Learning Representations, 2024.
Markdown
[Cho et al. "Querying Easily Flip-Flopped Samples for Deep Active Learning." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/cho2024iclr-querying/)
BibTeX
@inproceedings{cho2024iclr-querying,
title = {{Querying Easily Flip-Flopped Samples for Deep Active Learning}},
author = {Cho, Seong Jin and Kim, Gwangsu and Lee, Junghyun and Shin, Jinwoo and Yoo, Chang D.},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/cho2024iclr-querying/}
}