Debiased Active Learning with Variational Gradient Rectifier
Abstract
The strategy of selecting the ``most informative'' hard samples in active learning has proven effective for alleviating the challenges of few-shot learning and costly data annotation in deep learning. However, this very preference for hard samples engenders bias, thereby impeding the full potential of active learning. A growing body of work seeks to mitigate this stubborn problem, yet most approaches neglect both the quantification of the bias itself and the direct rectification of dynamically evolving biases. Revisiting the bias issue, this paper presents an active learning approach based on the Variational Gradient Rectifier (VaGeRy). First, we employ variational methods to quantify bias at the level of latent state representations. Then, harnessing historical training dynamics, we introduce Uncertainty Consistency Regularization and Fluctuation Restriction, which asynchronously iterate to rectify gradient backpropagation. Extensive experiments demonstrate that our proposed methodology effectively counteracts bias phenomena in a majority of active learning scenarios.
Cite
Text
Chen et al. "Debiased Active Learning with Variational Gradient Rectifier." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I15.33744
Markdown
[Chen et al. "Debiased Active Learning with Variational Gradient Rectifier." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/chen2025aaai-debiased/) doi:10.1609/AAAI.V39I15.33744
BibTeX
@inproceedings{chen2025aaai-debiased,
title = {{Debiased Active Learning with Variational Gradient Rectifier}},
author = {Chen, Weiguo and Wang, Changjian and Li, Shijun and Xu, Kele and Bai, Yanru and Chen, Wei and Li, Shanshan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
  pages = {15884--15894},
doi = {10.1609/AAAI.V39I15.33744},
url = {https://mlanthology.org/aaai/2025/chen2025aaai-debiased/}
}