Agreement-Discrepancy-Selection: Active Learning with Progressive Distribution Alignment
Abstract
In active learning, failing to align the distribution of unlabeled samples with that of labeled samples hinders the model trained on labeled samples from selecting informative unlabeled samples. In this paper, we propose an agreement-discrepancy-selection (ADS) approach that unifies distribution alignment with sample selection by introducing adversarial classifiers to the convolutional neural network (CNN). Minimizing the classifiers' prediction discrepancy (maximizing prediction agreement) drives the CNN features to reduce the distribution bias between labeled and unlabeled samples, while maximizing the classifiers' discrepancy highlights informative samples. Iteratively optimizing the agreement and discrepancy losses, calibrated with an entropy function, aligns sample distributions in a progressive fashion for effective active learning. Experiments on image classification and object detection tasks demonstrate that ADS is task-agnostic and significantly outperforms previous methods when the labeled sets are small.
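As a rough illustration of the mechanism the abstract describes, the sketch below alternates the two adversarial objectives on a pair of classifier heads over a shared feature extractor: agreement (minimize discrepancy, updating the features) for alignment, discrepancy (maximize it, updating only the heads) for highlighting informative samples, then selection by disagreement. The backbone, the L1 form of the discrepancy, the shapes, and all identifiers are illustrative assumptions, not the paper's exact formulation; the entropy calibration mentioned in the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a CNN backbone, plus two adversarial classifier heads.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
clf1 = nn.Linear(256, 10)
clf2 = nn.Linear(256, 10)

x_unlabeled = torch.randn(8, 3, 32, 32)   # a batch of unlabeled images
feats = feature_extractor(x_unlabeled)

# Agreement step: minimize the heads' prediction discrepancy w.r.t. the
# feature extractor (gradients flow through `feats`), pulling unlabeled
# features toward the labeled distribution.
p1 = F.softmax(clf1(feats), dim=1)
p2 = F.softmax(clf2(feats), dim=1)
agreement_loss = (p1 - p2).abs().mean()

# Discrepancy step: maximize the same discrepancy w.r.t. the classifiers
# only (features detached), so the heads learn to disagree on samples
# lying off the labeled distribution.
feats_fixed = feats.detach()
q1 = F.softmax(clf1(feats_fixed), dim=1)
q2 = F.softmax(clf2(feats_fixed), dim=1)
discrepancy_loss = -(q1 - q2).abs().mean()

# Selection step: query the unlabeled samples the heads disagree on most.
per_sample_disagreement = (q1 - q2).abs().sum(dim=1)
query_indices = per_sample_disagreement.topk(k=4).indices
```

In practice the two losses would be applied with separate optimizers over several alternating steps per active-learning round, alongside a standard supervised loss on the labeled set.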
Cite
Text
Fu et al. "Agreement-Discrepancy-Selection: Active Learning with Progressive Distribution Alignment." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I8.16915

Markdown
[Fu et al. "Agreement-Discrepancy-Selection: Active Learning with Progressive Distribution Alignment." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/fu2021aaai-agreement/) doi:10.1609/AAAI.V35I8.16915

BibTeX
@inproceedings{fu2021aaai-agreement,
title = {{Agreement-Discrepancy-Selection: Active Learning with Progressive Distribution Alignment}},
author = {Fu, Mengying and Yuan, Tianning and Wan, Fang and Xu, Songcen and Ye, Qixiang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {7466--7473},
doi = {10.1609/AAAI.V35I8.16915},
url = {https://mlanthology.org/aaai/2021/fu2021aaai-agreement/}
}