RALF: A Reinforced Active Learning Formulation for Object Class Recognition
Abstract
Active learning aims to reduce the number of labels required for classification. The main difficulty is to find a good trade-off between exploration and exploitation of the labeling process, which depends, among other things, on the classification task, the distribution of the data, and the employed classification scheme. In this paper, we analyze different sampling criteria, including a novel density-based criterion, and demonstrate the importance of combining exploration and exploitation sampling criteria. We also show that a time-varying combination of sampling criteria often improves performance. Finally, by formulating the criteria selection as a Markov decision process, we propose a novel feedback-driven framework based on reinforcement learning. Our method does not require prior information on the dataset or the sampling criteria but rather adapts the sampling strategy during the learning process by experience. We evaluate our approach on three challenging object recognition datasets and show superior performance to previous active learning methods.
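The abstract's feedback-driven idea of choosing between exploration and exploitation criteria based on observed rewards can be illustrated with a toy sketch. This is not the authors' RALF implementation: the epsilon-greedy bandit, the two stand-in criteria, and the simulated rewards below are hypothetical stand-ins for the paper's MDP/reinforcement-learning formulation.

```python
import random

def uncertainty_score(margin):
    """Exploitation stand-in: prefer samples the classifier is least certain about."""
    return 1.0 - abs(margin)

def density_score(n_neighbors, max_neighbors):
    """Exploration stand-in: prefer samples from dense, unexplored regions."""
    return n_neighbors / max_neighbors

class CriterionSelector:
    """Epsilon-greedy selection over sampling criteria, updated by reward
    feedback (e.g. the change in validation accuracy after each new label)."""

    def __init__(self, criteria, epsilon=0.1, lr=0.2, seed=0):
        self.criteria = list(criteria)
        self.q = {c: 0.0 for c in self.criteria}  # running value estimates
        self.epsilon = epsilon
        self.lr = lr
        self.rng = random.Random(seed)

    def select(self):
        # Explore a random criterion with probability epsilon,
        # otherwise exploit the criterion with the highest estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.criteria)
        return max(self.criteria, key=lambda c: self.q[c])

    def update(self, criterion, reward):
        # Incremental value estimate: q <- q + lr * (reward - q)
        self.q[criterion] += self.lr * (reward - self.q[criterion])

if __name__ == "__main__":
    sel = CriterionSelector(["uncertainty", "density"], seed=42)
    # Simulated feedback: exploration (density) pays off early,
    # exploitation (uncertainty) pays off later.
    for step in range(50):
        c = sel.select()
        if step < 25:
            reward = 1.0 if c == "density" else 0.2
        else:
            reward = 1.0 if c == "uncertainty" else 0.2
        sel.update(c, reward)
    print(sel.q)
```

The key property this sketch shares with the paper's framing is that the sampling strategy is not fixed in advance: the selector shifts toward whichever criterion currently yields reward, allowing a time-varying combination to emerge from feedback alone.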
Cite
Text
Ebert et al. "RALF: A Reinforced Active Learning Formulation for Object Class Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6248108
Markdown
[Ebert et al. "RALF: A Reinforced Active Learning Formulation for Object Class Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/ebert2012cvpr-ralf/) doi:10.1109/CVPR.2012.6248108
BibTeX
@inproceedings{ebert2012cvpr-ralf,
title = {{RALF: A Reinforced Active Learning Formulation for Object Class Recognition}},
author = {Ebert, Sandra and Fritz, Mario and Schiele, Bernt},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2012},
pages = {3626-3633},
doi = {10.1109/CVPR.2012.6248108},
url = {https://mlanthology.org/cvpr/2012/ebert2012cvpr-ralf/}
}