Bootstrapping SVM Active Learning by Incorporating Unlabelled Images for Image Retrieval
Abstract
The performance of image retrieval with SVM active learning is known to be poor when the learning process starts with only a few labelled images. In this paper, this problem is addressed by incorporating unlabelled images into the bootstrapping of the learning process: the initial SVM classifier is trained on the few labelled images together with unlabelled images randomly selected from the image database. Both theoretical analysis and experimental results show that incorporating unlabelled images in the bootstrapping improves the efficiency of SVM active learning, thereby improving overall retrieval performance.
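The abstract's bootstrapping idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes (a common interpretation in relevance-feedback retrieval, not confirmed by the abstract) that the randomly sampled unlabelled images are used as pseudo-negative examples for the initial classifier, and it uses synthetic feature vectors in place of real image features.

```python
# Hedged sketch: bootstrapping SVM active learning with unlabelled images.
# Assumption (not stated in the abstract): randomly sampled unlabelled
# images serve as pseudo-negatives when training the initial SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for an image database: 500 images, 16-D features.
database = rng.normal(size=(500, 16))
relevant_idx = np.array([0, 1, 2])        # the few labelled (relevant) images

# Bootstrap: pair the labelled positives with randomly sampled
# unlabelled images treated as pseudo-negatives.
pool = np.setdiff1d(np.arange(len(database)), relevant_idx)
pseudo_neg = rng.choice(pool, size=20, replace=False)
X0 = np.vstack([database[relevant_idx], database[pseudo_neg]])
y0 = np.concatenate([np.ones(len(relevant_idx)), -np.ones(len(pseudo_neg))])

clf = SVC(kernel="rbf").fit(X0, y0)

# Active learning step: query the unlabelled images closest to the
# decision boundary (smallest absolute decision value) for user feedback.
margins = np.abs(clf.decision_function(database[pool]))
query = pool[np.argsort(margins)[:5]]     # next 5 images shown to the user
print(query.tolist())
```

The queried images would then be labelled by the user and added to the training set, and the retrain-and-query loop repeats; the point of the bootstrapping step is that the very first classifier already has (pseudo-)negative examples to separate the relevant images from.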
Cite
Text
Wang et al. "Bootstrapping SVM Active Learning by Incorporating Unlabelled Images for Image Retrieval." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2003. doi:10.1109/CVPR.2003.1211412
Markdown
[Wang et al. "Bootstrapping SVM Active Learning by Incorporating Unlabelled Images for Image Retrieval." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2003.](https://mlanthology.org/cvpr/2003/wang2003cvpr-bootstrapping/) doi:10.1109/CVPR.2003.1211412
BibTeX
@inproceedings{wang2003cvpr-bootstrapping,
title = {{Bootstrapping SVM Active Learning by Incorporating Unlabelled Images for Image Retrieval}},
author = {Wang, Lei and Chan, Kap Luk and Zhang, Zhihua},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2003},
pages = {629--634},
doi = {10.1109/CVPR.2003.1211412},
url = {https://mlanthology.org/cvpr/2003/wang2003cvpr-bootstrapping/}
}