FlowBoost - Appearance Learning from Sparsely Annotated Video
Abstract
We propose a new learning method that exploits temporal consistency to learn a complex appearance model from a sparsely labeled training video. Our approach consists of iteratively alternating between improving an appearance-based model built with a boosting procedure and reconstructing the trajectories corresponding to the motion of multiple targets. We demonstrate the effectiveness of our procedure on pedestrian detection in videos and cell detection in microscopy image sequences. In both cases, our method reduces the labeling requirement by one to two orders of magnitude. We show that in some instances, our method, trained with sparse labels on a video sequence, outperforms a standard learning procedure trained on the fully labeled sequence.
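The alternating scheme the abstract describes can be sketched in miniature. The following is a hypothetical illustration, not the authors' implementation: a decision stump on a one-dimensional feature stands in for the boosted appearance model, and sliding-window score averaging stands in for the trajectory reconstruction; all names (`flowboost_sketch`, `train_stump`, `smooth`) are invented for this sketch.

```python
def smooth(scores, window=3):
    """Temporal-consistency proxy: average each score over a sliding window."""
    out = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window // 2), min(len(scores), i + window // 2 + 1)
        out.append(sum(scores[lo:hi]) / (hi - lo))
    return out

def train_stump(xs, ys):
    """Toy appearance model: pick the threshold t minimizing the error of
    the rule 'x >= t => positive' on the labeled pairs (stands in for the
    boosting step of the real method)."""
    best_err, best_t = None, None
    for t in sorted(set(xs)):
        err = sum((x >= t) != (y == 1) for x, y in zip(xs, ys))
        if best_err is None or err < best_err:
            best_err, best_t = err, t
    return best_t

def flowboost_sketch(features, sparse_labels, rounds=3):
    """features[i]: scalar appearance feature of frame i.
    sparse_labels[i]: 1, 0, or None (unlabeled frame)."""
    labels = list(sparse_labels)
    thr = None
    for _ in range(rounds):
        # Step 1: fit the appearance model on the currently labeled frames.
        xs = [f for f, y in zip(features, labels) if y is not None]
        ys = [y for y in labels if y is not None]
        thr = train_stump(xs, ys)
        # Step 2: score every frame, smooth the scores temporally, and
        # re-label the originally unlabeled frames (given labels stay fixed).
        scores = smooth([1.0 if f >= thr else 0.0 for f in features])
        labels = [y0 if y0 is not None else (1 if s >= 0.5 else 0)
                  for y0, s in zip(sparse_labels, scores)]
    return thr, labels
```

In the real method both steps are far richer (a boosted classifier over image features, and trajectory reconstruction over multiple targets), but the loop structure — appearance model and temporally consistent labels refining each other from a handful of annotations — is the same.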
Cite
Text
Ali et al. "FlowBoost - Appearance Learning from Sparsely Annotated Video." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995403
Markdown
[Ali et al. "FlowBoost - Appearance Learning from Sparsely Annotated Video." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/ali2011cvpr-flowboost/) doi:10.1109/CVPR.2011.5995403
BibTeX
@inproceedings{ali2011cvpr-flowboost,
title = {{FlowBoost - Appearance Learning from Sparsely Annotated Video}},
author = {Ali, Karim and Hasler, David and Fleuret, François},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {1433--1440},
doi = {10.1109/CVPR.2011.5995403},
url = {https://mlanthology.org/cvpr/2011/ali2011cvpr-flowboost/}
}