Object Detection from Video Tubelets with Convolutional Neural Networks
Abstract
Deep Convolutional Neural Networks (CNNs) have shown impressive performance on various vision tasks such as image classification, object detection, and semantic segmentation. For object detection, particularly in still images, performance has increased significantly in the past year thanks to powerful deep networks (e.g. GoogLeNet) and detection frameworks (e.g. Regions with CNN features (R-CNN)). The recently introduced ImageNet task on object detection from video (VID) brings object detection into the video domain, where objects' locations in each frame must be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and generic object tracking. Their relations and contributions to the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results, and it demonstrates its effectiveness for the task.
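The idea of regularizing per-frame detections with temporal information can be illustrated with a toy sketch. The snippet below smooths detection confidences along a tubelet (a sequence of bounding boxes tracked across frames) using a fixed 1-D moving-average convolution; this is only an illustration of the concept, since the paper's temporal convolutional network is learned rather than hand-designed, and the function name `smooth_tubelet_scores` is hypothetical.

```python
import numpy as np

def smooth_tubelet_scores(scores, kernel_size=3):
    """Regularize per-frame detection confidences along a tubelet
    with a 1-D moving-average convolution (illustrative stand-in
    for a learned temporal convolution network)."""
    kernel = np.ones(kernel_size) / kernel_size
    pad = kernel_size // 2
    # Replicate boundary frames so the output keeps the tubelet length.
    padded = np.pad(scores, pad, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# A tubelet whose detector confidence drops spuriously at one frame:
raw = np.array([0.90, 0.88, 0.20, 0.91, 0.89, 0.90, 0.87])
smoothed = smooth_tubelet_scores(raw)
```

Temporal smoothing like this raises the anomalously low score at the third frame toward its neighbors, which is the intuition behind using temporal consistency to suppress spurious per-frame detection failures.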
Cite
Text

Kang et al. "Object Detection from Video Tubelets with Convolutional Neural Networks." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.95

Markdown

[Kang et al. "Object Detection from Video Tubelets with Convolutional Neural Networks." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/kang2016cvpr-object/) doi:10.1109/CVPR.2016.95

BibTeX
@inproceedings{kang2016cvpr-object,
title = {{Object Detection from Video Tubelets with Convolutional Neural Networks}},
author = {Kang, Kai and Ouyang, Wanli and Li, Hongsheng and Wang, Xiaogang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.95},
url = {https://mlanthology.org/cvpr/2016/kang2016cvpr-object/}
}