Sketch Tokens: A Learned Mid-Level Representation for Contour and Object Detection
Abstract
We propose a novel approach to both learning and detecting local contour-based representations for mid-level features. Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand-drawn contours in images. Patches of human-generated contours are clustered to form sketch token classes, and a random forest classifier is used for efficient detection in novel images. We demonstrate our approach on both top-down and bottom-up tasks. We show state-of-the-art results on the top-down task of contour detection while being over 200x faster than competing methods. We also achieve large improvements in detection accuracy for the bottom-up tasks of pedestrian and object detection as measured on INRIA [5] and PASCAL [10], respectively. These gains are due to the complementary information provided by sketch tokens to low-level features such as gradient histograms.
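The pipeline the abstract describes can be sketched in two steps: cluster patches of human-drawn contours into sketch token classes, then train a random forest to predict the token class of a patch in a novel image. The minimal sketch below uses synthetic random patches, plain k-means, and raw pixel features in place of the paper's clustering distance and channel features; all names and sizes here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the two-stage sketch-tokens idea, NOT the paper's code:
# (1) cluster contour patches into "sketch token" classes,
# (2) train a random forest mapping image-patch features to token labels.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for 35x35 binary human-contour patches (flattened).
n_patches, patch_dim, n_tokens = 500, 35 * 35, 16
contour_patches = (rng.random((n_patches, patch_dim)) > 0.9).astype(float)

# Step 1: cluster contour patches; each cluster is one sketch token class.
kmeans = MiniBatchKMeans(n_clusters=n_tokens, n_init=3, random_state=0)
token_labels = kmeans.fit_predict(contour_patches)

# Step 2: train a random forest on (noisy) image-patch features to predict
# the sketch token class of each patch.
image_features = contour_patches + 0.1 * rng.standard_normal(
    (n_patches, patch_dim))
forest = RandomForestClassifier(n_estimators=25, random_state=0)
forest.fit(image_features, token_labels)

# At test time, per-patch token predictions (or class probabilities) give
# a mid-level contour representation for novel images.
pred = forest.predict(image_features[:10])
print(pred.shape)
```

In the paper the forest is evaluated densely over the image and the per-class probabilities double as a contour-strength map; the toy example above only shows the cluster-then-classify structure.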
Cite

Text:
Lim et al. "Sketch Tokens: A Learned Mid-Level Representation for Contour and Object Detection." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.406

Markdown:
[Lim et al. "Sketch Tokens: A Learned Mid-Level Representation for Contour and Object Detection." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/lim2013cvpr-sketch/) doi:10.1109/CVPR.2013.406

BibTeX:
@inproceedings{lim2013cvpr-sketch,
title = {{Sketch Tokens: A Learned Mid-Level Representation for Contour and Object Detection}},
author = {Lim, Joseph J. and Zitnick, C. L. and Doll{\'a}r, Piotr},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2013},
doi = {10.1109/CVPR.2013.406},
url = {https://mlanthology.org/cvpr/2013/lim2013cvpr-sketch/}
}