Context-Guided Super-Class Inference for Zero-Shot Detection
Abstract
Zero-shot object detection (ZSD) is a newly proposed research problem, which aims to simultaneously locate and recognize objects of previously unseen classes. Existing algorithms usually formulate it as a simple combination of a typical detection framework and a zero-shot classifier, by learning a visual-semantic mapping from the visual features of bounding box proposals to semantic embeddings of class labels. In this paper, we propose a novel ZSD approach that leverages the context information surrounding objects in the image, following the principle that objects tend to be found in certain contexts. It also incorporates the semantic relations between seen and unseen classes to help recognize located instances. Comprehensive experiments on the PASCAL VOC and MS COCO datasets show that context and class hierarchy truly improve detection performance.
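The visual-semantic mapping mentioned in the abstract can be sketched as follows. This is an illustrative toy example, not the paper's method: the projection matrix, dimensions, class names, and embeddings are all hypothetical placeholders (random stand-ins for a learned mapping and real word vectors), and classification is done by cosine similarity in the semantic space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 2048-d visual feature from a proposal's RoI
# head, projected into a 300-d word-embedding space.
d_vis, d_sem = 2048, 300

# W stands in for the learned visual-to-semantic mapping (random here).
W = rng.standard_normal((d_sem, d_vis)) * 0.01

# Semantic embeddings of class labels (seen and unseen); in practice these
# would be pretrained word vectors, here random unit vectors.
class_names = ["cat", "dog", "zebra"]  # "zebra" plays an unseen class
class_embs = rng.standard_normal((len(class_names), d_sem))
class_embs /= np.linalg.norm(class_embs, axis=1, keepdims=True)

def classify(visual_feat):
    """Project a proposal's visual feature into semantic space and pick
    the class whose embedding is most cosine-similar."""
    z = W @ visual_feat
    z = z / np.linalg.norm(z)
    scores = class_embs @ z          # cosine similarities
    return class_names[int(np.argmax(scores))], scores

feat = rng.standard_normal(d_vis)    # placeholder proposal feature
pred, scores = classify(feat)
print(pred, np.round(scores, 3))
```

Because unseen classes have label embeddings even though they have no training images, the same nearest-embedding rule can score them at test time; the paper's contribution is to refine this inference with surrounding context and a seen/unseen class hierarchy.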
Cite
Text
Li et al. "Context-Guided Super-Class Inference for Zero-Shot Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00480
Markdown
[Li et al. "Context-Guided Super-Class Inference for Zero-Shot Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/li2020cvprw-contextguided/) doi:10.1109/CVPRW50498.2020.00480
BibTeX
@inproceedings{li2020cvprw-contextguided,
title = {{Context-Guided Super-Class Inference for Zero-Shot Detection}},
author = {Li, Yanan and Shao, Yilan and Wang, Donghui},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
pages = {4064--4068},
doi = {10.1109/CVPRW50498.2020.00480},
url = {https://mlanthology.org/cvprw/2020/li2020cvprw-contextguided/}
}