Learning Placeholders for Open-Set Recognition

Abstract

Traditional classifiers are deployed under the closed-set setting, where training and test classes belong to the same set. However, real-world applications often encounter inputs from unknown categories, which the model will misclassify as known ones. Under such circumstances, open-set recognition is proposed to maintain classification performance on known classes while rejecting unknowns. Closed-set models make overconfident predictions on familiar known-class instances, so calibration and thresholding across categories become essential issues when extending to an open-set environment. To this end, we propose to learn PlaceholdeRs for Open-SEt Recognition (Proser), which prepares for unknown classes by allocating placeholders for both data and classifier. In detail, learning data placeholders anticipates open-set class data, thus transforming closed-set training into open-set training. Besides, to learn the invariant information between target and non-target classes, we reserve classifier placeholders as the class-specific boundary between known and unknown. The proposed Proser efficiently generates novel-class instances via manifold mixup and adaptively sets the value of the reserved open-set classifier during training. Experiments on various datasets validate the effectiveness of the proposed method.
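
To make the two placeholder ideas concrete, below is a minimal PyTorch-style sketch based only on the abstract: a classifier placeholder is an extra output logit reserved for unknowns, and data placeholders are produced by mixing intermediate embeddings (manifold mixup) and labeling the mixtures as that reserved class. The feature extractor `embed`, the head, the constant `C_KNOWN`, and the `mixup_alpha` value are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C_KNOWN = 10                          # number of known (closed-set) classes
embed = nn.Sequential(                # toy feature extractor up to an intermediate layer
    nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU()
)
head = nn.Linear(128, C_KNOWN + 1)    # classifier placeholder: one extra logit for "unknown"

def proser_like_loss(x, y, mixup_alpha=0.75):
    """Closed-set loss plus a data-placeholder loss from manifold mixup (illustrative)."""
    z = embed(x)                                    # intermediate embeddings
    loss_known = F.cross_entropy(head(z), y)        # usual known-class training

    # Data placeholders: mix embeddings of two instances and treat the mixture
    # as the reserved open-set class (index C_KNOWN). For simplicity we mix a
    # random permutation of the batch, without the paper's full pairing scheme.
    perm = torch.randperm(x.size(0))
    lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample()
    mixed = lam * z + (1 - lam) * z[perm]
    novel_target = torch.full((x.size(0),), C_KNOWN, dtype=torch.long)
    loss_novel = F.cross_entropy(head(mixed), novel_target)

    return loss_known + loss_novel

# Usage on a dummy batch of 32x32 RGB images:
loss = proser_like_loss(torch.randn(8, 3, 32, 32), torch.randint(0, C_KNOWN, (8,)))
```

At test time, an input would be rejected as unknown when the reserved logit dominates, which is how the extra output acts as a class-specific boundary between known and unknown.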

Cite

Text

Zhou et al. "Learning Placeholders for Open-Set Recognition." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00438

Markdown

[Zhou et al. "Learning Placeholders for Open-Set Recognition." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/zhou2021cvpr-learning/) doi:10.1109/CVPR46437.2021.00438

BibTeX

@inproceedings{zhou2021cvpr-learning,
  title     = {{Learning Placeholders for Open-Set Recognition}},
  author    = {Zhou, Da-Wei and Ye, Han-Jia and Zhan, De-Chuan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {4401--4410},
  doi       = {10.1109/CVPR46437.2021.00438},
  url       = {https://mlanthology.org/cvpr/2021/zhou2021cvpr-learning/}
}