Learning Bounds for Open-Set Learning
Abstract
Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from classes that are unseen during training. Although researchers have designed many methods from the algorithmic perspective, few methods provide generalization guarantees on their ability to achieve consistent performance on different training samples drawn from the same distribution. Motivated by transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by proving its generalization error: given training samples of size $n$, the estimation error approaches order $O_p(1/\sqrt{n})$. This is the first study to provide a generalization bound for OSL, which we obtain by theoretically investigating the risk of the target classifier on unknown classes. Based on our theory, a novel algorithm, called auxiliary open-set risk (AOSR), is proposed to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/AnjinLiu/Openset_Learning_AOSR.
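For intuition, the claimed rate can be written in the standard estimation-error form below. This is a hedged sketch in generic risk notation: the risk $R$, the learned classifier $\hat{h}_n$, and the hypothesis class $\mathcal{H}$ are our own symbols, not necessarily the paper's.

% Generic estimation-error reading of the abstract's O_p(1/sqrt(n)) claim.
% The symbols R, \hat{h}_n, and \mathcal{H} are illustrative assumptions.
\[
R(\hat{h}_n) \;-\; \inf_{h \in \mathcal{H}} R(h) \;=\; O_p\!\left(\frac{1}{\sqrt{n}}\right)
\]

The abstract names the AOSR algorithm but gives no implementation details, so the following Python sketch only illustrates the general shape of an auxiliary open-set risk objective: a standard empirical risk on known classes plus a risk term on auxiliary samples standing in for the unknown classes. The function, its arguments, and the choice of cross-entropy are assumptions; the authors' actual method is in the linked repository.

import torch
import torch.nn.functional as F

def aosr_style_loss(model, x_known, y_known, x_aux, unknown_class, lam=1.0):
    # Empirical risk on the known classes (the usual closed-set term).
    known_risk = F.cross_entropy(model(x_known), y_known)
    # Auxiliary risk: surrogate open-set samples x_aux are pushed toward a
    # reserved 'unknown' label, standing in for classes unseen in training.
    aux_targets = torch.full((x_aux.size(0),), unknown_class,
                             dtype=torch.long, device=x_aux.device)
    aux_risk = F.cross_entropy(model(x_aux), aux_targets)
    # Trade off the two risks; lam is a hypothetical weighting parameter.
    return known_risk + lam * aux_risk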
Cite
Text
Fang et al. "Learning Bounds for Open-Set Learning." International Conference on Machine Learning, 2021.

Markdown
[Fang et al. "Learning Bounds for Open-Set Learning." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/fang2021icml-learning/)

BibTeX
@inproceedings{fang2021icml-learning,
title = {{Learning Bounds for Open-Set Learning}},
author = {Fang, Zhen and Lu, Jie and Liu, Anjin and Liu, Feng and Zhang, Guangquan},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {3122--3132},
volume = {139},
url = {https://mlanthology.org/icml/2021/fang2021icml-learning/}
}