Multi-Class Learning Using Unlabeled Samples: Theory and Algorithm
Abstract
In this paper, we investigate the generalization performance of multi-class classification and obtain a sharper error bound by using the notion of local Rademacher complexity together with additional unlabeled samples, substantially improving the state-of-the-art bounds for existing multi-class learning methods. This statistical analysis motivates us to devise an efficient multi-class learning framework that combines local Rademacher complexity with Laplacian regularization. Consistent with the theoretical analysis, experimental results demonstrate that the proposed approach achieves better performance.
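The Laplacian regularization mentioned in the abstract can be illustrated with a minimal sketch: build a k-nearest-neighbor similarity graph over labeled and unlabeled samples, form its graph Laplacian L = D − W, and penalize prediction matrices F whose columns vary across connected samples via tr(Fᵀ L F). This is a generic illustration of graph Laplacian regularization, not the authors' exact formulation; the function names and the Gaussian edge weights are assumptions for this sketch.

```python
import numpy as np

def graph_laplacian(X, k=3):
    """Unnormalized graph Laplacian L = D - W from a k-NN similarity graph.

    Edges carry Gaussian weights exp(-||x_i - x_j||^2); the weight matrix
    is symmetrized so that L is symmetric positive semidefinite.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances via broadcasting.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # k nearest neighbors, skipping self
        W[i, nbrs] = np.exp(-d2[i, nbrs])
    W = np.maximum(W, W.T)                 # symmetrize the k-NN graph
    return np.diag(W.sum(axis=1)) - W

def laplacian_penalty(F, L):
    """Smoothness penalty tr(F^T L F).

    F is an (n_samples, n_classes) prediction matrix; the penalty is small
    when samples connected in the graph receive similar class scores, which
    is how unlabeled samples influence the learned multi-class classifier.
    """
    return np.trace(F.T @ L @ F)
```

In a semi-supervised objective this penalty would be added to the empirical multi-class loss with a trade-off weight, so that unlabeled points shape the decision function through the graph even though they contribute no label term.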
Cite
Text
Li et al. "Multi-Class Learning Using Unlabeled Samples: Theory and Algorithm." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/399
Markdown
[Li et al. "Multi-Class Learning Using Unlabeled Samples: Theory and Algorithm." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/li2019ijcai-multi/) doi:10.24963/IJCAI.2019/399
BibTeX
@inproceedings{li2019ijcai-multi,
title = {{Multi-Class Learning Using Unlabeled Samples: Theory and Algorithm}},
author = {Li, Jian and Liu, Yong and Yin, Rong and Wang, Weiping},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
  pages = {2880--2886},
doi = {10.24963/IJCAI.2019/399},
url = {https://mlanthology.org/ijcai/2019/li2019ijcai-multi/}
}