Combinatorial Inference Against Label Noise
Abstract
Label noise is a critical factor that significantly degrades the generalization performance of deep neural networks. To handle the label noise issue in a principled way, we propose a classification framework that constructs multiple models in heterogeneous coarse-grained meta-class spaces and makes a joint inference over the trained models to obtain the final predictions in the original (base) class space. Our approach reduces the noise level simply by constructing meta-classes, and improves accuracy via combinatorial inference over multiple constituent classifiers. Since the proposed framework has distinct and complementary properties for the given problem, we can incorporate additional off-the-shelf learning algorithms to improve accuracy further. We also introduce techniques for organizing multiple heterogeneous meta-class sets using $k$-means clustering and for identifying a desirable subset that leads to compact models. Our extensive experiments demonstrate outstanding performance in terms of accuracy and efficiency compared to state-of-the-art methods under various synthetic noise configurations and on a real-world noisy dataset.
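The core idea in the abstract — train classifiers over coarse meta-class partitions and combine their posteriors to score base classes — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random partitions stand in for the paper's $k$-means-derived meta-class sets, and the toy Dirichlet posteriors stand in for trained constituent classifiers; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
C, M, K = 10, 3, 4  # base classes, meta-class partitions, meta-classes per partition

# Hypothetical random partitions mapping each base class to a meta-class
# (the paper instead derives these via k-means clustering of class representations).
partitions = [rng.integers(0, K, size=C) for _ in range(M)]

def combinatorial_inference(meta_probs, partitions):
    """Combine per-partition meta-class posteriors into base-class log-scores.

    meta_probs: list of M arrays, each of shape (K,), one constituent
        classifier's posterior over its meta-classes.
    partitions: list of M arrays, each of shape (C,), mapping each base
        class to its meta-class index in that partition.
    Returns unnormalized log-scores over the C base classes.
    """
    num_base = len(partitions[0])
    log_scores = np.zeros(num_base)
    for probs, part in zip(meta_probs, partitions):
        # Each base class inherits the posterior of the meta-class containing it;
        # multiplying across partitions (summing logs) yields the joint score.
        log_scores += np.log(probs[part] + 1e-12)
    return log_scores

# Toy posteriors standing in for the M trained meta-class classifiers.
meta_probs = [rng.dirichlet(np.ones(K)) for _ in range(M)]
scores = combinatorial_inference(meta_probs, partitions)
pred = int(np.argmax(scores))
```

With heterogeneous partitions, base classes sharing a meta-class in one partition are typically separated in another, so the intersection of high-probability meta-classes across partitions singles out individual base classes.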
Cite
Text
Seo et al. "Combinatorial Inference Against Label Noise." Neural Information Processing Systems, 2019.
Markdown
[Seo et al. "Combinatorial Inference Against Label Noise." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/seo2019neurips-combinatorial/)
BibTeX
@inproceedings{seo2019neurips-combinatorial,
title = {{Combinatorial Inference Against Label Noise}},
author = {Seo, Paul Hongsuck and Kim, Geeho and Han, Bohyung},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {1173--1183},
url = {https://mlanthology.org/neurips/2019/seo2019neurips-combinatorial/}
}