Learning from Noisy Data with Robust Representation Learning
Abstract
Learning from noisy data has attracted much attention, where most methods focus on label noise. In this work, we propose a new learning framework which simultaneously addresses three types of noise commonly seen in real-world data: label noise, out-of-distribution inputs, and input corruption. In contrast to most existing methods, we combat noise by learning robust representations. Specifically, we embed images into a low-dimensional subspace, and regularize the geometric structure of the subspace with robust contrastive learning, which includes an unsupervised consistency loss and a supervised mixup prototypical loss. We also propose a new noise cleaning method which leverages the learned representation to enforce a smoothness constraint on neighboring samples. Experiments on multiple benchmarks demonstrate the state-of-the-art performance of our method and the robustness of the learned representation. Code is available at https://github.com/salesforce/RRL/.
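The abstract outlines three components: an unsupervised consistency loss between augmented views, a supervised mixup prototypical loss in the low-dimensional embedding space, and a noise-cleaning step that smooths labels over nearest neighbors. The snippet below is a minimal, illustrative PyTorch sketch of what such components might look like; the function names, temperature value, and k-nearest-neighbor averaging scheme are assumptions for illustration, not the paper's exact formulation (see the official repository for the actual implementation).

```python
import torch
import torch.nn.functional as F

def consistency_loss(z_weak, z_strong):
    """Illustrative unsupervised consistency loss: pull the embeddings of two
    augmented views of the same image together on the unit hypersphere."""
    z_weak = F.normalize(z_weak, dim=1)
    z_strong = F.normalize(z_strong, dim=1)
    # negative cosine similarity between paired views
    return -(z_weak * z_strong).sum(dim=1).mean()

def mixup_prototypical_loss(z_mixed, lam, labels_a, labels_b, prototypes,
                            temperature=0.3):
    """Illustrative supervised mixup prototypical loss: embeddings of mixed
    inputs are pushed toward the corresponding mixture of class prototypes.
    `lam`, `labels_a`, `labels_b` come from a standard mixup of the inputs;
    the temperature is an assumed hyperparameter."""
    z_mixed = F.normalize(z_mixed, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = z_mixed @ prototypes.t() / temperature
    return (lam * F.cross_entropy(logits, labels_a)
            + (1 - lam) * F.cross_entropy(logits, labels_b))

def knn_label_cleaning(embeddings, soft_labels, k=10):
    """Illustrative noise cleaning via neighborhood smoothness: each sample's
    label distribution is re-estimated by averaging the soft labels of its
    k nearest neighbors in the learned embedding space."""
    embeddings = F.normalize(embeddings, dim=1)
    sim = embeddings @ embeddings.t()
    sim.fill_diagonal_(-float('inf'))          # exclude each sample itself
    _, nn_idx = sim.topk(k, dim=1)             # indices of k nearest neighbors
    return soft_labels[nn_idx].mean(dim=1)     # (N, C) smoothed label estimates
```

In this sketch, enforcing the smoothness constraint amounts to replacing each sample's (possibly noisy) label distribution with the average over its neighbors, so samples whose given labels disagree with their neighborhood can be down-weighted or relabeled.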
Cite
Text
Li et al. "Learning from Noisy Data with Robust Representation Learning." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00935
Markdown
[Li et al. "Learning from Noisy Data with Robust Representation Learning." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/li2021iccv-learning/) doi:10.1109/ICCV48922.2021.00935
BibTeX
@inproceedings{li2021iccv-learning,
title = {{Learning from Noisy Data with Robust Representation Learning}},
author = {Li, Junnan and Xiong, Caiming and Hoi, Steven C.H.},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {9485-9494},
doi = {10.1109/ICCV48922.2021.00935},
url = {https://mlanthology.org/iccv/2021/li2021iccv-learning/}
}