MoPro: Webly Supervised Learning with Momentum Prototypes

Abstract

We propose a webly-supervised representation learning method that does not suffer from the annotation unscalability of supervised learning, nor the computation unscalability of self-supervised learning. Most existing works on webly-supervised representation learning adopt a vanilla supervised learning method without accounting for the prevalent noise in the training data, whereas most prior methods in learning with label noise are less effective for real-world large-scale noisy data. We propose momentum prototypes (MoPro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning. MoPro achieves state-of-the-art performance on WebVision, a weakly-labeled noisy dataset. MoPro also shows superior performance when the pretrained model is transferred to down-stream image classification and detection tasks. It outperforms the ImageNet supervised pretrained model by +10.5 on 1-shot classification on VOC, and outperforms the best self-supervised pretrained model by +17.3 when finetuned on 1% of ImageNet labeled samples. Furthermore, MoPro is more robust to distribution shifts. Code and pretrained models are available at https://github.com/salesforce/MoPro.
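The abstract names the core mechanism, momentum prototypes, without spelling out how it operates. The sketch below is a minimal, hypothetical illustration of what such a component could look like: each class keeps a unit-norm prototype updated as an exponential moving average of sample embeddings, and similarities to the prototypes drive online label correction and out-of-distribution filtering. The class and function names, hyperparameter values, and the exact correction rule here are illustrative assumptions, not the authors' implementation; the linked repository contains the actual code.

# Illustrative sketch only (not the authors' code): momentum-prototype update and
# prototype-based pseudo-label correction, following the abstract's high-level description.
# Momentum, temperature, and threshold values are assumed for illustration.
import numpy as np

def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

class MomentumPrototypes:
    """One unit-norm prototype per class, updated as an exponential moving average."""
    def __init__(self, num_classes, dim, momentum=0.999, temperature=0.1, threshold=0.8):
        self.protos = l2_normalize(np.random.randn(num_classes, dim))
        self.m = momentum          # EMA momentum (assumed value)
        self.t = temperature       # softmax temperature (assumed value)
        self.thresh = threshold    # confidence threshold for keeping or correcting a label

    def correct_label(self, z, web_label):
        """Return a (possibly corrected) label, or None for an out-of-distribution sample."""
        sim = self.protos @ z                 # cosine similarity to every class prototype
        probs = np.exp(sim / self.t)
        probs /= probs.sum()
        best = int(np.argmax(probs))
        if probs[web_label] >= self.thresh:   # noisy web label looks reliable: keep it
            return web_label
        if probs[best] >= self.thresh:        # another class is confident: correct the label
            return best
        return None                           # no confident class: treat as OOD and drop

    def update(self, z, label):
        """EMA update of the prototype for `label` using the normalized embedding z."""
        self.protos[label] = l2_normalize(self.m * self.protos[label] + (1 - self.m) * z)

# Toy usage: one embedding with a noisy web label.
mp = MomentumPrototypes(num_classes=10, dim=128)
z = l2_normalize(np.random.randn(128))
label = mp.correct_label(z, web_label=3)
if label is not None:
    mp.update(z, label)

This simplified rule uses only prototype similarities; the full training procedure in the paper involves additional components (such as the classifier's own predictions and a contrastive loss) that are omitted here.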

Cite

Text

Li et al. "MoPro: Webly Supervised Learning with Momentum Prototypes." International Conference on Learning Representations, 2021.

Markdown

[Li et al. "MoPro: Webly Supervised Learning with Momentum Prototypes." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/li2021iclr-mopro/)

BibTeX

@inproceedings{li2021iclr-mopro,
  title     = {{MoPro: Webly Supervised Learning with Momentum Prototypes}},
  author    = {Li, Junnan and Xiong, Caiming and Hoi, Steven},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/li2021iclr-mopro/}
}