Mutual Learning for Long-Tailed Recognition
Abstract
Deep neural networks perform well on artificially balanced datasets, but real-world data often follows a long-tailed distribution. Recent studies have focused on developing unbiased classifiers to improve tail-class performance. However, even a carefully learned classifier cannot guarantee solid performance if the underlying representations are of poor quality, and learning high-quality representations in a long-tailed setting is difficult because the features of tail classes easily overfit the training data. In this work, we propose a mutual learning framework that produces high-quality representations in long-tailed settings by exchanging information between networks. We show that the proposed method improves representation quality and establishes new state-of-the-art results on several long-tailed recognition benchmarks, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
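To make "exchanging information between networks" concrete, the sketch below shows the generic mutual-learning objective popularized by deep mutual learning: each of two peer networks minimizes its own cross-entropy plus a KL term pulling its prediction toward the other's. This is an illustrative simplification under assumed names (`mutual_learning_losses` and its arguments are hypothetical), not the authors' exact formulation, which additionally targets long-tailed representation quality.

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_learning_losses(logits_a, logits_b, label):
    """Per-network losses in a two-network mutual-learning setup.

    Each network's loss is its own cross-entropy on the ground-truth label
    plus a KL term that distills its peer's prediction into it
    (a simplified stand-in for the information exchange in the paper).
    """
    pa, pb = softmax(logits_a), softmax(logits_b)
    ce_a = -math.log(pa[label])      # cross-entropy of network A
    ce_b = -math.log(pb[label])      # cross-entropy of network B
    loss_a = ce_a + kl(pb, pa)       # A additionally mimics B
    loss_b = ce_b + kl(pa, pb)       # B additionally mimics A
    return loss_a, loss_b
```

In practice each loss would be backpropagated only through its own network's logits; the peer's output is treated as a fixed (detached) target within each update step.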
Cite
Text
Park et al. "Mutual Learning for Long-Tailed Recognition." Winter Conference on Applications of Computer Vision, 2023.
Markdown
[Park et al. "Mutual Learning for Long-Tailed Recognition." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/park2023wacv-mutual/)
BibTeX
@inproceedings{park2023wacv-mutual,
title = {{Mutual Learning for Long-Tailed Recognition}},
author = {Park, Changhwa and Yim, Junho and Jun, Eunji},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2023},
pages = {2675--2684},
url = {https://mlanthology.org/wacv/2023/park2023wacv-mutual/}
}