Why Fine-Grained Labels in Pretraining Benefit Generalization?
Abstract
Recent studies show that pretraining a deep neural network with fine-grained labeled data, followed by fine-tuning on coarse-labeled data for downstream tasks, often yields better generalization than pretraining with coarse-labeled data. While there is ample empirical evidence supporting this, the theoretical justification remains an open problem. This paper addresses this gap by introducing a "hierarchical multi-view" structure to confine the input data distribution. Under this framework, we prove that: 1) coarse-grained pretraining only allows a neural network to learn the common features well, while 2) fine-grained pretraining helps the network learn the rare features in addition to the common ones, leading to improved accuracy on hard downstream test samples.
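To make the setup concrete, below is a minimal sketch (not taken from the paper) of the two-stage pipeline the abstract describes: pretrain a classifier on fine-grained labels, then reuse its backbone and fine-tune it on coarse labels. The backbone architecture, dimensions, and the data loaders (fine_label_loader, coarse_label_loader) are illustrative assumptions, written here in PyTorch.

# Illustrative sketch of the pretrain-then-fine-tune setup; all names,
# shapes, and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn

NUM_FINE, NUM_COARSE, FEAT_DIM = 100, 10, 512  # assumed label granularities

# Shared feature extractor; the same object is reused across both stages.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, FEAT_DIM), nn.ReLU())

def train(model, loader, epochs=1, lr=1e-3):
    """Standard supervised training loop with cross-entropy loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Stage 1: pretrain with fine-grained labels (e.g. 100 subclasses).
fine_model = nn.Sequential(backbone, nn.Linear(FEAT_DIM, NUM_FINE))
# train(fine_model, fine_label_loader)      # loader assumed to exist

# Stage 2: keep the pretrained backbone, swap the head, fine-tune on coarse labels.
coarse_model = nn.Sequential(backbone, nn.Linear(FEAT_DIM, NUM_COARSE))
# train(coarse_model, coarse_label_loader)  # loader assumed to exist

The paper's theoretical claim concerns exactly this reuse of the pretrained backbone: under the hierarchical multi-view assumption, the fine-grained pretraining stage is what lets the backbone pick up rare features that the coarse-only alternative misses.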
Cite
Text
Hong et al. "Why Fine-Grained Labels in Pretraining Benefit Generalization?." Transactions on Machine Learning Research, 2024.

Markdown
[Hong et al. "Why Fine-Grained Labels in Pretraining Benefit Generalization?." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/hong2024tmlr-finegrained/)

BibTeX
@article{hong2024tmlr-finegrained,
  title   = {{Why Fine-Grained Labels in Pretraining Benefit Generalization?}},
  author  = {Hong, Guan Zhe and Cui, Yin and Fuxman, Ariel and Chan, Stanley H. and Luo, Enming},
  journal = {Transactions on Machine Learning Research},
  year    = {2024},
  url     = {https://mlanthology.org/tmlr/2024/hong2024tmlr-finegrained/}
}