Impact of Label Noise on Learning Complex Features
Abstract
Neural networks trained with stochastic gradient descent exhibit an inductive bias towards simpler decision boundaries, typically converging to a narrow family of functions, and often fail to capture more complex features. This phenomenon raises concerns about the capacity of deep models to adequately learn and represent real-world datasets. Traditional approaches such as explicit regularization, data augmentation, and architectural modifications have largely proven ineffective at encouraging models to learn diverse features. In this work, we investigate the impact of pre-training models with noisy labels on the dynamics of SGD across various architectures and datasets. We show that pre-training in the presence of noise promotes the learning of complex functions and diverse features. Our experiments demonstrate that pre-training with noisy labels encourages gradient descent to find alternate minima that do not rely solely on simple features but instead learn a more complex and broader set of features, without hurting performance.
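The paper's code is not reproduced on this page; as a rough illustration of the kind of noisy-label pre-training setup the abstract describes, symmetric label noise can be injected into a training set before pre-training as sketched below. The corrupt_labels helper, the 20% noise rate, and the CIFAR-10 choice are illustrative assumptions, not the authors' exact protocol.

```python
import torch
from torchvision import datasets, transforms

def corrupt_labels(targets, noise_rate, num_classes, seed=0):
    # Symmetric label noise: a random fraction `noise_rate` of the labels is
    # replaced with classes drawn uniformly at random (a replacement may
    # occasionally coincide with the true label).
    g = torch.Generator().manual_seed(seed)
    targets = targets.clone()
    num_noisy = int(noise_rate * len(targets))
    noisy_idx = torch.randperm(len(targets), generator=g)[:num_noisy]
    targets[noisy_idx] = torch.randint(0, num_classes, (num_noisy,), generator=g)
    return targets

# Hypothetical setup: pre-train the model on the corrupted labels, then
# continue training (or fine-tune) on the clean ones.
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
clean_targets = torch.tensor(train_set.targets)
noisy_targets = corrupt_labels(clean_targets, noise_rate=0.2, num_classes=10)
train_set.targets = noisy_targets.tolist()
```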
Cite

Text

Vashisht et al. "Impact of Label Noise on Learning Complex Features." NeurIPS 2024 Workshops: SciForDL, 2024.

Markdown

[Vashisht et al. "Impact of Label Noise on Learning Complex Features." NeurIPS 2024 Workshops: SciForDL, 2024.](https://mlanthology.org/neuripsw/2024/vashisht2024neuripsw-impact/)

BibTeX
@inproceedings{vashisht2024neuripsw-impact,
title = {{Impact of Label Noise on Learning Complex Features}},
author = {Vashisht, Rahul and Kumar, P Krishna and Govind, Harsha Vardhan and Ramaswamy, Harish Guruprasad},
booktitle = {NeurIPS 2024 Workshops: SciForDL},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/vashisht2024neuripsw-impact/}
}