Consistency Aware Robust Learning Under Noisy Labels

Abstract

Deep neural networks (DNNs) often struggle with noisy supervision, a common challenge in real-world datasets where high-quality annotations are scarce. While DNNs tend to memorize noisy labels, the human brain excels at learning in noisy environments by modulating sensitivity to errors based on their magnitude and consistency. Inspired by this, we propose Consistency-Aware Robust Learning (CARoL), which maintains a memory of past predictions and errors to quantify consistency and guide the learning process. CARoL employs a principled mechanism to distinguish clean from noisy samples and modulates the rate of adaptation based on prediction consistency. Furthermore, it integrates multiple learning pathways to fully utilize the dataset, adapting to sample characteristics as training progresses. Our empirical evaluation shows that CARoL achieves high precision in noisy-label detection, enhances robustness, and performs reliably under severe noise, highlighting the potential of biologically inspired approaches for robust learning.
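The abstract's core idea of tracking per-sample error consistency to separate clean from noisy labels can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the EMA-based statistics, and the exponential weighting rule are all illustrative assumptions about how one could modulate per-sample learning based on loss magnitude and consistency.

```python
import numpy as np

class ConsistencyTracker:
    """Hypothetical sketch (not CARoL's actual method): keep an
    exponential moving average (EMA) of each sample's loss and of its
    squared deviation, then down-weight samples whose errors are large
    or erratic, which tend to be noisy-label candidates."""

    def __init__(self, n_samples, momentum=0.9):
        self.momentum = momentum
        self.mean_loss = np.zeros(n_samples)          # EMA of per-sample loss
        self.var_loss = np.zeros(n_samples)           # EMA of squared deviation
        self.seen = np.zeros(n_samples, dtype=bool)   # first-visit flags

    def update(self, idx, losses):
        """Record this round's per-sample losses for samples at `idx`."""
        idx = np.asarray(idx)
        losses = np.asarray(losses, dtype=float)
        first = ~self.seen[idx]
        # Initialize the EMA with the first observed loss
        self.mean_loss[idx[first]] = losses[first]
        self.seen[idx] = True
        m = self.momentum
        dev = losses - self.mean_loss[idx]
        self.mean_loss[idx] = m * self.mean_loss[idx] + (1 - m) * losses
        self.var_loss[idx] = m * self.var_loss[idx] + (1 - m) * dev ** 2

    def weights(self, idx):
        """Per-sample weights in (0, 1]: consistently low-loss samples
        keep a weight near 1; high or inconsistent losses are suppressed."""
        score = self.mean_loss[idx] + np.sqrt(self.var_loss[idx])
        return np.exp(-score)
```

In a training loop, these weights could scale each sample's loss contribution, so samples with a consistently low error history dominate the gradient while erratic, high-error samples are gradually suppressed.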

Cite

Text

Sarfraz et al. "Consistency Aware Robust Learning Under Noisy Labels." Transactions on Machine Learning Research, 2025.

Markdown

[Sarfraz et al. "Consistency Aware Robust Learning Under Noisy Labels." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/sarfraz2025tmlr-consistency/)

BibTeX

@article{sarfraz2025tmlr-consistency,
  title     = {{Consistency Aware Robust Learning Under Noisy Labels}},
  author    = {Sarfraz, Fahad and Zonooz, Bahram and Arani, Elahe},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/sarfraz2025tmlr-consistency/}
}