Faster Meta Update Strategy for Noise-Robust Deep Learning

Abstract

It has been shown that deep neural networks are prone to overfitting on biased training data. To address this issue, meta-learning employs a meta model to correct the training bias. Despite their promising performance, current meta-learning approaches are bottlenecked by extremely slow training. In this paper, we introduce a novel Faster Meta Update Strategy (FaMUS) that replaces the most expensive step in the meta-gradient computation with a faster layer-wise approximation. We empirically find that FaMUS yields not only a reasonably accurate but also a low-variance approximation of the meta gradient. We conduct extensive experiments to verify the proposed method on two tasks. Our method saves two-thirds of the training time while maintaining comparable, or even achieving better, generalization performance. In particular, it achieves state-of-the-art performance on both synthetic and realistic noisy labels, and obtains promising results on long-tailed recognition on standard benchmarks. Code is released at https://github.com/youjiangxu/FaMUS.
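
To make the "most expensive step" concrete, below is a minimal sketch of the standard two-level meta-update that FaMUS accelerates, assuming an L2RW-style per-example reweighting meta model and a toy linear classifier. This is not the authors' released implementation; the variable names (`eps`, `W_virt`) and the clamp-and-normalize weight update are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: a noisy training batch and a small clean meta batch.
x_train, y_train = torch.randn(32, 10), torch.randint(0, 2, (32,))
x_meta,  y_meta  = torch.randn(8, 10),  torch.randint(0, 2, (8,))

# A one-layer classifier keeps the functional (virtual) update simple.
W = torch.zeros(2, 10, requires_grad=True)
b = torch.zeros(2, requires_grad=True)

# Meta parameters: one weight per training example (L2RW-style), init to zero.
eps = torch.zeros(32, requires_grad=True)

lr = 0.1

# 1) Virtual step: weighted training loss, differentiable update of (W, b).
logits = x_train @ W.t() + b
losses = F.cross_entropy(logits, y_train, reduction="none")
weighted_loss = (eps * losses).mean()
gW, gb = torch.autograd.grad(weighted_loss, (W, b), create_graph=True)
W_virt, b_virt = W - lr * gW, b - lr * gb

# 2) Meta step: clean-set loss on the virtual parameters, backpropagated
#    through the virtual update to obtain the meta gradient w.r.t. eps.
#    This second-order pass over every layer is the expensive step that
#    FaMUS replaces with a cheaper layer-wise approximation.
meta_loss = F.cross_entropy(x_meta @ W_virt.t() + b_virt, y_meta)
meta_grad = torch.autograd.grad(meta_loss, eps)[0]

# 3) Turn the meta gradient into sample weights (clamp and normalize,
#    a common convention in reweighting methods).
with torch.no_grad():
    w_new = torch.clamp(-meta_grad, min=0)
    w_new = w_new / (w_new.sum() + 1e-8)
print(w_new[:5])
```

In a deep network, step 2 requires second-order gradients through all layers of the virtual update; the paper's layer-wise approximation avoids recomputing this full pass, which is where the reported training-time savings come from.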

Cite

Text

Xu et al. "Faster Meta Update Strategy for Noise-Robust Deep Learning." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00021

Markdown

[Xu et al. "Faster Meta Update Strategy for Noise-Robust Deep Learning." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/xu2021cvpr-faster/) doi:10.1109/CVPR46437.2021.00021

BibTeX

@inproceedings{xu2021cvpr-faster,
  title     = {{Faster Meta Update Strategy for Noise-Robust Deep Learning}},
  author    = {Xu, Youjiang and Zhu, Linchao and Jiang, Lu and Yang, Yi},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {144--153},
  doi       = {10.1109/CVPR46437.2021.00021},
  url       = {https://mlanthology.org/cvpr/2021/xu2021cvpr-faster/}
}