M2SGD: Learning to Learn Important Weights

Abstract

Meta-learning concerns rapid knowledge acquisition. One popular approach casts optimisation as a learning problem, and learnt neural optimisers have been shown to update base learners more quickly than their handcrafted counterparts. In this paper, we learn an optimisation rule that sparsely updates the learner parameters and removes redundant weights. We present Masked Meta-SGD (M2SGD), a neural optimiser which is not only capable of updating learners quickly, but also capable of removing 83.71% of the weights for ResNet20s. We release our code at https://github.com/Nic5472K/CLVISION2020_CVPR_M2SGD.
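The core idea of a masked, learnt update rule can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's implementation: the per-parameter rates `alpha` and the binary `mask` are shown as fixed arrays here, whereas in M2SGD they would be produced by a trained neural optimiser.

```python
import numpy as np

# Hedged sketch of a masked per-parameter update in the spirit of Meta-SGD.
# In M2SGD, `alpha` (per-parameter rates) and `mask` (sparse update mask)
# would be outputs of a learnt neural optimiser; here they are placeholders.

rng = np.random.default_rng(0)
theta = rng.normal(size=5)              # base learner parameters
grad = rng.normal(size=5)               # gradient of the learner's loss
alpha = np.full(5, 0.1)                 # per-parameter learning rates
mask = np.array([1.0, 0.0, 1.0, 0.0, 1.0])  # 0 => weight treated as redundant

# Masked update: parameters with mask 0 receive no update and are
# candidates for removal (pruning).
theta_new = theta - mask * alpha * grad

# Masked-out entries are untouched by the update.
print(np.allclose(theta_new[mask == 0], theta[mask == 0]))
```

Zeroing entries of the update (and, ultimately, of the weights themselves) is what allows the optimiser to discard a large fraction of redundant parameters while still adapting the rest quickly.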

Cite

Text

Kuo et al. "M2SGD: Learning to Learn Important Weights." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00126

Markdown

[Kuo et al. "M2SGD: Learning to Learn Important Weights." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/kuo2020cvprw-m2sgd/) doi:10.1109/CVPRW50498.2020.00126

BibTeX

@inproceedings{kuo2020cvprw-m2sgd,
  title     = {{M2SGD: Learning to Learn Important Weights}},
  author    = {Kuo, Nicholas I-Hsien and Harandi, Mehrtash and Fourrier, Nicolas and Walder, Christian and Ferraro, Gabriela and Suominen, Hanna},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {957--964},
  doi       = {10.1109/CVPRW50498.2020.00126},
  url       = {https://mlanthology.org/cvprw/2020/kuo2020cvprw-m2sgd/}
}