AutoLR: Layer-Wise Pruning and Auto-Tuning of Learning Rates in Fine-Tuning of Deep Networks

Abstract

Existing fine-tuning methods use a single learning rate over all layers. In this paper, we first show that the layer-wise weight variations produced by fine-tuning with a single learning rate do not match the well-known notion that lower-level layers extract general features and higher-level layers extract specific features. Based on this observation, we propose an algorithm that improves fine-tuning performance and reduces network complexity through layer-wise pruning and auto-tuning of layer-wise learning rates. The effectiveness of the proposed algorithm is verified by achieving state-of-the-art performance on image retrieval benchmark datasets (CUB-200, Cars-196, Stanford Online Products, and In-Shop). Code is available at https://github.com/youngminPIL/AutoLR.
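
The sketch below is not the authors' AutoLR algorithm; it is only a minimal illustration, assuming a PyTorch/torchvision setup, of how layer-wise learning rates can be assigned to a pretrained backbone via optimizer parameter groups, so that lower (general) layers update more slowly than higher (task-specific) layers during fine-tuning. The base rate and growth factor are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's AutoLR implementation): per-layer
# learning rates for fine-tuning a pretrained ResNet-50 using PyTorch
# optimizer parameter groups.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Group parameters by depth: stem, the four residual stages, then the head.
layer_groups = [
    [model.conv1, model.bn1],
    [model.layer1],
    [model.layer2],
    [model.layer3],
    [model.layer4],
    [model.fc],
]

base_lr = 1e-4   # hypothetical learning rate for the lowest (most general) layers
growth = 2.0     # hypothetical factor by which the rate grows per depth group

param_groups = []
for depth, modules in enumerate(layer_groups):
    params = [p for m in modules for p in m.parameters()]
    param_groups.append({"params": params, "lr": base_lr * (growth ** depth)})

optimizer = torch.optim.SGD(param_groups, momentum=0.9)

# The per-group rates can later be re-tuned between epochs, e.g.:
# for group, new_lr in zip(optimizer.param_groups, updated_rates):
#     group["lr"] = new_lr
```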

Cite

Text

Ro and Choi. "AutoLR: Layer-Wise Pruning and Auto-Tuning of Learning Rates in Fine-Tuning of Deep Networks." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I3.16350

Markdown

[Ro and Choi. "AutoLR: Layer-Wise Pruning and Auto-Tuning of Learning Rates in Fine-Tuning of Deep Networks." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/ro2021aaai-autolr/) doi:10.1609/AAAI.V35I3.16350

BibTeX

@inproceedings{ro2021aaai-autolr,
  title     = {{AutoLR: Layer-Wise Pruning and Auto-Tuning of Learning Rates in Fine-Tuning of Deep Networks}},
  author    = {Ro, Youngmin and Choi, Jin Young},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {2486--2494},
  doi       = {10.1609/AAAI.V35I3.16350},
  url       = {https://mlanthology.org/aaai/2021/ro2021aaai-autolr/}
}