One Step Learning, One Step Review

Abstract

Visual fine-tuning has garnered significant attention with the rise of pre-trained vision models. The current prevailing method, full fine-tuning, suffers from knowledge forgetting as it focuses solely on fitting the downstream training set. In this paper, we propose a novel weight rollback-based fine-tuning method called OLOR (One step Learning, One step Review). OLOR combines fine-tuning with the optimizer, incorporating a weight rollback term into the weight update at each step. This keeps the weights of the upstream and downstream models within a consistent range, effectively mitigating knowledge forgetting and enhancing fine-tuning performance. In addition, a layer-wise penalty is presented that employs penalty decay and a diversified decay rate to adjust the weight rollback level of each layer, adapting to varying downstream tasks. Through extensive experiments on tasks such as image classification, object detection, semantic segmentation, and instance segmentation, we demonstrate the general applicability and state-of-the-art performance of our proposed OLOR. Code is available at https://github.com/rainbow-xiao/OLOR-AAAI-2024.
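
The abstract describes the core idea as a weight rollback term folded into the optimizer's per-step update, scaled by a layer-wise penalty. The sketch below is a minimal illustration of that idea only, not the authors' implementation: the plain SGD base update, the `rollback_strength` and `penalty_decay` hyperparameters, and the geometric per-layer decay schedule are all assumptions made for illustration. See the official repository linked above for the paper's actual formulation.

```python
import torch

class SGDWithWeightRollback(torch.optim.Optimizer):
    """Illustrative optimizer: one step of learning (gradient descent)
    followed by one step of review (rollback toward pre-trained weights).
    The per-layer rollback strength follows a hypothetical geometric
    decay schedule; the paper's exact rule may differ."""

    def __init__(self, params, pretrained_params, lr=1e-3,
                 rollback_strength=1e-4, penalty_decay=0.9):
        # Snapshot of the upstream (pre-trained) weights to roll back toward.
        pretrained = [p.detach().clone() for p in pretrained_params]
        defaults = dict(lr=lr, rollback_strength=rollback_strength,
                        penalty_decay=penalty_decay)
        super().__init__(params, defaults)
        # Pair each trainable parameter with its pre-trained copy and an index
        # used as a stand-in for layer depth.
        flat_params = [p for group in self.param_groups for p in group["params"]]
        for i, (p, p0) in enumerate(zip(flat_params, pretrained)):
            self.state[p]["pretrained"] = p0
            self.state[p]["layer_idx"] = i

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                # One step learning: plain gradient-descent update.
                p.add_(p.grad, alpha=-group["lr"])
                # One step review: pull the weights back toward their
                # pre-trained values, with a penalty that decays per layer.
                st = self.state[p]
                penalty = group["rollback_strength"] * (
                    group["penalty_decay"] ** st["layer_idx"])
                p.add_(st["pretrained"] - p, alpha=penalty)
        return loss
```

Usage would look like `opt = SGDWithWeightRollback(model.parameters(), pretrained_model.parameters())`, where `pretrained_model` holds the frozen upstream checkpoint and the two parameter iterables are assumed to be in the same order.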

Cite

Text

Huang et al. "One Step Learning, One Step Review." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I11.29159

Markdown

[Huang et al. "One Step Learning, One Step Review." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/huang2024aaai-one/) doi:10.1609/AAAI.V38I11.29159

BibTeX

@inproceedings{huang2024aaai-one,
  title     = {{One Step Learning, One Step Review}},
  author    = {Huang, Xiaolong and Li, Qiankun and Li, Xueran and Gao, Xuesong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {12644--12652},
  doi       = {10.1609/AAAI.V38I11.29159},
  url       = {https://mlanthology.org/aaai/2024/huang2024aaai-one/}
}