Layover Intermediate Layer for Multi-Label Classification in Efficient Transfer Learning

Abstract

Transfer Learning (TL) is a promising technique for improving the performance of a target task by transferring the knowledge of models trained on relevant source datasets. With the advent of advanced deep models, various methods of exploiting large-scale pre-trained deep models have come into the limelight. However, for multi-label classification tasks, TL approaches suffer from performance degradation when predicting multiple objects in an image with significant size differences. Since such hard instances contain barely perceptible objects, most pre-trained models lose information about them during downsampling. For these hard instances, this paper proposes a simple but effective classifier for multiple predictions that uses the hidden representations of a fixed backbone. To this end, we mix the pre-logit with an intermediate representation via a learnable scale. We show that our method is as effective as fine-tuning while requiring few additional parameters, and is particularly advantageous for hard instances.
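The core idea in the abstract — blending the backbone's pre-logit features with an intermediate-layer representation through a learnable scale — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the scalar blend, and the convex-combination form are all assumptions made for clarity.

```python
def layover_mix(pre_logit, intermediate, alpha):
    """Hypothetical sketch of mixing two feature vectors with a learnable scale.

    pre_logit: features just before the classifier head (list of floats).
    intermediate: a hidden representation from an earlier backbone layer,
        assumed here to be projected to the same dimension.
    alpha: a learnable scalar in [0, 1] controlling the blend; in training
        it would be a parameter updated by gradient descent.
    """
    # Convex combination of the two representations, element-wise.
    return [alpha * p + (1.0 - alpha) * h
            for p, h in zip(pre_logit, intermediate)]


# Example: an even blend (alpha = 0.5) averages the two feature vectors.
mixed = layover_mix([1.0, 2.0], [3.0, 4.0], 0.5)
```

The mixed vector would then be fed to the multi-label classifier head in place of the plain pre-logit, letting fine-grained information from the intermediate layer survive for small objects.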

Cite

Text

Eom et al. "Layover Intermediate Layer for Multi-Label Classification in Efficient Transfer Learning." NeurIPS 2022 Workshops: HITY, 2022.

Markdown

[Eom et al. "Layover Intermediate Layer for Multi-Label Classification in Efficient Transfer Learning." NeurIPS 2022 Workshops: HITY, 2022.](https://mlanthology.org/neuripsw/2022/eom2022neuripsw-layover/)

BibTeX

@inproceedings{eom2022neuripsw-layover,
  title     = {{Layover Intermediate Layer for Multi-Label Classification in Efficient Transfer Learning}},
  author    = {Eom, Seongha and Kim, Taehyeon and Yun, Se-Young},
  booktitle = {NeurIPS 2022 Workshops: HITY},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/eom2022neuripsw-layover/}
}