Exploiting Convolution Filter Patterns for Transfer Learning

Abstract

In this paper, we introduce a new regularization technique for transfer learning. The proposed approach captures statistical relationships among the convolution filters learned by a well-trained network and transfers this knowledge to another network. Since the convolution filters of prevalent deep Convolutional Neural Network (CNN) models share many similar patterns, we model such correlations with Gaussian Mixture Models (GMMs) and transfer them through a regularization term in order to speed up learning. We have conducted extensive experiments on the CIFAR10, Places2, and CM-Places datasets to assess the generalizability, task transferability, and cross-model transferability of the proposed approach, respectively. The experimental results show that feature representations are learned and transferred efficiently through the proposed statistical regularization scheme. Moreover, our method is architecture-independent and applicable to a variety of CNN architectures.
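
The abstract describes the method only at a high level; the following is a minimal sketch of one plausible reading: fit a GMM to the flattened filters of a pretrained source network and use the negative log-likelihood of the target network's filters under that GMM as a regularization term added to the task loss. The function names (`fit_filter_gmm`, `gmm_regularizer`) and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a GMM-based filter regularizer (not the paper's code).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_filter_gmm(source_filters, n_components=5, seed=0):
    """Fit a GMM to source filters of shape (num_filters, k, k), flattened to vectors."""
    flat = source_filters.reshape(source_filters.shape[0], -1)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=seed)
    gmm.fit(flat)
    return gmm

def gmm_regularizer(gmm, target_filters):
    """Regularization term: average negative log-likelihood of the target
    network's filters under the source-filter GMM (added to the task loss)."""
    flat = target_filters.reshape(target_filters.shape[0], -1)
    return -gmm.score(flat)  # score() returns the mean log-likelihood per sample

# Toy usage with random 3x3 filters standing in for real learned filters.
rng = np.random.default_rng(0)
source = rng.normal(size=(64, 3, 3))   # filters from a well-trained network
target = rng.normal(size=(32, 3, 3))   # filters being learned in a new network
gmm = fit_filter_gmm(source)
print(f"GMM regularization term: {gmm_regularizer(gmm, target):.3f}")
```

In this reading, the regularizer would be weighted and summed with the classification loss during training of the target network, encouraging its filters to stay close to the filter distribution captured from the source network.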

Cite

Text

Aygun et al. "Exploiting Convolution Filter Patterns for Transfer Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.309

Markdown

[Aygun et al. "Exploiting Convolution Filter Patterns for Transfer Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/aygun2017iccvw-exploiting/) doi:10.1109/ICCVW.2017.309

BibTeX

@inproceedings{aygun2017iccvw-exploiting,
  title     = {{Exploiting Convolution Filter Patterns for Transfer Learning}},
  author    = {Aygun, Mehmet and Aytar, Yusuf and Ekenel, Hazim Kemal},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {2674--2680},
  doi       = {10.1109/ICCVW.2017.309},
  url       = {https://mlanthology.org/iccvw/2017/aygun2017iccvw-exploiting/}
}