Dyadic Transfer Learning for Cross-Domain Image Classification

Abstract

Manual image annotation is expensive and labor-intensive, so in practice we often lack sufficient labeled images to train an effective classifier for a new image classification task. Although multiple labeled image data sets are publicly available for a number of computer vision tasks, simply mixing them does not yield good performance because of the heterogeneous properties and structures of the different data sets. In this paper, we propose a novel transfer learning framework based on nonnegative matrix tri-factorization, called the Dyadic Knowledge Transfer (DKT) approach, which transfers cross-domain image knowledge to new computer vision tasks such as classification. We introduce an efficient iterative algorithm to solve the resulting optimization problem. We evaluate the proposed approach on two benchmark image data sets that simulate real-world cross-domain image classification tasks. Promising experimental results demonstrate the effectiveness of the proposed approach.
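The building block behind DKT is nonnegative matrix tri-factorization, X ≈ F S Gᵀ, where F and G cluster the rows and columns of the data matrix and the small core S captures their dyadic associations. The sketch below is a minimal, generic tri-factorization with the standard multiplicative updates that minimize the Frobenius reconstruction error; it is an illustration only, not the authors' DKT algorithm, which additionally shares factors between the source and target domains to transfer knowledge. All function and variable names here are our own.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Nonnegative matrix tri-factorization: X ~ F @ S @ G.T.

    Minimizes ||X - F S G^T||_F^2 with multiplicative updates,
    which keep all three factors elementwise nonnegative.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k1))   # row-cluster basis (m x k1)
    S = rng.random((k1, k2))  # dyadic association core (k1 x k2)
    G = rng.random((n, k2))   # column-cluster basis (n x k2)
    for _ in range(n_iter):
        # Each update multiplies by (negative gradient part) /
        # (positive gradient part), a standard NMF-style rule.
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G
```

In a transfer-learning setting of the kind the paper describes, one would factorize the source and target data matrices jointly so that shared factors carry label knowledge across domains; the sketch above shows only the single-matrix decomposition.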

Cite

Text

Wang et al. "Dyadic Transfer Learning for Cross-Domain Image Classification." IEEE International Conference on Computer Vision, 2011. doi:10.1109/ICCV.2011.6126287

Markdown

[Wang et al. "Dyadic Transfer Learning for Cross-Domain Image Classification." IEEE International Conference on Computer Vision, 2011.](https://mlanthology.org/iccv/2011/wang2011iccv-dyadic/) doi:10.1109/ICCV.2011.6126287

BibTeX

@inproceedings{wang2011iccv-dyadic,
  title     = {{Dyadic Transfer Learning for Cross-Domain Image Classification}},
  author    = {Wang, Hua and Nie, Feiping and Huang, Heng and Ding, Chris H. Q.},
  booktitle = {IEEE International Conference on Computer Vision},
  year      = {2011},
  pages     = {551--556},
  doi       = {10.1109/ICCV.2011.6126287},
  url       = {https://mlanthology.org/iccv/2011/wang2011iccv-dyadic/}
}