CNN-RNN: A Unified Framework for Multi-Label Image Classification

Abstract

While deep convolutional neural networks (CNNs) have achieved great success in single-label image classification, most real-world images contain multiple labels, which can correspond to different objects, scenes, actions, and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and apply ranking or thresholding to the classification results. Although these techniques work reasonably well, they fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both sources of information in a unified framework. Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than state-of-the-art multi-label classification models.
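
The joint image-label embedding described above can be pictured as a CNN image encoder paired with an RNN over the sequence of already-predicted labels, with the next label scored against their combined representation. The following is a minimal PyTorch sketch of that idea; the `CNNRNN` class, layer sizes, and the teacher-forced `forward` are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of a CNN-RNN multi-label classifier (illustrative, not the
# paper's official code). All names and dimensions are assumptions.
import torch
import torch.nn as nn


class CNNRNN(nn.Module):
    def __init__(self, num_labels, embed_dim=64, hidden_dim=128):
        super().__init__()
        # CNN encoder: maps an image to a fixed-length feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.img_proj = nn.Linear(64, embed_dim)
        # Label embedding and RNN: the recurrent state summarizes the labels
        # predicted so far, which is how label dependencies are captured.
        self.label_embed = nn.Embedding(num_labels + 1, embed_dim)  # +1 for a START token
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Joint image-label embedding: project the RNN state into the same
        # space as the image feature and score all candidate labels.
        self.rnn_proj = nn.Linear(hidden_dim, embed_dim)
        self.out = nn.Linear(embed_dim, num_labels)

    def forward(self, images, label_seq):
        """Teacher-forced scoring of a label sequence (training-time view)."""
        img_feat = self.img_proj(self.cnn(images))               # (B, E)
        lbl_emb = self.label_embed(label_seq)                    # (B, T, E)
        rnn_out, _ = self.rnn(lbl_emb)                           # (B, T, H)
        joint = self.rnn_proj(rnn_out) + img_feat.unsqueeze(1)   # (B, T, E)
        return self.out(torch.relu(joint))                       # (B, T, num_labels)


if __name__ == "__main__":
    model = CNNRNN(num_labels=20)
    images = torch.randn(2, 3, 64, 64)
    start = torch.zeros(2, 1, dtype=torch.long)   # START token only
    print(model(images, start).shape)             # torch.Size([2, 1, 20]): next-label scores
```

At inference time one would decode greedily (or with beam search), feeding each predicted label back into the RNN until a stop condition is met; that loop is omitted here for brevity.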

Cite

Text

Wang et al. "CNN-RNN: A Unified Framework for Multi-Label Image Classification." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.251

Markdown

[Wang et al. "CNN-RNN: A Unified Framework for Multi-Label Image Classification." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/wang2016cvpr-cnnrnn/) doi:10.1109/CVPR.2016.251

BibTeX

@inproceedings{wang2016cvpr-cnnrnn,
  title     = {{CNN-RNN: A Unified Framework for Multi-Label Image Classification}},
  author    = {Wang, Jiang and Yang, Yi and Mao, Junhua and Huang, Zhiheng and Huang, Chang and Xu, Wei},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.251},
  url       = {https://mlanthology.org/cvpr/2016/wang2016cvpr-cnnrnn/}
}