Learning Deep Bilinear Transformation for Fine-Grained Image Representation

Abstract

Bilinear feature transformation has shown state-of-the-art performance in learning fine-grained image representations. However, the computational cost of learning pairwise interactions between deep feature channels is prohibitively expensive, which prevents this powerful transformation from being used in deep neural networks. In this paper, we propose a deep bilinear transformation (DBT) block, which can be deeply stacked in convolutional neural networks to learn fine-grained image representations. The DBT block uniformly divides input channels into several semantic groups. Since the bilinear transformation can be represented by calculating pairwise interactions within each group, the computational cost is greatly reduced. The output of each block is obtained by aggregating intra-group bilinear features, together with residuals from the entire input features. The proposed network achieves new state-of-the-art results on several fine-grained image recognition benchmarks, including CUB-Bird, Stanford-Car, and FGVC-Aircraft.
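The cost saving from grouping can be illustrated with a minimal numpy sketch: full bilinear pooling produces all C×C pairwise channel interactions, while restricting interactions to each of G channel groups yields only C²/G of them. This is an illustrative sketch of intra-group bilinear pooling, not the paper's exact DBT layer (function names and shapes are assumptions):

```python
import numpy as np

def bilinear_pool(x):
    """Full bilinear pooling: all C*C pairwise channel interactions.

    x: (C, N) array of C channels over N spatial locations.
    Returns a C*C-dimensional pooled feature.
    """
    C, N = x.shape
    return (x @ x.T / N).reshape(-1)

def group_bilinear_pool(x, groups):
    """Intra-group bilinear pooling: pairwise interactions computed
    only within each of `groups` channel groups, giving C*C/groups dims.
    """
    C, N = x.shape
    assert C % groups == 0
    g = C // groups                           # channels per group
    xg = x.reshape(groups, g, N)              # split channels into groups
    pooled = np.einsum('kin,kjn->kij', xg, xg) / N  # (groups, g, g) Gram blocks
    return pooled.reshape(-1)
```

For example, with C = 512 channels and G = 32 groups, full bilinear pooling yields 262,144 interactions, while intra-group pooling yields 8,192 — a 32× reduction, consistent with the abstract's claim that grouping greatly reduces the cost.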

Cite

Text

Zheng et al. "Learning Deep Bilinear Transformation for Fine-Grained Image Representation." Neural Information Processing Systems, 2019.

Markdown

[Zheng et al. "Learning Deep Bilinear Transformation for Fine-Grained Image Representation." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/zheng2019neurips-learning/)

BibTeX

@inproceedings{zheng2019neurips-learning,
  title     = {{Learning Deep Bilinear Transformation for Fine-Grained Image Representation}},
  author    = {Zheng, Heliang and Fu, Jianlong and Zha, Zheng-Jun and Luo, Jiebo},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {4277--4286},
  url       = {https://mlanthology.org/neurips/2019/zheng2019neurips-learning/}
}