Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning

Abstract

Deep learning has achieved tremendous success in computer vision, natural language processing, and even visual-semantic learning, but this success typically relies on a huge amount of labeled training data. Nevertheless, the goal of human-level intelligence is to enable a model to quickly gain an in-depth understanding from a small number of samples, especially under the heterogeneity of multi-modal scenarios such as visual question answering and image captioning. In this paper, we study few-shot visual-semantic learning and present the Hierarchical Graph ATtention network (HGAT). This two-stage network models the intra- and inter-modal relationships with limited image-text samples. The main contributions of HGAT can be summarized as follows: 1) it sheds light on tackling few-shot multi-modal learning problems, focusing primarily, but not exclusively, on the visual and semantic modalities, through better exploitation of the intra-modal relationships of each modality and an attention-based co-learning framework between modalities built on a hierarchical graph-based architecture; 2) it achieves superior performance on both visual question answering and image captioning in the few-shot setting; 3) it can be easily extended to the semi-supervised setting where image-text samples are partially unlabeled. Extensive experiments show that HGAT delivers state-of-the-art performance on three widely used benchmarks across two visual-semantic learning tasks.
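The abstract's core building block, attention over graph-structured features, can be illustrated with a minimal sketch. This is not the HGAT architecture from the paper; it is a generic masked graph-attention layer (in the style of Veličković et al.'s GAT) written in plain NumPy, and every name here (`h`, `adj`, `W`, `a`) is chosen for illustration rather than taken from the paper.

```python
import numpy as np

def graph_attention_layer(h, adj, W, a, leaky_slope=0.2):
    """One generic graph-attention layer (illustrative, not HGAT itself).

    h   : (N, F) node features (e.g. per-region or per-word embeddings)
    adj : (N, N) binary adjacency matrix, 1 where an edge exists
    W   : (F, F2) shared linear projection
    a   : (2*F2,) attention vector scoring concatenated node pairs
    Returns (N, F2) attention-aggregated node features.
    """
    z = h @ W                                   # project node features
    N = z.shape[0]
    # Attention logits e_ij = LeakyReLU(a^T [z_i || z_j]) for every pair.
    logits = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            e = a @ np.concatenate([z[i], z[j]])
            logits[i, j] = e if e > 0 else leaky_slope * e
    # Mask out non-edges, then softmax over each node's neighborhood.
    logits = np.where(adj > 0, logits, -1e9)
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ z                            # weighted neighbor aggregation
```

In a hierarchical design such as the one the abstract describes, a layer like this would first be applied within each modality's graph (intra-modal) and then across a joint graph linking image and text nodes (inter-modal); here only the single-layer mechanism is shown.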

Cite

Text

Yin et al. "Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00218

Markdown

[Yin et al. "Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/yin2021iccv-hierarchical/) doi:10.1109/ICCV48922.2021.00218

BibTeX

@inproceedings{yin2021iccv-hierarchical,
  title     = {{Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning}},
  author    = {Yin, Chengxiang and Wu, Kun and Che, Zhengping and Jiang, Bo and Xu, Zhiyuan and Tang, Jian},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {2177--2186},
  doi       = {10.1109/ICCV48922.2021.00218},
  url       = {https://mlanthology.org/iccv/2021/yin2021iccv-hierarchical/}
}