Differential Networks for Visual Question Answering

Abstract

The task of Visual Question Answering (VQA) has emerged in recent years owing to its potential applications. To address the VQA task, a model must fuse feature elements from images and questions efficiently. Existing models fuse an image feature element v_i and a question feature element q_i directly, e.g., via an element-wise product v_i q_i. Such solutions largely ignore two key points: 1) whether v_i and q_i lie in the same space, and 2) how to reduce the observation noise in v_i and q_i. We argue that differences between feature elements, such as (v_i − v_j) and (q_i − q_j), are more likely to lie in the same space, and that the difference operation helps reduce observation noise. To this end, we first propose Differential Networks (DN), a novel plug-and-play module that computes differences between pair-wise feature elements. With DN as a tool, we then propose DN-based Fusion (DF), a novel model for the VQA task. We achieve state-of-the-art results on four publicly available datasets. Ablation studies also show the effectiveness of the difference operations in the DF model.
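The core idea of the abstract — fusing pairwise differences (v_i − v_j) and (q_i − q_j) instead of raw elements — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's learned DN module (which involves trainable weights and pooling); it only shows the difference-then-fuse structure on two small feature vectors:

```python
import numpy as np

def pairwise_differences(x):
    """Return the matrix D with D[i, j] = x[i] - x[j] for a 1-D feature vector x."""
    return x[:, None] - x[None, :]

def differential_fusion(v, q):
    """Toy fusion: element-wise product of the two pairwise-difference maps.

    Hypothetical simplification of the DF model; the actual model learns
    weighted combinations of these differences.
    """
    dv = pairwise_differences(v)  # (v_i - v_j) for all pairs
    dq = pairwise_differences(q)  # (q_i - q_j) for all pairs
    return dv * dq

# Toy image and question feature vectors (illustrative values only)
v = np.array([1.0, 2.0, 4.0])
q = np.array([0.5, 1.5, 3.0])
fused = differential_fusion(v, q)
```

Note that each difference map is antisymmetric (D[i, j] = −D[j, i]), so their element-wise product is symmetric with a zero diagonal; any constant offset (e.g., a shared observation bias) added to v or q cancels out, which is the noise-reduction intuition from the abstract.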

Cite

Text

Wu et al. "Differential Networks for Visual Question Answering." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33018997

Markdown

[Wu et al. "Differential Networks for Visual Question Answering." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/wu2019aaai-differential/) doi:10.1609/AAAI.V33I01.33018997

BibTeX

@inproceedings{wu2019aaai-differential,
  title     = {{Differential Networks for Visual Question Answering}},
  author    = {Wu, Chenfei and Liu, Jinlai and Wang, Xiaojie and Li, Ruifan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {8997--9004},
  doi       = {10.1609/AAAI.V33I01.33018997},
  url       = {https://mlanthology.org/aaai/2019/wu2019aaai-differential/}
}