Visual Reasoning with Multi-Hop Feature Modulation
Abstract
Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question-answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. By alternating between attending to the language input and generating FiLM layer parameters, this approach is better able to scale to settings with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation significantly outperforms prior state-of-the-art on the GuessWhat?! visual dialogue task and matches state-of-the-art on the ReferIt object retrieval task, and we provide additional qualitative analysis.
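To make the two ideas in the abstract concrete, below is a minimal sketch in PyTorch of (1) FiLM itself, i.e., per-channel scaling and shifting of a convolutional feature map, and (2) a multi-hop generator in which a recurrent controller alternates between attending over language token embeddings and emitting the FiLM parameters for the next layer. All names, sizes, and architectural details here are illustrative assumptions, not the paper's released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def film(feature_map, gamma, beta):
    # feature_map: (batch, C, H, W); gamma, beta: (batch, C).
    # Per-channel scale and shift, broadcast over the spatial dimensions.
    return gamma[:, :, None, None] * feature_map + beta[:, :, None, None]

class MultiHopFiLMGenerator(nn.Module):
    # Illustrative multi-hop generator: a recurrent controller alternates
    # between attending over language token embeddings and emitting the
    # (gamma, beta) parameters for the next FiLM layer.
    def __init__(self, token_dim, hidden_dim, num_channels, num_hops):
        super().__init__()
        self.num_hops = num_hops
        self.attn_score = nn.Linear(token_dim + hidden_dim, 1)
        self.cell = nn.GRUCell(token_dim, hidden_dim)
        self.to_params = nn.ModuleList(
            nn.Linear(hidden_dim, 2 * num_channels) for _ in range(num_hops)
        )

    def forward(self, tokens):
        # tokens: (batch, seq_len, token_dim), e.g. embedded dialogue tokens.
        batch, seq_len, _ = tokens.shape
        h = tokens.new_zeros(batch, self.cell.hidden_size)
        params = []
        for hop in range(self.num_hops):
            # Attend over the language input, conditioned on the controller state.
            expanded = h[:, None, :].expand(-1, seq_len, -1)
            scores = self.attn_score(torch.cat([tokens, expanded], dim=-1)).squeeze(-1)
            weights = F.softmax(scores, dim=-1)
            context = (weights[:, :, None] * tokens).sum(dim=1)
            # Update the controller and emit this hop's FiLM parameters.
            h = self.cell(context, h)
            gamma, beta = self.to_params[hop](h).chunk(2, dim=-1)
            params.append((gamma, beta))
        return params

# Usage: modulate a 64-channel feature map with the first hop's parameters.
generator = MultiHopFiLMGenerator(token_dim=128, hidden_dim=256, num_channels=64, num_hops=4)
tokens = torch.randn(2, 10, 128)          # a batch of 10-token sentences
features = torch.randn(2, 64, 14, 14)     # a conv feature map
gamma, beta = generator(tokens)[0]
modulated = film(features, gamma, beta)   # (2, 64, 14, 14)

Conditioning each hop's attention on the evolving controller state is what lets the generator pick out different parts of a long input (e.g., different dialogue turns) for different FiLM layers, rather than summarizing the whole sequence once.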
Cite
Text
Strub et al. "Visual Reasoning with Multi-Hop Feature Modulation." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01228-1_48
Markdown
[Strub et al. "Visual Reasoning with Multi-Hop Feature Modulation." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/strub2018eccv-visual/) doi:10.1007/978-3-030-01228-1_48
BibTeX
@inproceedings{strub2018eccv-visual,
title = {{Visual Reasoning with Multi-Hop Feature Modulation}},
author = {Strub, Florian and Seurin, Mathieu and Perez, Ethan and de Vries, Harm and Mary, Jeremie and Preux, Philippe and Courville, Aaron and Pietquin, Olivier},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01228-1_48},
url = {https://mlanthology.org/eccv/2018/strub2018eccv-visual/}
}