LAVT: Language-Aware Vision Transformer for Referring Image Segmentation

Abstract

Referring image segmentation is a fundamental vision-language task that aims to segment the object in an image that is referred to by a natural language expression. One of the key challenges of this task is leveraging the referring expression to highlight relevant positions in the image. A common paradigm for tackling this problem is to use a powerful vision-language ("cross-modal") decoder to fuse features independently extracted from a vision encoder and a language encoder. Recent methods have made remarkable advances within this paradigm by exploiting Transformers as cross-modal decoders, concurrent with the Transformer's overwhelming success in many other vision-language tasks. Adopting a different approach in this work, we show that significantly better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion in the visual feature encoding stage, we can leverage the well-proven correlation-modeling power of a Transformer encoder to excavate helpful multi-modal context. This way, accurate segmentation results are readily harvested with a lightweight mask predictor. Without bells and whistles, our method surpasses previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
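
To make the early-fusion idea concrete, below is a minimal sketch of a language-aware fusion module that could sit between the stages of a vision Transformer encoder: flattened visual tokens cross-attend to per-word language features, and a learned gate controls how strongly the fused signal is added back. The module name `LanguageAwareFusion`, the dimensions, and the gating scheme are illustrative assumptions for this sketch, not the paper's exact implementation (LAVT's own design uses a pixel-word attention module with a language gate).

```python
# Minimal sketch of early cross-modal fusion inside a vision Transformer
# encoder, in the spirit of LAVT. Names, dimensions, and the gating scheme
# are illustrative assumptions, not the paper's released code.

import torch
import torch.nn as nn


class LanguageAwareFusion(nn.Module):
    """Fuses word features into visual features between encoder stages."""

    def __init__(self, vis_dim: int, lang_dim: int, num_heads: int = 8):
        super().__init__()
        # Visual tokens are queries; word features are keys and values.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=vis_dim, kdim=lang_dim, vdim=lang_dim,
            num_heads=num_heads, batch_first=True)
        # Element-wise gate deciding how much language modulates vision.
        self.gate = nn.Sequential(nn.Linear(vis_dim, vis_dim), nn.Tanh())
        self.norm = nn.LayerNorm(vis_dim)

    def forward(self, vis_tokens: torch.Tensor, word_feats: torch.Tensor):
        # vis_tokens: (B, H*W, vis_dim), a flattened stage feature map.
        # word_feats: (B, T, lang_dim), per-word language-encoder outputs.
        fused, _ = self.cross_attn(
            query=vis_tokens, key=word_feats, value=word_feats)
        # Gated residual: the encoder keeps its visual features and adds
        # language-conditioned context on top.
        out = vis_tokens + self.gate(fused) * fused
        return self.norm(out)


if __name__ == "__main__":
    # Toy shapes: one stage with a 16x16 feature map, a 10-word expression.
    fusion = LanguageAwareFusion(vis_dim=256, lang_dim=768)
    vis = torch.randn(2, 16 * 16, 256)
    words = torch.randn(2, 10, 768)
    print(fusion(vis, words).shape)  # torch.Size([2, 256, 256])
```

Because fusion happens inside the encoding stage, later encoder layers can refine features that are already language-conditioned, which is why a lightweight mask predictor suffices at the end.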

Cite

Text

Yang et al. "LAVT: Language-Aware Vision Transformer for Referring Image Segmentation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01762

Markdown

[Yang et al. "LAVT: Language-Aware Vision Transformer for Referring Image Segmentation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/yang2022cvpr-lavt/) doi:10.1109/CVPR52688.2022.01762

BibTeX

@inproceedings{yang2022cvpr-lavt,
  title     = {{LAVT: Language-Aware Vision Transformer for Referring Image Segmentation}},
  author    = {Yang, Zhao and Wang, Jiaqi and Tang, Yansong and Chen, Kai and Zhao, Hengshuang and Torr, Philip H. S.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {18155--18165},
  doi       = {10.1109/CVPR52688.2022.01762},
  url       = {https://mlanthology.org/cvpr/2022/yang2022cvpr-lavt/}
}