Sparse Multimodal Vision Transformer for Weakly Supervised Semantic Segmentation

Abstract

Vision Transformers have proven their versatility and utility for complex computer vision tasks, such as land cover segmentation in remote sensing applications. While performing on par with or even outperforming other methods such as Convolutional Neural Networks (CNNs), Transformers tend to require even larger datasets with fine-grained annotations (e.g., pixel-level labels for land cover segmentation). To overcome this limitation, we propose a weakly supervised vision Transformer that leverages image-level labels to learn a semantic segmentation task, reducing the human annotation load. We achieve this by slightly modifying the architecture of the vision Transformer: gating units in each attention head enforce sparsity during training, so that only the most meaningful heads are retained. This allows us to infer pixel-level labels directly from image-level labels by post-processing the un-pruned attention heads of the model, and to refine our predictions by iteratively training a high-fidelity segmentation model. Training and evaluation on the DFC2020 dataset show that our method not only generates high-quality segmentation masks using image-level labels, but also performs on par with fully supervised training relying on pixel-level labels. Finally, our results show that our method can perform weakly supervised semantic segmentation even on small-scale datasets.
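The head-gating idea described in the abstract can be sketched roughly as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the scalar multiplicative gate per head, the L1 sparsity surrogate, and all names are assumptions, and the paper's exact gate parameterization may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_multihead_attention(x, Wq, Wk, Wv, Wo, gates):
    """Self-attention where each head's output is scaled by a scalar gate.

    x: (seq, d_model); Wq/Wk/Wv: (heads, d_model, d_head);
    Wo: (heads * d_head, d_model); gates: (heads,).
    A gate driven to zero by the sparsity penalty effectively prunes its head.
    """
    d_head = Wq.shape[-1]
    heads = []
    for h in range(len(gates)):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(d_head))  # (seq, seq) attention map
        heads.append(gates[h] * (attn @ v))        # gate scales this head's output
    return np.concatenate(heads, axis=-1) @ Wo

def sparsity_penalty(gates, lam=0.01):
    # L1 surrogate added to the training loss, pushing gates toward zero
    return lam * np.abs(gates).sum()

# Tiny usage example with random weights (hypothetical sizes)
rng = np.random.default_rng(0)
seq, d_model, n_heads, d_head = 4, 8, 2, 4
x = rng.normal(size=(seq, d_model))
Wq, Wk, Wv = (rng.normal(size=(n_heads, d_model, d_head)) for _ in range(3))
Wo = rng.normal(size=(n_heads * d_head, d_model))
out = gated_multihead_attention(x, Wq, Wk, Wv, Wo, gates=np.array([1.0, 0.0]))
```

After training, heads whose gates have collapsed to zero can be dropped, and the attention maps of the surviving heads are what the method post-processes into pixel-level labels.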

Cite

Text

Hanna et al. "Sparse Multimodal Vision Transformer for Weakly Supervised Semantic Segmentation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00208

Markdown

[Hanna et al. "Sparse Multimodal Vision Transformer for Weakly Supervised Semantic Segmentation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/hanna2023cvprw-sparse/) doi:10.1109/CVPRW59228.2023.00208

BibTeX

@inproceedings{hanna2023cvprw-sparse,
  title     = {{Sparse Multimodal Vision Transformer for Weakly Supervised Semantic Segmentation}},
  author    = {Hanna, Joëlle and Mommert, Michael and Borth, Damian},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {2145--2154},
  doi       = {10.1109/CVPRW59228.2023.00208},
  url       = {https://mlanthology.org/cvprw/2023/hanna2023cvprw-sparse/}
}