MulT: An End-to-End Multitask Learning Transformer

Abstract

We propose an end-to-end Multitask Learning Transformer framework, named MulT, to simultaneously learn multiple high-level vision tasks, including depth estimation, semantic segmentation, reshading, surface normal estimation, 2D keypoint detection, and edge detection. Based on the Swin transformer model, our framework encodes the input image into a shared representation and makes predictions for each vision task using task-specific transformer-based decoder heads. At the heart of our approach is a shared attention mechanism modeling the dependencies across the tasks. We evaluate our model on several multitask benchmarks, showing that our MulT framework outperforms both the state-of-the-art multitask convolutional neural network models and all the respective single-task transformer models. Our experiments further highlight the benefits of sharing attention across all the tasks, and demonstrate that our MulT model is robust and generalizes well to new domains. We will make our code and models publicly available upon publication.
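
To illustrate the shared-encoder, shared-attention, task-specific-decoder structure described in the abstract, here is a minimal sketch in PyTorch. It is not the authors' implementation: the encoder is a generic transformer stand-in for the Swin backbone, and all class and parameter names (e.g. `SharedAttention`, `MultitaskModel`, `task_queries`) are hypothetical, chosen only to mirror the idea of one attention module reused by every task head.

```python
# Minimal sketch of a multitask transformer with a shared attention module.
# Assumptions: a generic transformer encoder stands in for the Swin backbone,
# and the task heads are simple linear layers; names are illustrative only.
import torch
import torch.nn as nn

class SharedAttention(nn.Module):
    """One attention module whose weights are reused by every task decoder."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, queries, shared_feats):
        # Every task attends over the same shared encoder features.
        out, _ = self.attn(queries, shared_feats, shared_feats)
        return out

class MultitaskModel(nn.Module):
    def __init__(self, dim=256, num_tokens=196,
                 tasks=("depth", "segmentation", "normals")):
        super().__init__()
        # Stand-in encoder producing the shared representation.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.shared_attn = SharedAttention(dim)  # shared across all tasks
        self.task_queries = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(1, num_tokens, dim)) for t in tasks}
        )
        self.heads = nn.ModuleDict({t: nn.Linear(dim, dim) for t in tasks})

    def forward(self, tokens):
        shared = self.encoder(tokens)  # shared image representation
        outputs = {}
        for task, queries in self.task_queries.items():
            q = queries.expand(tokens.size(0), -1, -1)
            feats = self.shared_attn(q, shared)   # same attention for every task
            outputs[task] = self.heads[task](feats)  # task-specific prediction
        return outputs

# Usage: tokens = torch.randn(2, 196, 256); preds = MultitaskModel()(tokens)
```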

Cite

Text

Bhattacharjee et al. "MulT: An End-to-End Multitask Learning Transformer." Conference on Computer Vision and Pattern Recognition, 2022.

Markdown

[Bhattacharjee et al. "MulT: An End-to-End Multitask Learning Transformer." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/bhattacharjee2022cvpr-mult/)

BibTeX

@inproceedings{bhattacharjee2022cvpr-mult,
  title     = {{MulT: An End-to-End Multitask Learning Transformer}},
  author    = {Bhattacharjee, Deblina and Zhang, Tong and Süsstrunk, Sabine and Salzmann, Mathieu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {12031--12041},
  url       = {https://mlanthology.org/cvpr/2022/bhattacharjee2022cvpr-mult/}
}