Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
Abstract
This paper presents a new Vision Transformer (ViT) architecture, Multi-Scale Vision Longformer, which significantly enhances the original ViT (Dosovitskiy et al., 2020) for encoding high-resolution images using two techniques. The first is the multi-scale model structure, which provides image encodings at multiple scales with manageable computational cost. The second is the attention mechanism of Vision Longformer, a variant of Longformer (Beltagy et al., 2020) originally developed for natural language processing, which achieves linear complexity w.r.t. the number of input tokens. A comprehensive empirical study shows that the new ViT significantly outperforms several strong baselines, including existing ViT models and their ResNet counterparts, as well as the Pyramid Vision Transformer (Wang et al., 2021) from a concurrent work, on a range of vision tasks, including image classification, object detection, and segmentation. The models and source code are released at https://github.com/microsoft/vision-longformer.
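The abstract only names the attention pattern, so the sketch below illustrates the idea behind it: each image token attends within a local 2D window while a few global tokens attend to, and are attended by, everything. This is a minimal PyTorch illustration, not the authors' implementation; the helper names (`vil_attention_mask`, `masked_attention`), the window radius, and the single global token are assumptions for the example. It builds a dense mask for clarity, whereas the released code avoids materializing the full n×n attention to achieve the stated linear complexity.

```python
# Minimal sketch of a sliding-window-plus-global-tokens attention pattern.
# Naive dense-mask version for clarity only; see the repository above for
# the memory-efficient implementation with true linear complexity.
import torch
import torch.nn.functional as F

def vil_attention_mask(h, w, window, n_global):
    """Boolean mask (True = allowed) over (n_global + h*w) tokens."""
    n = n_global + h * w
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:n_global, :] = True   # global tokens attend everywhere
    mask[:, :n_global] = True   # every token attends to global tokens
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()
    # local tokens attend within a (2*window+1)^2 neighborhood on the 2D grid
    local = (ys[:, None] - ys[None, :]).abs().le(window) \
          & (xs[:, None] - xs[None, :]).abs().le(window)
    mask[n_global:, n_global:] = local
    return mask

def masked_attention(q, k, v, mask):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Usage: a 14x14 feature map, window radius 2, one global (CLS-like) token.
h = w = 14
d = 64
x = torch.randn(1, 1 + h * w, d)
mask = vil_attention_mask(h, w, window=2, n_global=1)
out = masked_attention(x, x, x, mask)   # q = k = v = x for illustration
print(out.shape)  # torch.Size([1, 197, 64])
```

With window radius `window`, each token attends to at most (2*window+1)^2 local tokens plus the global tokens, so the attention cost grows linearly with the number of image tokens rather than quadratically.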
Cite
Text
Zhang et al. "Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00299

Markdown
[Zhang et al. "Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/zhang2021iccv-multiscale/) doi:10.1109/ICCV48922.2021.00299

BibTeX
@inproceedings{zhang2021iccv-multiscale,
title = {{Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding}},
author = {Zhang, Pengchuan and Dai, Xiyang and Yang, Jianwei and Xiao, Bin and Yuan, Lu and Zhang, Lei and Gao, Jianfeng},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {2998--3008},
doi = {10.1109/ICCV48922.2021.00299},
url = {https://mlanthology.org/iccv/2021/zhang2021iccv-multiscale/}
}