AdaFormer: Efficient Transformer with Adaptive Token Sparsification for Image Super-Resolution

Abstract

Efficient transformer-based models have made remarkable progress in image super-resolution (SR). Most of these works design elaborate structures to accelerate transformer inference, propagating all feature tokens equally. However, they ignore an underlying characteristic of image content: different image regions have distinct restoration difficulties, especially in large images (2K-8K), so they fail to achieve adaptive inference. In this work, we propose an adaptive token sparsification transformer (AdaFormer) to speed up model inference for image SR. Specifically, a texture-relevant sparse attention block with parallel global and local branches is introduced, which integrates informative tokens from a global view rather than only within fixed local windows. Then, an early-exit strategy is designed to progressively halt tokens according to their importance. To estimate the plausibility of each token, we adopt a lightweight confidence estimator, constrained by an uncertainty-guided loss, to obtain a binary halting mask over the tokens. Experiments on large images show that our method reduces latency by nearly 90% compared with SwinIR on Test8K while maintaining comparable performance.
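
The following is a minimal, illustrative sketch of the token-halting idea described in the abstract (a lightweight confidence estimator producing a binary halting mask so that easy tokens exit early and skip later transformer blocks). It is not the authors' AdaFormer implementation; the module names, layer sizes, and the halting threshold are assumptions made purely for this example.

```python
# Minimal sketch: confidence-based early exit of tokens, assuming a
# per-token scorer and a fixed halting threshold (both hypothetical).
import torch
import torch.nn as nn


class ConfidenceEstimator(nn.Module):
    """Scores each token; low-confidence (easy) tokens can be halted early."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, C) -> per-token confidence in [0, 1], shape (B, N)
        return torch.sigmoid(self.scorer(tokens)).squeeze(-1)


def halt_easy_tokens(tokens, block, estimator, threshold=0.5):
    """Run `block` only on tokens whose confidence exceeds `threshold`;
    halted tokens are passed through unchanged (early exit)."""
    conf = estimator(tokens)            # (B, N) confidence scores
    keep = conf > threshold             # binary halting mask
    out = tokens.clone()
    for b in range(tokens.size(0)):     # simple per-sample gather/scatter
        idx = keep[b].nonzero(as_tuple=True)[0]
        if idx.numel() > 0:
            out[b, idx] = block(tokens[b:b + 1, idx]).squeeze(0)
    return out, keep


if __name__ == "__main__":
    B, N, C = 2, 64, 48
    tokens = torch.randn(B, N, C)
    block = nn.TransformerEncoderLayer(d_model=C, nhead=4, batch_first=True)
    estimator = ConfidenceEstimator(C)
    refined, mask = halt_easy_tokens(tokens, block, estimator)
    print(refined.shape, mask.float().mean().item())
```

In the paper, the halting mask is learned under an uncertainty-guided loss rather than thresholded with a fixed value as above; the sketch only conveys how a binary mask can route tokens past subsequent blocks to save computation.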

Cite

Text

Luo et al. "AdaFormer: Efficient Transformer with Adaptive Token Sparsification for Image Super-Resolution." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I5.28194

Markdown

[Luo et al. "AdaFormer: Efficient Transformer with Adaptive Token Sparsification for Image Super-Resolution." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/luo2024aaai-adaformer/) doi:10.1609/AAAI.V38I5.28194

BibTeX

@inproceedings{luo2024aaai-adaformer,
  title     = {{AdaFormer: Efficient Transformer with Adaptive Token Sparsification for Image Super-Resolution}},
  author    = {Luo, Xiaotong and Ai, Zekun and Liang, Qiuyuan and Liu, Ding and Xie, Yuan and Qu, Yanyun and Fu, Yun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {4009--4016},
  doi       = {10.1609/AAAI.V38I5.28194},
  url       = {https://mlanthology.org/aaai/2024/luo2024aaai-adaformer/}
}