Star with Bilinear Mapping

Abstract

Contextual modeling is crucial for robust visual representation learning, especially in computer vision. Although Transformers have become a leading architecture for vision tasks due to their attention mechanism, the quadratic complexity of full attention presents substantial computational challenges. To address this, we introduce Star with Bilinear Mapping (SBM), a Transformer-like architecture that achieves global contextual modeling with linear complexity. SBM employs a bilinear mapping module (BM) with a low-rank decomposition strategy and star operations (element-wise multiplication) to efficiently capture global contextual information. Our model demonstrates competitive performance on image classification and semantic segmentation tasks while delivering significant computational efficiency gains compared to traditional attention-based models.
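The core idea in the abstract, combining low-rank linear projections via an element-wise ("star") product so that cost grows linearly with the number of tokens, can be sketched as follows. This is a minimal illustration under assumed shapes, not the paper's actual module; the names `star_bilinear`, `W1`, and `W2` are hypothetical.

```python
import numpy as np

def star_bilinear(x, W1, W2):
    """Illustrative stand-in for a low-rank bilinear mapping with a
    star operation: element-wise product of two linear projections.

    x:  (n_tokens, d) token features
    W1: (d, r) low-rank factor (r << d), hypothetical
    W2: (d, r) low-rank factor, hypothetical

    Cost is O(n * d * r), linear in the number of tokens n, in
    contrast to full attention's O(n^2 * d).
    """
    return (x @ W1) * (x @ W2)  # element-wise "star" product, (n_tokens, r)

rng = np.random.default_rng(0)
n, d, r = 16, 32, 8
x = rng.standard_normal((n, d))
W1 = rng.standard_normal((d, r))
W2 = rng.standard_normal((d, r))
out = star_bilinear(x, W1, W2)
print(out.shape)  # (16, 8)
```

The star product makes the output quadratic in the input features (each entry is a product of two linear forms of `x`), which is one way such element-wise mixing can enrich contextual interactions without attention's pairwise token comparisons.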

Cite

Text

Peng et al. "Star with Bilinear Mapping." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02355

Markdown

[Peng et al. "Star with Bilinear Mapping." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/peng2025cvpr-star/) doi:10.1109/CVPR52734.2025.02355

BibTeX

@inproceedings{peng2025cvpr-star,
  title     = {{Star with Bilinear Mapping}},
  author    = {Peng, Zelin and Huang, Yu and Xu, Zhengqin and Tang, Feilong and Hu, Ming and Yang, Xiaokang and Shen, Wei},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {25292-25302},
  doi       = {10.1109/CVPR52734.2025.02355},
  url       = {https://mlanthology.org/cvpr/2025/peng2025cvpr-star/}
}