General Compression Framework for Efficient Transformer Object Tracking

Abstract

Previous works have attempted to improve tracking efficiency through lightweight architecture design or knowledge distillation from teacher models to compact student trackers. However, these solutions often sacrifice substantial accuracy for speed, and suffer from complex training processes and structural limitations. We therefore propose a general model compression framework for efficient transformer object tracking, named CompressTracker, to reduce model size while preserving tracking accuracy. Our approach features a novel stage division strategy that segments the transformer layers of the teacher model into distinct stages, breaking the limitation of model structure. We also design a unique replacement training technique that randomly substitutes specific stages in the student model with those from the teacher model, as opposed to training the student model in isolation. Replacement training enhances the student model's ability to replicate the teacher model's behavior and simplifies the training process. To further encourage the student model to emulate the teacher model, we incorporate prediction guidance and stage-wise feature mimicking to provide additional supervision during the compression process. CompressTracker is structurally agnostic, making it compatible with any transformer architecture. We conduct a series of experiments to verify the effectiveness and generalizability of CompressTracker. Our CompressTracker-SUTrack, compressed from SUTrack, retains about 99% of the original performance on LaSOT (72.2% AUC) while achieving a 2.42x speedup. Code is available at https://github.com/LingyiHongfd/CompressTracker.
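The replacement training idea can be illustrated with a minimal sketch: at each training step, each stage of the compact student is randomly swapped with the corresponding (frozen) teacher stage, so each student stage learns to act as a drop-in replacement. The function name, the per-stage replacement probability, and the plain-Python stage representation below are illustrative assumptions, not details from the paper.

```python
import random

def build_mixed_forward(teacher_stages, student_stages, replace_prob=0.5,
                        rng=random):
    """Randomly choose, per stage, whether to run the teacher's or the
    student's version, and return a forward function over that mixture.

    teacher_stages / student_stages: aligned lists of callables, one per
    stage (in a real tracker these would be groups of transformer layers).
    replace_prob: probability of using the teacher stage (assumed knob).
    """
    assert len(teacher_stages) == len(student_stages)
    # One random draw per stage; True means "use the teacher stage".
    choices = [rng.random() < replace_prob for _ in teacher_stages]

    def forward(x):
        for use_teacher, t_stage, s_stage in zip(choices, teacher_stages,
                                                 student_stages):
            x = t_stage(x) if use_teacher else s_stage(x)
        return x

    return forward, choices
```

In actual training, `choices` would be re-sampled every step and only the student stages would receive gradient updates, so each student stage is optimized to be interchangeable with its teacher counterpart.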

Cite

Text

Hong et al. "General Compression Framework for Efficient Transformer Object Tracking." International Conference on Computer Vision, 2025.

Markdown

[Hong et al. "General Compression Framework for Efficient Transformer Object Tracking." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/hong2025iccv-general/)

BibTeX

@inproceedings{hong2025iccv-general,
  title     = {{General Compression Framework for Efficient Transformer Object Tracking}},
  author    = {Hong, Lingyi and Li, Jinglun and Zhou, Xinyu and Yan, Shilin and Guo, Pinxue and Jiang, Kaixun and Chen, Zhaoyu and Gao, Shuyong and Li, Runze and Sheng, Xingdong and Zhang, Wei and Lu, Hong and Zhang, Wenqiang},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {13427-13437},
  url       = {https://mlanthology.org/iccv/2025/hong2025iccv-general/}
}