AugDETR: Improving Multi-Scale Learning for Detection Transformer

Abstract

Current end-to-end detectors typically exploit transformers to detect objects and show promising performance. Among them, Deformable DETR is a representative paradigm that effectively exploits multi-scale features. However, small local receptive fields and limited query-encoder interactions weaken multi-scale learning. In this paper, we analyze local feature enhancement and multi-level encoder exploitation for improved multi-scale learning, and construct a novel detector named Augmented DETR (AugDETR) to realize them. Specifically, AugDETR consists of two components: a Hybrid Attention Encoder and Encoder-Mixing Cross-Attention. The Hybrid Attention Encoder enlarges the receptive field of the deformable encoder and introduces global context features to enhance feature representation. Encoder-Mixing Cross-Attention adaptively leverages multi-level encoders based on query features, yielding more discriminative object features and faster convergence. When combined with DETR-based detectors such as DINO, AlignDETR, and DDQ, our models achieve improvements of 1.2, 1.1, and 1.0 AP on COCO under the ResNet-50, 4-scale, 12-epoch setting, respectively.
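To make the Encoder-Mixing idea concrete, the sketch below shows one plausible reading: each query predicts softmax weights over the outputs of several encoder levels and mixes them before cross-attention. The function name, shapes, and weighting scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def encoder_mixing(query, encoder_feats, w_mix):
    """Hypothetical sketch of Encoder-Mixing Cross-Attention.

    query:         (num_queries, d)             object queries
    encoder_feats: (num_levels, num_tokens, d)  one feature map per encoder level
    w_mix:         (d, num_levels)              learned projection (random here)
    """
    # Query-conditioned mixing weights over encoder levels: (num_queries, num_levels)
    weights = softmax(query @ w_mix, axis=-1)
    # Per-query weighted combination of encoder levels: (num_queries, num_tokens, d)
    mixed = np.einsum('ql,ltd->qtd', weights, encoder_feats)
    return mixed, weights

rng = np.random.default_rng(0)
q = rng.standard_normal((3, 8))           # 3 queries, feature dim 8
feats = rng.standard_normal((2, 5, 8))    # 2 encoder levels, 5 tokens each
mixed, w = encoder_mixing(q, feats, rng.standard_normal((8, 2)))
```

Each query thus attends to a memory tailored to its own scale preferences, which is one way to realize the "adaptive multi-level encoder exploitation" the abstract describes.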

Cite

Text

Dong et al. "AugDETR: Improving Multi-Scale Learning for Detection Transformer." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72691-0_14

Markdown

[Dong et al. "AugDETR: Improving Multi-Scale Learning for Detection Transformer." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/dong2024eccv-augdetr/) doi:10.1007/978-3-031-72691-0_14

BibTeX

@inproceedings{dong2024eccv-augdetr,
  title     = {{AugDETR: Improving Multi-Scale Learning for Detection Transformer}},
  author    = {Dong, Jinpeng and Lin, Yutong and Li, Chen and Zhou, Sanping and Zheng, Nanning},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72691-0_14},
  url       = {https://mlanthology.org/eccv/2024/dong2024eccv-augdetr/}
}