MotionTrack: End-to-End Transformer-Based Multi-Object Tracking with LiDAR-Camera Fusion

Abstract

Multiple Object Tracking (MOT) is crucial to autonomous vehicle perception. End-to-end transformer-based algorithms, which detect and track objects simultaneously, show great potential for the MOT task. However, most existing methods focus on image-based tracking with a single object category. In this paper, we propose an end-to-end transformer-based MOT algorithm (MotionTrack) with multi-modality sensor inputs to track objects of multiple classes. Our objective is to establish a transformer baseline for MOT in an autonomous driving environment. The proposed algorithm consists of a transformer-based data association (DA) module and a transformer-based query enhancement module that perform MOT and Multiple Object Detection (MOD) simultaneously. MotionTrack and its variations achieve better results (an AMOTA score of 0.55) on the nuScenes dataset than classical baselines such as AB3DMOT, CenterTrack, and the probabilistic 3D Kalman filter. In addition, we show that a modified attention mechanism can be used for DA to accomplish MOT, and that aggregating history features enhances MOD performance.
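The abstract describes a data association module built on a modified attention mechanism that matches existing track queries to new detections. Below is a minimal sketch of how such an attention-based association step could look; the class name, feature dimensions, and greedy per-track matching are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (assumed, not the authors' code): scoring track queries
# against detection queries with scaled dot-product attention, then matching.
import torch
import torch.nn as nn


class AttentionDataAssociation(nn.Module):
    """Computes track-to-detection affinities via attention and returns
    a greedy per-track assignment."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # project track queries
        self.k_proj = nn.Linear(dim, dim)  # project detection queries
        self.scale = dim ** -0.5

    def forward(self, track_queries: torch.Tensor, det_queries: torch.Tensor):
        # track_queries: (num_tracks, dim), det_queries: (num_dets, dim)
        q = self.q_proj(track_queries)
        k = self.k_proj(det_queries)
        affinity = (q @ k.t()) * self.scale      # (num_tracks, num_dets)
        assoc_prob = affinity.softmax(dim=-1)    # row-wise association scores
        matches = assoc_prob.argmax(dim=-1)      # greedy match per track
        return assoc_prob, matches


if __name__ == "__main__":
    da = AttentionDataAssociation(dim=256)
    tracks = torch.randn(5, 256)   # 5 existing track queries
    dets = torch.randn(7, 256)     # 7 current-frame detection queries
    prob, match = da(tracks, dets)
    print(prob.shape, match)       # torch.Size([5, 7]) and 5 match indices
```

In practice, the association scores would typically be supervised against ground-truth identities and combined with birth/death handling for new and disappearing objects; this sketch shows only the affinity-scoring idea.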

Cite

Text

Zhang et al. "MotionTrack: End-to-End Transformer-Based Multi-Object Tracking with LiDAR-Camera Fusion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00020

Markdown

[Zhang et al. "MotionTrack: End-to-End Transformer-Based Multi-Object Tracking with LiDAR-Camera Fusion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/zhang2023cvprw-motiontrack/) doi:10.1109/CVPRW59228.2023.00020

BibTeX

@inproceedings{zhang2023cvprw-motiontrack,
  title     = {{MotionTrack: End-to-End Transformer-Based Multi-Object Tracking with LiDAR-Camera Fusion}},
  author    = {Zhang, Ce and Zhang, Chengjie and Guo, Yiluan and Chen, Lingji and Happold, Michael},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {151--160},
  doi       = {10.1109/CVPRW59228.2023.00020},
  url       = {https://mlanthology.org/cvprw/2023/zhang2023cvprw-motiontrack/}
}