VisDrone-SOT2018: The Vision Meets Drone Single-Object Tracking Challenge Results

Abstract

Single-object tracking, also known as visual tracking, on drone platforms has attracted much attention recently, with various applications in computer vision such as filming and surveillance. However, the lack of commonly accepted annotated datasets and a standard evaluation platform hinders the development of algorithms. To address this issue, the Vision Meets Drone Single-Object Tracking (VisDrone-SOT2018) Challenge workshop was organized in conjunction with the 15th European Conference on Computer Vision (ECCV 2018) to track and advance the technologies in this field. Specifically, we collected a dataset of 132 video sequences divided into three non-overlapping sets: training (86 sequences with 69,941 frames), validation (11 sequences with 7,046 frames), and testing (35 sequences with 29,367 frames). We provide fully annotated bounding boxes of the targets as well as several useful attributes, e.g., occlusion, background clutter, and camera motion. The tracking targets in these sequences include pedestrians, cars, buses, and animals. The dataset is extremely challenging due to various factors, such as occlusion, large scale variation, pose variation, and fast motion. We present the evaluation protocol of the VisDrone-SOT2018 challenge and the results of a comparison of 22 trackers on the benchmark dataset, which are publicly available on the challenge website: http://www.aiskyeye.com/. We hope this challenge will largely boost research and development in single-object tracking on drone platforms.
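The abstract refers to the challenge's evaluation protocol without detailing it here. As a rough, unofficial illustration (not the challenge toolkit), single-object tracking benchmarks in this family typically score a tracker by the overlap between predicted and ground-truth boxes per frame, then report a success curve over overlap thresholds; the function names and `[x, y, w, h]` box convention below are assumptions for the sketch:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x, y, w, h]."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Fraction of frames whose IoU exceeds each overlap threshold.

    The area under this curve is a common single-number tracker score.
    """
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return np.array([(ious > t).mean() for t in thresholds])
```

For the official metrics and the per-attribute breakdowns (occlusion, camera motion, etc.), refer to the paper and the challenge website above.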

Cite

Text

Wen et al. "VisDrone-SOT2018: The Vision Meets Drone Single-Object Tracking Challenge Results." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11021-5_28

Markdown

[Wen et al. "VisDrone-SOT2018: The Vision Meets Drone Single-Object Tracking Challenge Results." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/wen2018eccvw-visdronesot2018/) doi:10.1007/978-3-030-11021-5_28

BibTeX

@inproceedings{wen2018eccvw-visdronesot2018,
  title     = {{VisDrone-SOT2018: The Vision Meets Drone Single-Object Tracking Challenge Results}},
  author    = {Wen, Longyin and Zhu, Pengfei and Du, Dawei and Bian, Xiao and Ling, Haibin and Hu, Qinghua and Liu, Chenfeng and Cheng, Hao and Liu, Xiaoyu and Ma, Wenya and Nie, Qinqin and Wu, Haotian and Wang, Lianjie and Perera, Asanka G. and Zhang, Baochang and Heo, Byeongho and Liu, Chunlei and Li, Dongdong and Michail, Emmanouil and Chen, Hanlin and Liu, Hao and Li, Haojie and Kompatsiaris, Ioannis and Cheng, Jian and Fan, Jiaqing and Zhang, Jie and Choi, Jin Young and Li, Jing and Yang, Jinyu and Choi, Jongwon and Zhao, Juanping and Han, Jungong and Zhang, Kaihua and Duan, Kaiwen and Song, Ke and Avgerinakis, Konstantinos and Lee, Kyuewang and Ding, Lu and Lauer, Martin and Giannakeris, Panagiotis and Zhang, Peizhen and Wang, Qiang and Xu, Qianqian and Huang, Qingming and Liu, Qingshan and Laganière, Robert and Zhang, Ruixin and Yun, Sangdoo and Zhu, Shengyin and Wu, Sihang and Vrochidis, Stefanos and Tian, Wei and Zhang, Wei and Chen, Weidong and Hu, Weiming and Wang, Wenhao and Zhang, Wenhua and Ding, Wenrui and He, Xiaohao and Li, Xiaotong and Zhang, Xin and Luo, Xinbin and Hu, Xixi and Meng, Yang and Kuai, Yangliu and Zhao, Yanyun and Li, Yaxuan and Yang, Yifan and Zhang, Yifan and Wang, Yong and Qi, Yuankai and Deng, Zhipeng and He, Zhiqun},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2018},
  pages     = {469-495},
  doi       = {10.1007/978-3-030-11021-5_28},
  url       = {https://mlanthology.org/eccvw/2018/wen2018eccvw-visdronesot2018/}
}