Cluster Self-Refinement for Enhanced Online Multi-Camera People Tracking
Abstract
Multi-Camera People Tracking (MCPT) has recently attracted a significant amount of research. MCPT poses more challenges than single-camera multi-object tracking, which has led many existing studies to adopt offline methods. However, offline methods can only analyze pre-recorded videos, making them less practical for real-world industrial use than online methods. We therefore focus on resolving the major problems that arise with the online approach. Specifically, to address issues that can critically degrade online MCPT performance, such as storing inaccurate or low-quality appearance features and assigning multiple IDs to a single person, we propose a Cluster Self-Refinement module. We achieved third place in the 2024 AI City Challenge Track 1 with a HOTA score of 60.9261%, and our code is available at https://github.com/nota-github/AIC2024_Track1_Nota.
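The abstract names two failure modes the Cluster Self-Refinement module targets: low-quality appearance features accumulating in a cluster, and one person being split across multiple IDs. The paper's actual algorithm is not reproduced here; the following is a minimal hypothetical sketch of that idea, assuming each cluster stores (feature, quality-score) pairs and that near-duplicate clusters are merged by cosine similarity of their mean features. All names and thresholds are illustrative, not the authors' implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mean_feature(feats):
    """Element-wise mean of a list of feature vectors."""
    n = len(feats)
    return [sum(col) / n for col in zip(*feats)]

def refine_clusters(clusters, quality_thresh=0.5, merge_thresh=0.9):
    """Hypothetical cluster self-refinement sketch:
    1) drop appearance features below a quality threshold;
    2) greedily merge clusters whose mean features are
       near-duplicates (the multiple-IDs-per-person case).
    `clusters` maps cluster ID -> list of (feature, quality) pairs.
    Returns the refined clusters and a map of merged ID -> surviving ID."""
    # Step 1: keep only features whose quality score passes the threshold.
    cleaned = {
        cid: [f for f, q in feats if q >= quality_thresh]
        for cid, feats in clusters.items()
    }
    cleaned = {cid: fs for cid, fs in cleaned.items() if fs}

    # Step 2: merge clusters with highly similar mean appearance.
    ids = sorted(cleaned)
    merged_into = {}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if a in merged_into or b in merged_into:
                continue  # one of the pair was already absorbed
            sim = cosine(mean_feature(cleaned[a]), mean_feature(cleaned[b]))
            if sim >= merge_thresh:
                cleaned[a].extend(cleaned.pop(b))
                merged_into[b] = a
    return cleaned, merged_into

# Illustrative usage: IDs 1 and 2 describe the same person; cluster 3
# contains one unreliable feature that should be filtered out.
clusters = {
    1: [([1.0, 0.0], 0.9), ([0.9, 0.1], 0.8)],
    2: [([0.95, 0.05], 0.85)],
    3: [([0.0, 1.0], 0.9), ([0.1, 0.9], 0.2)],
}
refined, merges = refine_clusters(clusters)
```

In an online setting this refinement would run periodically over the cluster bank, so that errors made at assignment time (a bad feature stored, a duplicate ID created) are corrected afterwards rather than accumulating.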
Cite
Text
Kim et al. "Cluster Self-Refinement for Enhanced Online Multi-Camera People Tracking." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00714
Markdown
[Kim et al. "Cluster Self-Refinement for Enhanced Online Multi-Camera People Tracking." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/kim2024cvprw-cluster/) doi:10.1109/CVPRW63382.2024.00714
BibTeX
@inproceedings{kim2024cvprw-cluster,
title = {{Cluster Self-Refinement for Enhanced Online Multi-Camera People Tracking}},
author = {Kim, Jeongho and Shin, Wooksu and Park, Hancheol and Choi, Donghyuk},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2024},
pages = {7190-7197},
doi = {10.1109/CVPRW63382.2024.00714},
url = {https://mlanthology.org/cvprw/2024/kim2024cvprw-cluster/}
}