Assessing Tracking Performance in Complex Scenarios Using Mean Time Between Failures
Abstract
Existing measures for evaluating the performance of tracking algorithms are difficult to interpret, which makes it hard to identify the best approach for a particular situation. As we show, a dummy algorithm that does not actually track scores well under most existing measures. Although some measures characterize specific error sources quite well, combining them into a single aggregate measure for comparing approaches or tuning parameters is not straightforward. In this work we propose "mean time between failures" as a viable summary of solution quality, especially when the goal is to follow objects for as long as possible. In addition to being sensitive to all tracking errors, the performance numbers are directly interpretable: how long can an algorithm operate before a mistake has likely occurred (the object is lost, its identity is confused, etc.)? We illustrate the merits of this measure by assessing solutions from different algorithms on a challenging dataset.
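The core idea of the proposed measure can be sketched in a few lines. The snippet below is an illustrative reading of "mean time between failures," not the paper's exact evaluation protocol: it assumes we have per-frame flags marking whether the tracker's output is correct, counts each transition from correct to incorrect as one failure event, and divides the total correctly tracked duration by the number of failures.

```python
def mean_time_between_failures(correct, fps=1.0):
    """Illustrative MTBF for a tracker (assumed formulation, not the
    paper's exact protocol).

    `correct[t]` is True when frame t has no tracking error (the object
    is found and its identity is maintained). A "failure" is each
    transition from correct to incorrect tracking. Returns the average
    correctly-tracked duration per failure, in seconds at `fps`.
    """
    # Count failure events: frames where tracking was correct and then broke.
    failures = sum(
        1 for prev, cur in zip(correct, correct[1:]) if prev and not cur
    )
    tracked_frames = sum(correct)
    if failures == 0:
        return float("inf")  # tracker never failed on this sequence
    return tracked_frames / (failures * fps)


# Example: correct for 5 frames, fails, recovers for 3 frames, fails again.
flags = [True] * 5 + [False] + [True] * 3 + [False, False]
# 8 correctly tracked frames and 2 failure events -> MTBF of 4 frames (fps=1).
print(mean_time_between_failures(flags))
```

A higher value is better: it directly answers "how long, on average, does this tracker run before it makes a mistake," which is what makes the measure interpretable compared to aggregate error scores.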
Cite
Text
Carr and Collins. "Assessing Tracking Performance in Complex Scenarios Using Mean Time Between Failures." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477617
Markdown
[Carr and Collins. "Assessing Tracking Performance in Complex Scenarios Using Mean Time Between Failures." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/carr2016wacv-assessing/) doi:10.1109/WACV.2016.7477617
BibTeX
@inproceedings{carr2016wacv-assessing,
title = {{Assessing Tracking Performance in Complex Scenarios Using Mean Time Between Failures}},
author = {Carr, Peter and Collins, Robert T.},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2016},
pages = {1--10},
doi = {10.1109/WACV.2016.7477617},
url = {https://mlanthology.org/wacv/2016/carr2016wacv-assessing/}
}