Has My Algorithm Succeeded? An Evaluator for Human Pose Estimators
Abstract
Most current vision algorithms deliver their output ‘as is’, without indicating whether it is correct or not. In this paper we propose evaluator algorithms that predict if a vision algorithm has succeeded. We illustrate this idea for the case of Human Pose Estimation (HPE). We describe the stages required to learn and test an evaluator, including the use of an annotated ground truth dataset for training and testing the evaluator (and we provide a new dataset for the HPE case), and the development of auxiliary features that have not been used by the (HPE) algorithm, but can be learnt by the evaluator to predict if the output is correct or not. Then an evaluator is built for each of four recently developed HPE algorithms using their publicly available implementations: Eichner and Ferrari [5], Sapp et al. [16], Andriluka et al. [2] and Yang and Ramanan [22]. We demonstrate that in each case the evaluator is able to predict if the algorithm has correctly estimated the pose or not.
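The core idea of the abstract — training a binary classifier on auxiliary features of a pose estimator's output to predict whether that output is correct — can be sketched as follows. This is an illustrative sketch only: the features, labels, and logistic-regression evaluator below are stand-ins, not the paper's actual feature set or learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is a feature vector computed from one
# HPE output (e.g. detection score, limb-configuration statistics);
# label 1 = pose judged correct against ground truth, 0 = incorrect.
n = 200
X = rng.normal(size=(n, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true + rng.normal(scale=0.1, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A plain gradient-descent logistic regression serves as the evaluator.
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / n

def evaluate_pose(features, threshold=0.5):
    """Predict whether a pose estimate (represented by its auxiliary
    features) is correct."""
    return sigmoid(features @ w) >= threshold

train_acc = np.mean((sigmoid(X @ w) >= 0.5) == y)
```

In the paper, one such evaluator is trained per HPE algorithm, using an annotated ground-truth dataset to supply the correct/incorrect labels.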
Cite
Text

Jammalamadaka et al. "Has My Algorithm Succeeded? An Evaluator for Human Pose Estimators." European Conference on Computer Vision, 2012. doi:10.1007/978-3-642-33712-3_9

Markdown

[Jammalamadaka et al. "Has My Algorithm Succeeded? An Evaluator for Human Pose Estimators." European Conference on Computer Vision, 2012.](https://mlanthology.org/eccv/2012/jammalamadaka2012eccv-my/) doi:10.1007/978-3-642-33712-3_9

BibTeX
@inproceedings{jammalamadaka2012eccv-my,
title = {{Has My Algorithm Succeeded? An Evaluator for Human Pose Estimators}},
author = {Jammalamadaka, Nataraj and Zisserman, Andrew and Eichner, Marcin and Ferrari, Vittorio and Jawahar, C. V.},
booktitle = {European Conference on Computer Vision},
year = {2012},
pages = {114--128},
doi = {10.1007/978-3-642-33712-3_9},
url = {https://mlanthology.org/eccv/2012/jammalamadaka2012eccv-my/}
}