Evaluating Machine Accuracy on ImageNet

Abstract

We evaluate a wide range of ImageNet models with five trained human labelers. In our year-long experiment, the trained humans first annotated 40,000 images from the ImageNet and ImageNetV2 test sets with multi-class labels to enable a semantically coherent evaluation. We then measured the classification accuracy of the five trained humans on the full 1,000-class task. Only the latest models from 2020 are on par with our best human labeler, and human accuracy on the 590 object classes is still 4% and 10% higher than the best model on ImageNet and ImageNetV2, respectively. Moreover, humans achieve the same accuracy on ImageNet and ImageNetV2, while all models see a consistent accuracy drop between the two test sets. Overall, our results show that there is still substantial room for improvement on ImageNet, and that direct accuracy comparisons between humans and machines may overstate machine performance.
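
The "multi-class labels" in the abstract mean that each image carries a set of acceptable classes, and a prediction is scored correct if it matches any of them. Below is a minimal sketch of that scoring rule; the function name and toy data are illustrative assumptions, not taken from the paper's code.

```python
def multi_label_accuracy(predictions, label_sets):
    """Fraction of images whose predicted class is in its set of valid labels.

    predictions: one predicted class name per image.
    label_sets: the set of acceptable class names per image, as a human
    annotator might assign when several fine-grained classes are all valid.
    """
    assert len(predictions) == len(label_sets)
    correct = sum(pred in labels for pred, labels in zip(predictions, label_sets))
    return correct / len(predictions)

# Toy example: the first prediction matches an acceptable label, the second does not.
predictions = ["tabby_cat", "laptop"]
label_sets = [{"tabby_cat", "tiger_cat"}, {"desktop_computer", "monitor"}]
print(multi_label_accuracy(predictions, label_sets))  # 0.5
```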

Cite

Text

Shankar et al. "Evaluating Machine Accuracy on ImageNet." NeurIPS 2021 Workshops: ImageNet_PPF, 2021.

Markdown

[Shankar et al. "Evaluating Machine Accuracy on ImageNet." NeurIPS 2021 Workshops: ImageNet_PPF, 2021.](https://mlanthology.org/neuripsw/2021/shankar2021neuripsw-evaluating/)

BibTeX

@inproceedings{shankar2021neuripsw-evaluating,
  title     = {{Evaluating Machine Accuracy on ImageNet}},
  author    = {Shankar, Vaishaal and Roelofs, Rebecca and Mania, Horia and Fang, Alex and Recht, Benjamin and Schmidt, Ludwig},
  booktitle = {NeurIPS 2021 Workshops: ImageNet_PPF},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/shankar2021neuripsw-evaluating/}
}