From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
Abstract
Building rich machine learning datasets in a scalable manner often necessitates a crowd-sourced data collection pipeline. In this work, we use human studies to investigate the consequences of employing such a pipeline, focusing on the popular ImageNet dataset. We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset—including the introduction of biases that state-of-the-art models exploit. Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for. Finally, our findings emphasize the need to augment our current model training and evaluation toolkit to take such misalignment into account.
Cite
Text
Tsipras et al. "From ImageNet to Image Classification: Contextualizing Progress on Benchmarks." International Conference on Machine Learning, 2020.
Markdown
[Tsipras et al. "From ImageNet to Image Classification: Contextualizing Progress on Benchmarks." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/tsipras2020icml-imagenet/)
BibTeX
@inproceedings{tsipras2020icml-imagenet,
title = {{From ImageNet to Image Classification: Contextualizing Progress on Benchmarks}},
author = {Tsipras, Dimitris and Santurkar, Shibani and Engstrom, Logan and Ilyas, Andrew and Madry, Aleksander},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {9625--9635},
volume = {119},
url = {https://mlanthology.org/icml/2020/tsipras2020icml-imagenet/}
}