Bugs in the Data: How ImageNet Misrepresents Biodiversity
DOI: https://doi.org/10.1609/aaai.v37i12.26682
Keywords: General

Abstract
ImageNet-1k is a dataset often used for benchmarking machine learning (ML) models and evaluating tasks such as image recognition and object detection. Wild animals make up 27% of ImageNet-1k but, unlike classes representing people and objects, these data have not been closely scrutinized. In the current paper, we analyze the 13,450 images from 269 classes that represent wild animals in the ImageNet-1k validation set, with the participation of expert ecologists. We find that many of the classes are ill-defined or overlapping, and that 12% of the images are incorrectly labeled, with some classes having >90% of images incorrect. We also find that both the wildlife-related labels and images included in ImageNet-1k present significant geographical and cultural biases, as well as ambiguities such as artificial animals, multiple species in the same image, or the presence of humans. Our findings highlight serious issues with the extensive use of this dataset for evaluating ML systems, the use of such algorithms in wildlife-related tasks, and more broadly the ways in which ML datasets are commonly created and curated.
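The figures in the abstract follow from the structure of the ImageNet-1k validation set, which contains 50 images per class: 269 wildlife classes × 50 images = 13,450 images, and 269 of 1,000 classes is roughly 27%. The sketch below is not the authors' released code; it shows how per-class and overall error rates of the kind reported here could be tallied, assuming a hypothetical file `expert_annotations.csv` with one row per validation image (columns `image_id`, `wnid`, `imagenet_label_correct`).

```python
# Minimal sketch of the error-rate tally described in the abstract.
# Assumes a hypothetical expert_annotations.csv with columns:
#   image_id, wnid, imagenet_label_correct  ("yes"/"no" per expert ecologists)
import csv
from collections import defaultdict

totals = defaultdict(int)   # images per wildlife class (WordNet ID)
errors = defaultdict(int)   # images judged incorrectly labeled by experts

with open("expert_annotations.csv", newline="") as f:
    for row in csv.DictReader(f):
        wnid = row["wnid"]
        totals[wnid] += 1
        if row["imagenet_label_correct"].strip().lower() == "no":
            errors[wnid] += 1

# Overall error rate (the paper reports ~12% across 13,450 images).
overall = sum(errors.values()) / sum(totals.values())
print(f"Overall error rate: {overall:.1%}")

# Classes with the highest error rates (some exceed 90% in the paper).
worst = sorted(totals, key=lambda w: errors[w] / totals[w], reverse=True)[:10]
for wnid in worst:
    rate = errors[wnid] / totals[wnid]
    print(f"{wnid}: {rate:.1%} of {totals[wnid]} images mislabeled")
```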
Published: 2023-06-26
How to Cite
Luccioni, A. S., & Rolnick, D. (2023). Bugs in the Data: How ImageNet Misrepresents Biodiversity. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14382-14390. https://doi.org/10.1609/aaai.v37i12.26682
Issue: Vol. 37 No. 12 (2023)
Section: AAAI Special Track on AI for Social Impact