CNN Models' Sensitivity to Numerosity Concepts
Abstract
The nature of number is a classic question in the philosophy of mathematics. Cognitive scientists have shown that numbers are mentally represented as magnitudes organized along a mental number line (MNL). Here we ask whether CNN models, in learning to classify images, also learn about number and numerosity ‘for free’. We find that they do. A representative model showed the distance, size, and ratio effects that are the signatures of magnitude representations in humans. An MDS analysis of its latent representations found a close resemblance to the MNL documented in people. These findings challenge the developmental science proposal that numbers are part of the ‘core knowledge’ that all human infants possess, and instead serve as an existence proof of the learnability of numerical concepts.
Cite
Text
Upadhyay and Varma. "CNN Models' Sensitivity to Numerosity Concepts." NeurIPS 2023 Workshops: MATH-AI, 2023.

Markdown
[Upadhyay and Varma. "CNN Models' Sensitivity to Numerosity Concepts." NeurIPS 2023 Workshops: MATH-AI, 2023.](https://mlanthology.org/neuripsw/2023/upadhyay2023neuripsw-cnn/)

BibTeX
@inproceedings{upadhyay2023neuripsw-cnn,
title = {{CNN Models' Sensitivity to Numerosity Concepts}},
author = {Upadhyay, Neha and Varma, Sashank},
booktitle = {NeurIPS 2023 Workshops: MATH-AI},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/upadhyay2023neuripsw-cnn/}
}