Test Sample Accuracy Scales with Training Sample Density in Neural Networks
Abstract
Intuitively, one would expect accuracy of a trained neural network’s prediction on test samples to correlate with how densely the samples are surrounded by seen training samples in representation space. We find that a bound on empirical training error smoothed across linear activation regions scales inversely with training sample density in representation space. Empirically, we verify this bound is a strong predictor of the inaccuracy of the network’s prediction on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local region’s error bound and discarding samples with the highest bounds raises prediction accuracy by up to 20% in absolute terms for image classification datasets, on average over thresholds.
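The filtering procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each test sample already has a scalar error bound (here simulated with toy data) and simply discards the samples with the highest bounds before measuring accuracy.

```python
import numpy as np

def accuracy_after_filtering(bounds, correct, keep_fraction):
    """Keep the keep_fraction of samples with the lowest error bounds
    and return prediction accuracy on the retained subset."""
    order = np.argsort(bounds)                    # ascending: lowest bound first
    n_keep = max(1, int(len(bounds) * keep_fraction))
    kept = order[:n_keep]
    return correct[kept].mean()

# Toy data: hypothetical per-sample error bounds that loosely track
# whether the network's prediction is wrong (higher bound -> more errors).
rng = np.random.default_rng(0)
bounds = rng.random(1000)
correct = rng.random(1000) > bounds * 0.5

full = correct.mean()
filtered = accuracy_after_filtering(bounds, correct, keep_fraction=0.8)
print(full, filtered)  # compare full-set vs. filtered accuracy
```

Averaging the filtered accuracy over a range of `keep_fraction` thresholds corresponds to the "on average over thresholds" evaluation mentioned in the abstract.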
Cite
Text
Ji et al. "Test Sample Accuracy Scales with Training Sample Density in Neural Networks." Proceedings of The 1st Conference on Lifelong Learning Agents, 2022.
Markdown
[Ji et al. "Test Sample Accuracy Scales with Training Sample Density in Neural Networks." Proceedings of The 1st Conference on Lifelong Learning Agents, 2022.](https://mlanthology.org/collas/2022/ji2022collas-test/)
BibTeX
@inproceedings{ji2022collas-test,
  title = {{Test Sample Accuracy Scales with Training Sample Density in Neural Networks}},
  author = {Ji, Xu and Pascanu, Razvan and Hjelm, R. Devon and Lakshminarayanan, Balaji and Vedaldi, Andrea},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  year = {2022},
  pages = {629--646},
  volume = {199},
  url = {https://mlanthology.org/collas/2022/ji2022collas-test/}
}