Fixing Overconfidence in Dynamic Neural Networks
Abstract
Dynamic neural networks are a recent technique that promises to remedy the increasing size of modern deep learning models by dynamically adapting their computational cost to the difficulty of the input. In this way, the model can adjust to a limited computational budget. However, the poor quality of uncertainty estimates in deep learning models makes it difficult to distinguish between hard and easy samples. To address this challenge, we present a computationally efficient approach for post-hoc uncertainty quantification in dynamic neural networks. We show that adequately quantifying and accounting for both aleatoric and epistemic uncertainty through a probabilistic treatment of the last layers improves predictive performance and aids decision-making when determining the computational budget. In experiments, we show improvements on CIFAR-100, ImageNet, and Caltech-256 in terms of accuracy, uncertainty quantification, and calibration error.
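The core idea of a post-hoc probabilistic treatment of the last layer can be sketched as follows: instead of a point estimate of the final-layer weights, place a Gaussian posterior over them and average the softmax outputs over weight samples. This is a minimal illustrative sketch, not the authors' implementation; the feature dimensions, the diagonal-Gaussian posterior, and all variable names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: in the paper's setting, phi would be penultimate
# features from a trained dynamic network's exit head, and W_map the trained
# (MAP) weights of that head's final linear layer.
n, d, k = 5, 8, 3                    # samples, feature dim, classes
phi = rng.normal(size=(n, d))        # features phi(x)
W_map = rng.normal(size=(d, k))      # point-estimate last-layer weights

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy diagonal Gaussian posterior over the last-layer weights, standing in
# for a posterior approximation fitted post hoc on training data.
sigma = 0.1 * np.ones((d, k))

def mc_predictive(phi, W_map, sigma, num_samples=100):
    """Average the softmax predictive over Monte Carlo weight samples."""
    probs = np.zeros((phi.shape[0], W_map.shape[1]))
    for _ in range(num_samples):
        W = W_map + sigma * rng.normal(size=W_map.shape)
        probs += softmax(phi @ W)
    return probs / num_samples

p_map = softmax(phi @ W_map)             # deterministic predictive
p_bayes = mc_predictive(phi, W_map, sigma)  # uncertainty-aware predictive
```

Averaging over weight samples typically softens overconfident predictions, which is what makes the resulting confidence a more reliable signal for deciding when an early exit's answer can be trusted.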
Cite
Text
Meronen et al. "Fixing Overconfidence in Dynamic Neural Networks." Winter Conference on Applications of Computer Vision, 2024.
Markdown
[Meronen et al. "Fixing Overconfidence in Dynamic Neural Networks." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/meronen2024wacv-fixing/)
BibTeX
@inproceedings{meronen2024wacv-fixing,
title = {{Fixing Overconfidence in Dynamic Neural Networks}},
author = {Meronen, Lassi and Trapp, Martin and Pilzer, Andrea and Yang, Le and Solin, Arno},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2024},
pages = {2680--2690},
url = {https://mlanthology.org/wacv/2024/meronen2024wacv-fixing/}
}