Packed Ensembles for Efficient Uncertainty Estimation
Abstract
Deep Ensembles (DE) are a prominent approach for achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection. However, the hardware limitations of real-world systems constrain DE to smaller ensembles and lower-capacity networks, which significantly degrades their performance and properties. We introduce Packed-Ensembles (PE), a strategy to design and train lightweight structured ensembles by carefully modulating the dimension of their encoding space. We leverage grouped convolutions to parallelize the ensemble into a single shared backbone and forward pass to improve training and inference speeds. PE is designed to operate within the memory limits of a standard neural network. Our extensive research indicates that PE accurately preserves the properties of DE, such as diversity, and performs equally well in terms of accuracy, calibration, out-of-distribution detection, and robustness to distribution shift. We make our code available at https://github.com/ENSTA-U2IS/torch-uncertainty.
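The packing idea described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation (their code lives in the torch-uncertainty repository linked above); the member count M and the channel widths are hypothetical choices made for the example. It shows how a grouped convolution lets M ensemble members run in one forward pass:

import torch
import torch.nn as nn

M = 4                  # hypothetical number of ensemble members
in_ch, out_ch = 3, 16  # hypothetical per-member channel widths

# One grouped convolution holds the weights of all M members at once.
# groups=M forbids weight sharing across groups, so each group of
# channels behaves as an independent subnetwork.
packed_conv = nn.Conv2d(
    in_channels=in_ch * M,
    out_channels=out_ch * M,
    kernel_size=3,
    padding=1,
    groups=M,
)

x = torch.randn(8, in_ch, 32, 32)        # a batch of images
x_packed = x.repeat(1, M, 1, 1)          # feed the same input to every member
y_packed = packed_conv(x_packed)         # single forward pass for all M members
y = y_packed.view(8, M, out_ch, 32, 32)  # split back into per-member outputs
print(y.shape)  # torch.Size([8, 4, 16, 32, 32])

Because the whole ensemble lives in a single module, it trains and runs at roughly the memory cost of one standard network, which is the efficiency argument the paper makes.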
Cite
Text
Laurent et al. "Packed Ensembles for Efficient Uncertainty Estimation." International Conference on Learning Representations, 2023.

Markdown
[Laurent et al. "Packed Ensembles for Efficient Uncertainty Estimation." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/laurent2023iclr-packed/)

BibTeX
@inproceedings{laurent2023iclr-packed,
  title     = {{Packed Ensembles for Efficient Uncertainty Estimation}},
  author    = {Laurent, Olivier and Lafage, Adrien and Tartaglione, Enzo and Daniel, Geoffrey and Martinez, Jean-Marc and Bursuc, Andrei and Franchi, Gianni},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/laurent2023iclr-packed/}
}