Improving Deep Ensembles Without Communication

Abstract

Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised deep learning. We propose to improve deep ensembles by optimizing a PAC-Bayesian bound that is tighter than the most widely used ones. Our approach has a number of benefits over previous methods: 1) it improves performance without requiring any communication between ensemble members during training and is trivially parallelizable, 2) it reduces to a soft-thresholding gradient update that is much simpler than the alternatives. Empirically, we outperform competing approaches that try to improve ensembles by encouraging diversity. We report test accuracy gains for MLP, LeNet, and WideResNet architectures across a variety of datasets.
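Since the abstract only names the update rule, the following is a minimal, hypothetical sketch of what a generic soft-thresholding (proximal) gradient step looks like in PyTorch. The threshold `lam`, the learning rate, and the base loss are placeholder assumptions for illustration, not the quantities derived from the paper's PAC-Bayesian bound.

```python
import torch

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: shrink |x| by lam and zero out small entries.
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

def training_step(model, loss_fn, batch, lr=1e-2, lam=1e-4):
    # Illustrative proximal-gradient step: an ordinary gradient step on the loss,
    # followed by soft thresholding of each parameter. This is a generic example
    # of a soft-thresholding update, not the paper's exact rule.
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad                      # gradient step on the loss
            p.copy_(soft_threshold(p, lr * lam))  # shrinkage (soft threshold)
    return loss.item()
```

Because each ensemble member applies this step to its own parameters independently, such an update involves no communication between members and parallelizes trivially, consistent with the claim in the abstract.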

Cite

Text

Pitas et al. "Improving Deep Ensembles Without Communication." NeurIPS 2023 Workshops: WANT, 2023.

Markdown

[Pitas et al. "Improving Deep Ensembles Without Communication." NeurIPS 2023 Workshops: WANT, 2023.](https://mlanthology.org/neuripsw/2023/pitas2023neuripsw-improving/)

BibTeX

@inproceedings{pitas2023neuripsw-improving,
  title     = {{Improving Deep Ensembles Without Communication}},
  author    = {Pitas, Konstantinos and Arbel, Michael and Arbel, Julyan},
  booktitle = {NeurIPS 2023 Workshops: WANT},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/pitas2023neuripsw-improving/}
}