Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction
Abstract
Image reconstruction methods based on deep neural networks have shown outstanding performance, equalling or exceeding the state of the art of conventional approaches, but often provide no uncertainty information about the reconstruction. In this work we propose a scalable and efficient framework for simultaneously quantifying aleatoric and epistemic uncertainties in learned iterative image reconstruction. We build on a Bayesian deep gradient descent method for quantifying epistemic uncertainty and incorporate the heteroscedastic variance of the noise to account for the aleatoric uncertainty. We show that the proposed framework exhibits competitive performance against conventional benchmarks for computed tomography with both sparse-view and limited-angle data. The estimated uncertainty captures the variability in the reconstructions, caused by the restricted measurement model and by missing information due to the limited-angle geometry.
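The aleatoric/epistemic split described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' Bayesian deep gradient descent; it is a generic PyTorch illustration, assuming a hypothetical toy network (`HeteroscedasticNet`) that predicts a per-pixel mean and log-variance, with epistemic uncertainty approximated by the spread over several posterior samples (here, an ensemble standing in for the Bayesian posterior).

```python
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    """Toy reconstruction network with two output heads: a per-pixel mean
    and a per-pixel log-variance modelling the heteroscedastic noise."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.mean_head = nn.Conv2d(32, channels, kernel_size=3, padding=1)
        self.log_var_head = nn.Conv2d(32, channels, kernel_size=3, padding=1)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)


def heteroscedastic_nll(mean, log_var, target):
    """Gaussian negative log-likelihood with a learned per-pixel variance.
    Training with this loss lets the network down-weight pixels it deems
    noisy, which captures the aleatoric component."""
    return 0.5 * (torch.exp(-log_var) * (target - mean) ** 2 + log_var).mean()


@torch.no_grad()
def predict_with_uncertainty(models, x):
    """Decompose predictive uncertainty over posterior samples `models`:
    epistemic = variance of the predicted means across samples,
    aleatoric = average of the predicted per-pixel variances."""
    outputs = [m(x) for m in models]
    means = torch.stack([mu for mu, _ in outputs])
    aleatoric = torch.stack([lv.exp() for _, lv in outputs]).mean(dim=0)
    epistemic = means.var(dim=0)
    return means.mean(dim=0), epistemic, aleatoric


if __name__ == "__main__":
    # Three independently trained networks stand in for posterior samples.
    models = [HeteroscedasticNet() for _ in range(3)]
    x = torch.randn(1, 1, 64, 64)  # e.g. a filtered back-projection input
    recon, epistemic, aleatoric = predict_with_uncertainty(models, x)
    print(recon.shape, epistemic.shape, aleatoric.shape)
```

In the paper's setting the posterior samples would come from the Bayesian deep gradient descent iterates rather than an ensemble; the decomposition of the two uncertainty maps is the same.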
Cite
Text
Barbano et al. "Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction." NeurIPS 2020 Workshops: Deep_Inverse, 2020.
Markdown
[Barbano et al. "Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction." NeurIPS 2020 Workshops: Deep_Inverse, 2020.](https://mlanthology.org/neuripsw/2020/barbano2020neuripsw-quantifying/)
BibTeX
@inproceedings{barbano2020neuripsw-quantifying,
title = {{Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction}},
author = {Barbano, Riccardo and Kereta, Zeljko and Zhang, Chen and Hauptmann, Andreas and Arridge, Simon and Jin, Bangti},
booktitle = {NeurIPS 2020 Workshops: Deep_Inverse},
year = {2020},
url = {https://mlanthology.org/neuripsw/2020/barbano2020neuripsw-quantifying/}
}