Memory-Efficient Learning for Large-Scale Computational Imaging

Abstract

Computational imaging systems jointly design computation and hardware to retrieve information that is not accessible with standard imaging systems. Recently, critical aspects such as experimental design and image priors have been optimized through deep neural networks formed by the unrolled iterations of classical physics-based reconstructions (termed physics-based networks). However, for real-world large-scale systems, computing gradients via backpropagation restricts learning due to the memory limitations of graphics processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging. We demonstrate our method’s practicality on two large-scale systems: super-resolution optical microscopy and multi-channel magnetic resonance imaging.
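The key idea in the abstract — inverting each layer during the backward pass to reconstruct its input, rather than caching activations during the forward pass — can be sketched in a few lines. The following is a minimal illustration using a hypothetical invertible elementwise affine layer (not the paper's actual physics-based network); all class and function names are for illustration only.

```python
import numpy as np

class AffineLayer:
    """Hypothetical invertible layer: y = scale * x + shift (elementwise)."""

    def __init__(self, scale, shift):
        self.scale, self.shift = scale, shift

    def forward(self, x):
        return self.scale * x + self.shift

    def inverse(self, y):
        # Recover the layer's input from its output: this is what makes
        # storing intermediate activations unnecessary.
        return (y - self.shift) / self.scale

    def vjp(self, grad_y):
        # Vector-Jacobian product w.r.t. the input: d(scale*x + shift)/dx = scale.
        return grad_y * self.scale


def memory_efficient_grad(layers, x0, grad_out):
    """Backpropagate without caching intermediate activations.

    The forward pass keeps only the final output; the backward pass
    reconstructs each layer's input by inverting that layer, then applies
    the local vector-Jacobian product.
    """
    y = x0
    for layer in layers:
        y = layer.forward(y)      # nothing is stored per layer

    grad = grad_out
    for layer in reversed(layers):
        y = layer.inverse(y)      # recompute this layer's input
        grad = layer.vjp(grad)    # local backward step
    return grad                   # gradient w.r.t. x0
```

Because activations are recomputed from the output, memory usage stays constant in the number of unrolled iterations, at the cost of one inverse evaluation per layer in the backward pass.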

Cite

Text

Kellman et al. "Memory-Efficient Learning for Large-Scale Computational Imaging." NeurIPS 2019 Workshops: Deep_Inverse, 2019.

Markdown

[Kellman et al. "Memory-Efficient Learning for Large-Scale Computational Imaging." NeurIPS 2019 Workshops: Deep_Inverse, 2019.](https://mlanthology.org/neuripsw/2019/kellman2019neuripsw-memoryefficient/)

BibTeX

@inproceedings{kellman2019neuripsw-memoryefficient,
  title     = {{Memory-Efficient Learning for Large-Scale Computational Imaging}},
  author    = {Kellman, Michael and Tamir, Jon and Bostan, Emrah and Lustig, Michael and Waller, Laura},
  booktitle = {NeurIPS 2019 Workshops: Deep_Inverse},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/kellman2019neuripsw-memoryefficient/}
}