Greedy Learning for Large-Scale Neural MRI Reconstruction
Abstract
Model-based deep learning approaches have recently shown state-of-the-art performance for accelerated MRI reconstruction. These methods unroll iterative proximal gradient descent, alternating between a data-consistency step and a neural-network-based proximal operation. However, they demand many unrolled iterations with sufficiently expressive proximals for high-resolution and multi-dimensional imaging (e.g., 3D MRI). This impedes traditional training via backpropagation, since computing end-to-end gradients and storing intermediate activations for every layer requires prohibitive memory and compute. To address this challenge, we advocate an alternative training method that greedily relaxes the objective: we split the end-to-end network into decoupled network modules and optimize each module separately, thereby avoiding the need to compute costly end-to-end gradients. We empirically demonstrate that the proposed greedy learning method requires 6x less memory with no additional computation, while generalizing slightly better than backpropagation.
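The sketch below is a rough illustration of the greedy scheme the abstract describes: each unrolled module is trained against its own local loss, and activations are detached between modules so no end-to-end gradient (or activation history) needs to be stored. All names here (ProxNet, data_consistency, the single-coil Cartesian forward model, the L1 local loss) are illustrative assumptions for the sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical learned proximal: a small residual CNN refining the image estimate.
class ProxNet(nn.Module):
    def __init__(self, channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement


def data_consistency(x, y, mask, step=1.0):
    # Gradient step on ||M F x - y||^2 for an assumed single-coil Cartesian model.
    # x: (B, 2, H, W) real/imag image, y: (B, H, W) complex k-space, mask: (B, H, W).
    x_c = torch.view_as_complex(x.permute(0, 2, 3, 1).contiguous())
    k = torch.fft.fft2(x_c, norm="ortho")
    k = k - step * mask * (mask * k - y)
    x_c = torch.fft.ifft2(k, norm="ortho")
    return torch.view_as_real(x_c).permute(0, 3, 1, 2).contiguous()


# Greedy training: each unrolled module gets its own optimizer and local loss.
modules = [ProxNet() for _ in range(5)]
optims = [torch.optim.Adam(m.parameters(), lr=1e-4) for m in modules]

def greedy_step(y, mask, x0, target, loss_fn=nn.functional.l1_loss):
    x = x0
    for m, opt in zip(modules, optims):
        x = data_consistency(x, y, mask)
        x = m(x)
        loss = loss_fn(x, target)   # local loss against the reference image
        opt.zero_grad()
        loss.backward()             # gradients reach only the current module
        opt.step()
        x = x.detach()              # block end-to-end gradient flow between modules
    return x
```

Because each module's activations can be freed once its local update is done, peak memory scales with a single module rather than the full unrolled depth, which is the effect the abstract quantifies.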
Cite
Text
Ozturkler et al. "Greedy Learning for Large-Scale Neural MRI Reconstruction." NeurIPS 2021 Workshops: Deep_Inverse, 2021.
Markdown
[Ozturkler et al. "Greedy Learning for Large-Scale Neural MRI Reconstruction." NeurIPS 2021 Workshops: Deep_Inverse, 2021.](https://mlanthology.org/neuripsw/2021/ozturkler2021neuripsw-greedy/)
BibTeX
@inproceedings{ozturkler2021neuripsw-greedy,
  title = {{Greedy Learning for Large-Scale Neural MRI Reconstruction}},
  author = {Ozturkler, Batu and Sahiner, Arda and Ergen, Tolga and Desai, Arjun D and Pauly, John M. and Vasanawala, Shreyas and Mardani, Morteza and Pilanci, Mert},
  booktitle = {NeurIPS 2021 Workshops: Deep_Inverse},
  year = {2021},
  url = {https://mlanthology.org/neuripsw/2021/ozturkler2021neuripsw-greedy/}
}