Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation

Abstract

Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them use a single modality or stack multiple modalities as different input channels, ignoring the correlations among them. To leverage the multiple modalities, we propose a deep convolutional encoder-decoder structure with fusion layers that incorporates different modalities of MRI data. In addition, we exploit a convolutional LSTM (convLSTM) to model a sequence of 2D slices, and jointly learn the multi-modal fusion and the convLSTM in an end-to-end manner. To avoid converging to certain dominant labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
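For readers who want a concrete picture of the two main ideas in the abstract (cross-modality fusion of MRI modalities followed by a convLSTM over 2D slices, trained with a class-reweighted loss), the sketch below is a minimal, hypothetical PyTorch rendition. It is not the authors' implementation; the shared encoder, layer sizes, and inverse-frequency weights are assumptions made purely for illustration.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: the four LSTM gates are computed with convolutions."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, c

class MultiModalSliceSegmenter(nn.Module):
    """Encode each modality, fuse across modalities, run a ConvLSTM over slices."""
    def __init__(self, n_modalities=4, feat_ch=32, n_classes=5):
        super().__init__()
        # One shared 2D encoder applied to every modality (an assumption; the
        # paper may use per-modality encoders and a decoder path as well).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Cross-modality fusion: 1x1 convolution over concatenated modality features.
        self.fuse = nn.Conv2d(n_modalities * feat_ch, feat_ch, 1)
        self.convlstm = ConvLSTMCell(feat_ch, feat_ch)
        self.classifier = nn.Conv2d(feat_ch, n_classes, 1)

    def forward(self, volume):
        # volume: (batch, slices, modalities, H, W)
        b, s, m, hgt, wid = volume.shape
        h = volume.new_zeros(b, self.convlstm.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        logits = []
        for t in range(s):  # iterate over the sequence of 2D slices
            feats = [self.encoder(volume[:, t, i:i + 1]) for i in range(m)]
            fused = self.fuse(torch.cat(feats, dim=1))
            h, c = self.convlstm(fused, (h, c))
            logits.append(self.classifier(h))
        return torch.stack(logits, dim=1)  # (batch, slices, classes, H, W)

# One common realisation of a re-weighting scheme for label imbalance is a
# class-weighted cross-entropy (weights here are illustrative, e.g. down-weighting
# the dominant background class).
class_weights = torch.tensor([0.1, 1.0, 1.0, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

In this reading, the fusion layer learns how the modalities should be combined at each spatial location, while the ConvLSTM propagates context between neighbouring slices so that the per-slice predictions are consistent along the third dimension.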

Cite

Text

Tseng et al. "Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.398

Markdown

[Tseng et al. "Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/tseng2017cvpr-joint/) doi:10.1109/CVPR.2017.398

BibTeX

@inproceedings{tseng2017cvpr-joint,
  title     = {{Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation}},
  author    = {Tseng, Kuan-Lun and Lin, Yen-Liang and Hsu, Winston and Huang, Chung-Yang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.398},
  url       = {https://mlanthology.org/cvpr/2017/tseng2017cvpr-joint/}
}