Learning to Segment Breast Biopsy Whole Slide Images

Abstract

We trained and applied an encoder-decoder model to semantically segment breast biopsy images into biologically meaningful tissue labels. Since conventional encoder-decoder networks cannot be applied directly to large biopsy images, and the differently sized structures in biopsies present novel challenges, we propose four modifications: (1) an input-aware encoding block to compensate for information loss, (2) a new dense connection pattern between encoder and decoder, (3) dense and sparse decoders to combine multi-level features, and (4) a multi-resolution network that fuses the results of encoder-decoders run at different resolutions. Our model outperforms a feature-based approach and conventional encoder-decoders from the literature. We use the semantic segmentations produced by our model in an automated diagnosis task and obtain higher accuracies than a baseline approach that employs an SVM for feature-based segmentation, with both methods using the same segmentation-based diagnostic features.
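The multi-resolution fusion idea (modification 4) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `fake_segmenter` is a placeholder standing in for the trained encoder-decoder, and all function names are assumptions. The core pattern is to run the same segmenter on the image at several scales, upsample each per-class probability map back to full resolution, and average before taking the per-pixel argmax.

```python
import numpy as np

def fake_segmenter(image, num_classes=3):
    """Placeholder for the encoder-decoder: returns an (h, w, C) probability map."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)  # fixed seed so the sketch is deterministic
    logits = rng.standard_normal((h, w, num_classes))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def upsample_nearest(prob, out_h, out_w):
    """Nearest-neighbour upsampling of an (h, w, C) probability map."""
    h, w = prob.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return prob[rows][:, cols]

def fuse_multi_resolution(image, scales=(1.0, 0.5, 0.25)):
    """Average class probabilities predicted at several input scales."""
    H, W = image.shape[:2]
    fused = np.zeros((H, W, 3))
    for s in scales:
        h, w = max(1, int(H * s)), max(1, int(W * s))
        # crude strided downsampling, sufficient for the sketch
        small = image[:: max(1, H // h), :: max(1, W // w)][:h, :w]
        prob = fake_segmenter(small)
        fused += upsample_nearest(prob, H, W)
    fused /= len(scales)
    return fused.argmax(axis=-1)  # final (H, W) tissue-label map
```

Averaging probability maps (rather than hard labels) lets predictions made on coarse inputs, which see more context per pixel, soften the fine-scale decisions.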

Cite

Text

Mehta et al. "Learning to Segment Breast Biopsy Whole Slide Images." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00078

Markdown

[Mehta et al. "Learning to Segment Breast Biopsy Whole Slide Images." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/mehta2018wacv-learning/) doi:10.1109/WACV.2018.00078

BibTeX

@inproceedings{mehta2018wacv-learning,
  title     = {{Learning to Segment Breast Biopsy Whole Slide Images}},
  author    = {Mehta, Sachin and Mercan, Ezgi and Bartlett, Jamen and Weaver, Donald L. and Elmore, Joann G. and Shapiro, Linda G.},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2018},
  pages     = {663--672},
  doi       = {10.1109/WACV.2018.00078},
  url       = {https://mlanthology.org/wacv/2018/mehta2018wacv-learning/}
}