Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation

Abstract

A versatile medical image segmentation model applicable to images acquired with diverse equipment and protocols can facilitate model deployment and maintenance. However, building such a model typically demands a large, diverse, and fully annotated dataset, which is challenging to obtain due to the labor-intensive nature of data curation. To address this challenge, we propose a cost-effective alternative that harnesses multi-source data with only partial or sparse segmentation labels for training, substantially reducing the cost of developing a versatile model. We devise strategies for model self-disambiguation, prior knowledge incorporation, and imbalance mitigation to tackle challenges associated with inconsistently labeled multi-source data, including label ambiguity and modality, dataset, and class imbalances. Experimental results on a multi-modal dataset compiled from eight different sources for abdominal structure segmentation demonstrate the effectiveness and superior performance of our method compared with state-of-the-art alternative approaches. We anticipate that its cost-saving features, which optimize the utilization of existing annotated data and reduce the annotation effort for new data, will have a significant impact in the field.
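To make the partial-label training setting concrete, the sketch below shows one common way such a loss can be formulated: predicted probabilities of classes that a given source never annotates are folded into the background before computing cross-entropy, so the model is not penalized for segmenting structures that source simply did not label. This is a minimal illustrative sketch under assumed conventions (background index 0, unannotated structures appearing as 0 in the targets, hypothetical class indices), not the authors' implementation.

```python
# Minimal sketch of a marginalization-style loss for multi-source data
# with inconsistent label sets (assumptions noted above; not the paper's code).
import torch
import torch.nn.functional as F


def partial_label_loss(logits, target, labeled_classes):
    """Cross-entropy for a source that labels only a subset of classes.

    logits:          (B, C, ...) raw network outputs over the full class set.
    target:          (B, ...) integer labels; unannotated structures appear as 0.
    labeled_classes: iterable of foreground class indices annotated in this source.
    """
    probs = F.softmax(logits, dim=1)
    num_classes = logits.shape[1]

    labeled = torch.zeros(num_classes, dtype=torch.bool, device=logits.device)
    labeled[0] = True                       # background is always "labeled"
    labeled[list(labeled_classes)] = True

    # Fold probabilities of unannotated classes into background, then keep
    # only the columns this source actually annotates.
    bg = probs[:, 0:1] + probs[:, ~labeled].sum(dim=1, keepdim=True)
    fg = probs[:, labeled][:, 1:]           # annotated foreground classes
    merged = torch.cat([bg, fg], dim=1).clamp_min(1e-7)

    # Remap the targets to the merged class indexing.
    remap = torch.zeros(num_classes, dtype=torch.long, device=logits.device)
    remap[labeled] = torch.arange(int(labeled.sum()), device=logits.device)
    return F.nll_loss(merged.log(), remap[target])


# Example: a 5-class model trained on a source that annotates only two of
# the foreground structures (class indices 1 and 3 here are hypothetical).
logits = torch.randn(2, 5, 16, 16, 16)
target = torch.randint(0, 5, (2, 16, 16, 16))
target[(target == 2) | (target == 4)] = 0   # unannotated classes show up as background
loss = partial_label_loss(logits, target, labeled_classes=[1, 3])
```

A loss of this form only handles label ambiguity; the dataset-, modality-, and class-imbalance issues mentioned in the abstract would additionally require balanced sampling or reweighting across sources.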

Cite

Text

Chen et al. "Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01116

Markdown

[Chen et al. "Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/chen2024cvpr-versatile/) doi:10.1109/CVPR52733.2024.01116

BibTeX

@inproceedings{chen2024cvpr-versatile,
  title     = {{Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation}},
  author    = {Chen, Xiaoyang and Zheng, Hao and Li, Yuemeng and Ma, Yuncong and Ma, Liang and Li, Hongming and Fan, Yong},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {11747--11756},
  doi       = {10.1109/CVPR52733.2024.01116},
  url       = {https://mlanthology.org/cvpr/2024/chen2024cvpr-versatile/}
}