Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion

Abstract

Deep learning-based 2D/3D registration enables fast, robust, and accurate X-ray to CT image fusion when large annotated paired datasets are available for training. However, the need for paired CT volumes and X-ray images with ground-truth registration limits its applicability in interventional scenarios. An alternative is to train on simulated X-ray projections rendered from CT volumes, removing the need for paired annotated datasets. Yet deep neural networks trained exclusively on simulated X-ray projections can perform significantly worse on real X-ray images due to the domain gap. We propose a self-supervised 2D/3D registration framework that combines simulated training with unsupervised feature-space and pixel-space domain adaptation to overcome the domain gap and eliminate the need for paired annotated datasets. Our framework achieves a registration accuracy of 1.83 ± 1.16 mm with a high success ratio of 90.1% on real X-ray images, a 23.9% increase in success ratio over reference annotation-free algorithms.
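As a rough illustration of the training idea the abstract describes, the sketch below shows a PyTorch-style step that supervises 6-DoF pose regression on simulated projections (where ground-truth poses come for free from the CT renderer) while an adversarial domain discriminator aligns feature distributions between simulated and unlabeled real X-rays. This is not the authors' code: all module names, network shapes, and the 0.1 loss weight are illustrative assumptions, and the pixel-space adaptation branch mentioned in the abstract is omitted for brevity.

import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Shared CNN backbone; its features are what domain adaptation aligns."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )

    def forward(self, x):
        return self.net(x).flatten(1)  # (B, 32*8*8)

class PoseHead(nn.Module):
    """Regresses a 6-DoF rigid pose update (3 rotations + 3 translations)."""
    def __init__(self, dim=32 * 8 * 8):
        super().__init__()
        self.fc = nn.Linear(2 * dim, 6)  # moving + fixed image features

    def forward(self, f_moving, f_fixed):
        return self.fc(torch.cat([f_moving, f_fixed], dim=1))

encoder, pose_head = FeatureEncoder(), PoseHead()
domain_disc = nn.Sequential(nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam([*encoder.parameters(), *pose_head.parameters()], lr=1e-4)
opt_d = torch.optim.Adam(domain_disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(drr_moving, drr_fixed, pose_gt, real_xray):
    """drr_moving/drr_fixed: simulated projections with known relative pose
    pose_gt (free supervision from the renderer); real_xray: unlabeled image."""
    f_mov, f_fix = encoder(drr_moving), encoder(drr_fixed)
    reg_loss = nn.functional.mse_loss(pose_head(f_mov, f_fix), pose_gt)

    # Adversarial feature alignment: the encoder tries to make simulated
    # features indistinguishable from real ones (flipped labels).
    adv_loss = bce(domain_disc(f_mov), torch.ones(f_mov.size(0), 1))
    opt.zero_grad()
    (reg_loss + 0.1 * adv_loss).backward()
    opt.step()

    # Discriminator update on detached features: real = 1, simulated = 0.
    f_real = encoder(real_xray)
    d_loss = bce(domain_disc(f_real.detach()), torch.ones(f_real.size(0), 1)) \
           + bce(domain_disc(f_mov.detach()), torch.zeros(f_mov.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    return reg_loss.item(), d_loss.item()

Because the discriminator never needs labels on the real images, the real-domain data stays fully unannotated, which is what lets this setup sidestep paired ground-truth registrations.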

Cite

Text

Jaganathan et al. "Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion." Winter Conference on Applications of Computer Vision, 2023.

Markdown

[Jaganathan et al. "Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/jaganathan2023wacv-selfsupervised/)

BibTeX

@inproceedings{jaganathan2023wacv-selfsupervised,
  title     = {{Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion}},
  author    = {Jaganathan, Srikrishna and Kukla, Maximilian and Wang, Jian and Shetty, Karthik and Maier, Andreas},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {2788--2798},
  url       = {https://mlanthology.org/wacv/2023/jaganathan2023wacv-selfsupervised/}
}