Texture Synthesis for Realistic-Looking Virtual Colonoscopy Using Mask-Aware Transformer
Abstract
In virtual colonoscopy, computer vision techniques focus on depth estimation, photometric tracking, and simultaneous localization and mapping (SLAM). To narrow the domain gap between virtual and real colonoscopy data, it is necessary to utilize real-world data or employ a realistic-looking virtual dataset. We introduce a texture synthesis and outpainting strategy using a mask-aware transformer. The method generates textures for the inner colon surface that are realistic-looking, controllable, and varied, making them suitable for virtual colonoscopy. Using the generated virtual colonoscopy, we produced an RGB-D dataset of 9 video sequences. Each sequence was rendered from a distinct colon model, for a total of 14,120 frames paired with ground-truth depth. When evaluated for generalizability across various datasets, a depth estimation model trained on our dataset exhibited superior transfer performance.
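The abstract describes growing surface textures by outpainting with a mask-aware transformer. Below is a minimal sketch of that idea, not the authors' code: a seed texture is extended column-block by column-block, where each step exposes a known overlap region as context and asks a mask-aware generator to fill the masked new region. The `generator` function here is a hypothetical stand-in for a pretrained mask-aware transformer (e.g. a MAT-style inpainting model) and is mocked with noise so the sketch runs end to end.

```python
import numpy as np

def generator(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hypothetical mask-aware model: fills pixels where mask == 1.

    Mocked with noise matched to the known region's statistics so the
    outpainting loop below runs without model weights. In practice this
    would be a forward pass of a pretrained mask-aware transformer.
    """
    known = image[mask == 0]                      # (n_known, 3) context pixels
    fill = np.random.normal(known.mean(), known.std() + 1e-6,
                            size=image[mask == 1].shape)
    out = image.copy()
    out[mask == 1] = np.clip(fill, 0.0, 1.0)
    return out

def outpaint_texture(seed: np.ndarray, target_w: int,
                     step: int = 64, overlap: int = 32) -> np.ndarray:
    """Extend a seed texture rightward until it spans target_w columns.

    Each iteration appends `step` empty columns, marks them as masked,
    and synthesizes them from a fixed-size context window of `overlap`
    known columns, so memory per call stays constant.
    """
    tex = seed.astype(np.float32)
    while tex.shape[1] < target_w:
        h, w, c = tex.shape
        canvas = np.zeros((h, w + step, c), dtype=np.float32)
        canvas[:, :w] = tex
        mask = np.zeros((h, w + step), dtype=np.uint8)
        mask[:, w:] = 1                           # new columns to synthesize
        window = canvas[:, -(overlap + step):]    # context + masked region
        wmask = mask[:, -(overlap + step):]
        canvas[:, -(overlap + step):] = generator(window, wmask)
        tex = canvas
    return tex[:, :target_w]

seed = np.random.rand(256, 128, 3).astype(np.float32)
texture = outpaint_texture(seed, target_w=1024)
print(texture.shape)  # (256, 1024, 3)
```

The resulting texture strip could then be UV-mapped onto a colon mesh and rendered to produce RGB frames paired with ground-truth depth, as in the dataset described above; the specific mapping and renderer are not specified in the abstract.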
Cite
Text
Jang et al. "Texture Synthesis for Realistic-Looking Virtual Colonoscopy Using Mask-Aware Transformer." NeurIPS 2023 Workshops: DGM4H, 2023.
BibTeX
@inproceedings{jang2023neuripsw-texture,
title = {{Texture Synthesis for Realistic-Looking Virtual Colonoscopy Using Mask-Aware Transformer}},
author = {Jang, Seunghyun and Kim, Yisak and Lee, Dongheon and Park, Chang Min},
booktitle = {NeurIPS 2023 Workshops: DGM4H},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/jang2023neuripsw-texture/}
}