Occlusion-Aware Seamless Segmentation
Abstract
Panoramic images broaden the Field of View (FoV), occlusion-aware prediction deepens the understanding of a scene, and domain adaptation transfers models across viewing domains. In this work, we introduce a novel task, Occlusion-Aware Seamless Segmentation (OASS), which tackles all three of these challenges simultaneously. For benchmarking OASS, we establish a new human-annotated dataset for Blending Panoramic Amodal Seamless Segmentation, i.e., BlendPASS. Moreover, we propose UnmaskFormer, the first solution for OASS, which aims to unmask the narrow FoV, occlusions, and domain gaps all at once. Specifically, UnmaskFormer includes two crucial designs: Unmasking Attention (UA) and Amodal-oriented Mix (AoMix). Our method achieves state-of-the-art performance on the BlendPASS dataset, reaching a remarkable 26.58% in mAPQ and 43.66% in mIoU. On public panoramic semantic segmentation datasets, i.e., SynPASS and DensePASS, our method outperforms previous methods, obtaining 45.34% and 48.08% in mIoU, respectively. The BlendPASS dataset and our source code are available at https://github.com/yihong-97/OASS.
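As a loose illustration of the mixing idea named in the abstract, the sketch below shows a class-mix style augmentation that pastes source-domain instances into a target-domain image while carrying their full amodal masks along as pseudo-labels. This is an assumption for illustration only: the actual AoMix design is specified in the paper and the repository, and the function `aomix_sketch` and its 50% sampling rate are hypothetical.

```python
# Minimal sketch of a class-mix style augmentation in the spirit of
# Amodal-oriented Mix (AoMix). Hypothetical; the paper's AoMix may differ.
import numpy as np


def aomix_sketch(src_img, src_amodal_masks, tgt_img, rng=None):
    """Paste source instances, with their amodal masks, onto a target image.

    src_img:          (H, W, 3) source-domain image
    src_amodal_masks: list of (H, W) boolean amodal instance masks
    tgt_img:          (H, W, 3) target-domain image
    Returns the mixed image and the transferred masks as pseudo-labels.
    """
    rng = rng or np.random.default_rng()
    mixed = tgt_img.copy()
    # Randomly transfer roughly half of the source instances (assumed rate).
    chosen = [m for m in src_amodal_masks if rng.random() < 0.5]
    for mask in chosen:
        # Copy every pixel inside the amodal mask, so the pasted instance
        # appears unoccluded in the mixed image; its amodal mask then
        # supervises the full object extent.
        mixed[mask] = src_img[mask]
    return mixed, chosen
```

Because the pasted pixels cover the entire amodal region, each transferred instance appears unoccluded in the mixed image, giving the model direct supervision for full object extents rather than only visible fragments.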
Cite
Text
Cao et al. "Occlusion-Aware Seamless Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72655-2_8

Markdown
[Cao et al. "Occlusion-Aware Seamless Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/cao2024eccv-occlusionaware/) doi:10.1007/978-3-031-72655-2_8

BibTeX
@inproceedings{cao2024eccv-occlusionaware,
title = {{Occlusion-Aware Seamless Segmentation}},
author = {Cao, Yihong and Zhang, Jiaming and Shi, Hao and Peng, Kunyu and Zhang, Yuhongxuan and Zhang, Hui and Stiefelhagen, Rainer and Yang, Kailun},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72655-2_8},
url = {https://mlanthology.org/eccv/2024/cao2024eccv-occlusionaware/}
}