REFINE: Prediction Fusion Network for Panoptic Segmentation
Abstract
Panoptic segmentation aims to assign a class label and an instance identity to every pixel in the input image, a challenging task that is far more involved than naively fusing semantic and instance segmentation results. Prediction fusion is therefore key to accurate panoptic segmentation. In this paper, we present REFINE, a pREdiction FusIon NEtwork for panoptic segmentation, which achieves high-quality panoptic segmentation by improving both cross-task and within-task prediction fusion. Our single model with a ResNeXt-101 backbone and deformable convolutions (DCN) achieves PQ=51.5 on the COCO dataset, surpassing the state of the art by a convincing margin and matching ensembled models. Our smaller model with a ResNet-50 backbone achieves PQ=44.9, comparable to state-of-the-art methods with larger backbones.
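For readers unfamiliar with the baseline the abstract contrasts against, the sketch below illustrates the naive heuristic fusion of semantic and instance predictions (not the REFINE architecture itself): instance masks are pasted in confidence order, and remaining pixels are filled from the semantic "stuff" prediction. The function name, overlap threshold, and array shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def naive_panoptic_fusion(instance_masks, instance_scores, instance_classes,
                          semantic_pred, stuff_classes, overlap_thresh=0.5):
    """Heuristic merge of instance and semantic outputs into a panoptic map.

    instance_masks: (N, H, W) boolean masks from the instance branch.
    semantic_pred:  (H, W) integer class ids from the semantic branch.
    Returns a (H, W) segment-id map and a dict mapping segment id -> class.
    """
    H, W = semantic_pred.shape
    panoptic = np.zeros((H, W), dtype=np.int64)  # 0 = unassigned
    segment_classes = {}
    next_id = 1

    # 1) Paste instance ("thing") masks from highest to lowest confidence,
    #    skipping masks whose free area is mostly claimed already.
    for idx in np.argsort(-np.asarray(instance_scores)):
        mask = instance_masks[idx]
        free = mask & (panoptic == 0)
        if mask.sum() == 0 or free.sum() / mask.sum() < overlap_thresh:
            continue
        panoptic[free] = next_id
        segment_classes[next_id] = int(instance_classes[idx])
        next_id += 1

    # 2) Fill the remaining pixels with "stuff" classes from the semantic head.
    for cls in stuff_classes:
        region = (semantic_pred == cls) & (panoptic == 0)
        if region.any():
            panoptic[region] = next_id
            segment_classes[next_id] = int(cls)
            next_id += 1

    return panoptic, segment_classes
```

Because this merge resolves conflicts with fixed rules rather than learned ones, errors from either branch propagate directly into the panoptic output; learned cross-task and within-task fusion, as proposed in the paper, is meant to address exactly this limitation.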
Cite
Text
Ren et al. "REFINE: Prediction Fusion Network for Panoptic Segmentation." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I3.16349
Markdown
[Ren et al. "REFINE: Prediction Fusion Network for Panoptic Segmentation." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/ren2021aaai-refine/) doi:10.1609/AAAI.V35I3.16349
BibTeX
@inproceedings{ren2021aaai-refine,
title = {{REFINE: Prediction Fusion Network for Panoptic Segmentation}},
author = {Ren, Jiawei and Yu, Cunjun and Cai, Zhongang and Zhang, Mingyuan and Chen, Chongsong and Zhao, Haiyu and Yi, Shuai and Li, Hongsheng},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {2477--2485},
doi = {10.1609/AAAI.V35I3.16349},
url = {https://mlanthology.org/aaai/2021/ren2021aaai-refine/}
}