Pixel-Pair Occlusion Relationship Map (P2ORM): Formulation, Inference & Application

Abstract

We formalize concepts around geometric occlusion in 2D images (i.e., ignoring semantics), and propose a novel unified formulation of both occlusion boundaries and occlusion orientations via a pixel-pair occlusion relation. The former provides a way to generate large-scale accurate occlusion datasets while, based on the latter, we propose a novel method for task-independent pixel-level occlusion relationship estimation from single images. Experiments on a variety of datasets demonstrate that our method outperforms existing ones on this task. To further illustrate the value of our formulation, we also propose a new depth map refinement method that consistently improves the performance of state-of-the-art monocular depth estimation methods.

Cite

Text

Qiu et al. "Pixel-Pair Occlusion Relationship Map (P2ORM): Formulation, Inference & Application." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58548-8_40

Markdown

[Qiu et al. "Pixel-Pair Occlusion Relationship Map (P2ORM): Formulation, Inference & Application." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/qiu2020eccv-pixelpair/) doi:10.1007/978-3-030-58548-8_40

BibTeX

@inproceedings{qiu2020eccv-pixelpair,
  title     = {{Pixel-Pair Occlusion Relationship Map (P2ORM): Formulation, Inference \& Application}},
  author    = {Qiu, Xuchong and Xiao, Yang and Wang, Chaohui and Marlet, Renaud},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58548-8_40},
  url       = {https://mlanthology.org/eccv/2020/qiu2020eccv-pixelpair/}
}