Learning Semantic Associations for Mirror Detection

Abstract

Mirrors generally lack a consistent visual appearance, making mirror detection very challenging. Although recent works based on exploiting contextual contrasts and corresponding relations have achieved good results, relying heavily on these cues tends to fail in complex real-world scenes, where many objects, e.g., doorways, may have features similar to those of mirrors. We observe that humans tend to place mirrors in relation to certain objects for specific functional purposes, e.g., a mirror above the sink. Inspired by this observation, we propose a model that exploits the semantic associations between a mirror and its surrounding objects for reliable mirror localization. Our model first acquires class-specific knowledge of the surrounding objects via a semantic side-path. It then uses two novel modules to exploit semantic associations: 1) an Associations Exploration (AE) Module, which extracts the associations of the scene objects based on fully connected graph models, and 2) a Quadruple-Graph (QG) Module, which facilitates the diffusion and aggregation of semantic association knowledge using graph convolutions. Extensive experiments show that our method outperforms existing methods and sets a new state of the art on both the PMD dataset (f-measure: 0.844) and the MSD dataset (f-measure: 0.889).
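
To illustrate the general idea of propagating semantic associations over a fully connected object graph with graph convolutions, the sketch below is a minimal, hypothetical PyTorch reconstruction, not the authors' released code: the class name FullyConnectedGraphConv, the affinity-based adjacency, and all shapes are assumptions chosen for clarity.

# Illustrative sketch only; not the paper's implementation.
# Assumes N scene-object/region features of dimension D form a fully
# connected graph, with edge weights derived from pairwise affinity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullyConnectedGraphConv(nn.Module):
    """One graph-convolution step over a fully connected feature graph,
    loosely mirroring how association knowledge could be diffused and
    aggregated across scene objects (hypothetical reconstruction)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (N, D) node features, one node per scene object/region.
        # Dense adjacency from pairwise feature affinity, row-normalized.
        adj = F.softmax(x @ x.t(), dim=-1)        # (N, N)
        # Aggregate neighbor features, then transform (graph convolution);
        # the residual connection preserves each node's own identity.
        return F.relu(self.proj(adj @ x)) + x

# Usage: 5 object nodes with 64-dim features.
x = torch.randn(5, 64)
out = FullyConnectedGraphConv(64)(x)  # (5, 64) association-enriched features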

Cite

Text

Guan et al. "Learning Semantic Associations for Mirror Detection." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00585

Markdown

[Guan et al. "Learning Semantic Associations for Mirror Detection." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/guan2022cvpr-learning/) doi:10.1109/CVPR52688.2022.00585

BibTeX

@inproceedings{guan2022cvpr-learning,
  title     = {{Learning Semantic Associations for Mirror Detection}},
  author    = {Guan, Huankang and Lin, Jiaying and Lau, Rynson W.H.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {5941-5950},
  doi       = {10.1109/CVPR52688.2022.00585},
  url       = {https://mlanthology.org/cvpr/2022/guan2022cvpr-learning/}
}