Towards Flexible Visual Relationship Segmentation
Abstract
Visual relationship understanding has been studied separately in human-object interaction (HOI) detection, scene graph generation (SGG), and referring relationships (RR) tasks. Given the complexity and interconnectedness of these tasks, it is crucial to have a flexible framework that can address them effectively in a cohesive manner. In this work, we propose FleVRS, a single model that seamlessly integrates the above three aspects in standard and promptable visual relationship segmentation, and further possesses the capability for open-vocabulary segmentation to adapt to novel scenarios. FleVRS leverages the synergy between text and image modalities to ground various types of relationships from images, and uses textual features from vision-language models for visual conceptual understanding. Empirical validation across various datasets demonstrates that our framework outperforms existing models in standard, promptable, and open-vocabulary tasks, e.g., +1.9 $mAP$ on HICO-DET, +11.4 $Acc$ on VRD, and +4.7 $mAP$ on unseen HICO-DET. FleVRS represents a significant step towards a more intuitive, comprehensive, and scalable understanding of visual relationships.
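The abstract compresses three task settings (standard, promptable, and open-vocabulary relationship segmentation) into one model interface. As a rough illustration only, the sketch below shows what such a unified, promptable API could look like in Python; every name in it (RelationTriplet, segment_relationships, the prompt argument) is a hypothetical stand-in for exposition, not the authors' actual code or released API.

# Hypothetical sketch of a unified visual relationship segmentation
# interface, loosely following the abstract. All names here are
# illustrative assumptions, not the FleVRS implementation.
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class RelationTriplet:
    subject_mask: np.ndarray   # binary mask for the subject entity
    object_mask: np.ndarray    # binary mask for the object entity
    predicate: str             # free-form predicate string, e.g. "riding"
    score: float               # confidence, e.g. similarity to a VLM text embedding

def segment_relationships(image: np.ndarray,
                          prompt: Optional[str] = None) -> List[RelationTriplet]:
    """Standard mode (prompt=None): return all detected
    <subject, predicate, object> mask triplets, covering HOI-style
    and SGG-style relations alike.
    Promptable mode: a text prompt such as "person riding horse"
    (or a partial one like "person riding") restricts the output to
    matching relations, as in referring-relationship tasks.
    Open-vocabulary behavior falls out of scoring predicates against
    vision-language-model text embeddings rather than a fixed
    classifier head, so unseen categories remain expressible."""
    raise NotImplementedError  # placeholder; no real model behind this sketch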
Cite

Text:
Zhu et al. "Towards Flexible Visual Relationship Segmentation." Neural Information Processing Systems, 2024. doi:10.52202/079017-3419

Markdown:
[Zhu et al. "Towards Flexible Visual Relationship Segmentation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhu2024neurips-flexible/) doi:10.52202/079017-3419

BibTeX:
@inproceedings{zhu2024neurips-flexible,
title = {{Towards Flexible Visual Relationship Segmentation}},
author = {Zhu, Fangrui and Yang, Jianwei and Jiang, Huaizu},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-3419},
url = {https://mlanthology.org/neurips/2024/zhu2024neurips-flexible/}
}