Relational Context Learning for Human-Object Interaction Detection

Abstract

Recent state-of-the-art methods for human-object interaction (HOI) detection typically build on transformer architectures with two decoder branches, one for human-object pair detection and the other for interaction classification. Such disentangled transformers, however, may suffer from insufficient context exchange between the branches and thus lack the contextual information needed for relational reasoning, which is critical for discovering HOI instances. In this work, we propose the multiplex relation network (MUREN), which performs rich context exchange among three decoder branches using unary, pairwise, and ternary relations of human, object, and interaction tokens. The proposed method learns comprehensive relational contexts for discovering HOI instances, achieving state-of-the-art performance on two standard HOI detection benchmarks, HICO-DET and V-COCO.
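
The sketch below illustrates, in simplified PyTorch, the high-level idea stated in the abstract: tokens from three decoder branches (human, object, and interaction) exchange unary, pairwise, and ternary relation contexts, and each branch receives the fused context. The module name, dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multiplex relational context exchange among three decoder
# branches. Hypothetical design for illustration only; MUREN's actual modules
# and fusion scheme differ.
import torch
import torch.nn as nn


class MultiplexRelationContext(nn.Module):
    """Builds unary, pairwise, and ternary contexts from human/object/interaction
    tokens and adds a fused context back to each branch (assumed, simplified)."""

    def __init__(self, dim: int):
        super().__init__()
        self.unary = nn.ModuleDict({k: nn.Linear(dim, dim) for k in ("h", "o", "i")})
        self.pairwise = nn.ModuleDict({k: nn.Linear(2 * dim, dim) for k in ("ho", "oi", "ih")})
        self.ternary = nn.Linear(3 * dim, dim)
        self.fuse = nn.ModuleDict({k: nn.Linear(7 * dim, dim) for k in ("h", "o", "i")})

    def forward(self, h, o, i):
        # h, o, i: (batch, num_queries, dim) tokens from the three branches.
        u = {k: layer(x) for (k, layer), x in zip(self.unary.items(), (h, o, i))}
        p = {
            "ho": self.pairwise["ho"](torch.cat([h, o], dim=-1)),
            "oi": self.pairwise["oi"](torch.cat([o, i], dim=-1)),
            "ih": self.pairwise["ih"](torch.cat([i, h], dim=-1)),
        }
        t = self.ternary(torch.cat([h, o, i], dim=-1))
        # Every branch sees all relation contexts (multiplexed exchange).
        ctx = torch.cat([u["h"], u["o"], u["i"], p["ho"], p["oi"], p["ih"], t], dim=-1)
        return tuple(x + self.fuse[k](ctx) for k, x in zip(("h", "o", "i"), (h, o, i)))


if __name__ == "__main__":
    B, N, D = 2, 16, 256  # batch, HOI queries, token dimension (assumed values)
    exchange = MultiplexRelationContext(D)
    h, o, i = (torch.randn(B, N, D) for _ in range(3))
    h2, o2, i2 = exchange(h, o, i)
    print(h2.shape, o2.shape, i2.shape)  # each: torch.Size([2, 16, 256])
```

In this toy version, such an exchange block would be interleaved with the decoder layers of the three branches so that detection and interaction tokens stay mutually informed; the residual addition keeps each branch's own representation intact.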

Cite

Text

Kim et al. "Relational Context Learning for Human-Object Interaction Detection." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00286

Markdown

[Kim et al. "Relational Context Learning for Human-Object Interaction Detection." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/kim2023cvpr-relational/) doi:10.1109/CVPR52729.2023.00286

BibTeX

@inproceedings{kim2023cvpr-relational,
  title     = {{Relational Context Learning for Human-Object Interaction Detection}},
  author    = {Kim, Sanghyun and Jung, Deunsol and Cho, Minsu},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {2925--2934},
  doi       = {10.1109/CVPR52729.2023.00286},
  url       = {https://mlanthology.org/cvpr/2023/kim2023cvpr-relational/}
}