Unbiased Scene Graph Generation in Videos

Abstract

The task of dynamic scene graph generation (SGG) from videos is complicated and challenging due to the inherent dynamics of a scene, temporal fluctuation of model predictions, and the long-tailed distribution of the visual relationships, in addition to the already existing challenges in image-based SGG. Existing methods for dynamic SGG have primarily focused on capturing spatio-temporal context using complex architectures without addressing the challenges mentioned above, especially the long-tailed distribution of relationships. This often leads to the generation of biased scene graphs. To address these challenges, we introduce a new framework called TEMPURA: TEmporal consistency and Memory Prototype guided UnceRtainty Attenuation for unbiased dynamic SGG. TEMPURA employs object-level temporal consistencies via transformer-based sequence modeling, learns to synthesize unbiased relationship representations using memory-guided training, and attenuates the predictive uncertainty of visual relations using a Gaussian Mixture Model (GMM). Extensive experiments demonstrate that our method achieves significant (up to 10% in some cases) performance gains over existing methods, highlighting its superiority in generating more unbiased scene graphs. Code: https://github.com/sayaknag/unbiasedSGG.git
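For a concrete picture of the GMM-based uncertainty-attenuation idea, below is a minimal PyTorch sketch of a mixture-density-style relation head that predicts a K-component Gaussian mixture over relationship logits, with the mixture variances serving as an uncertainty proxy. The class name `GMMRelationHead`, the dimensions, and the exact parameterization are illustrative assumptions, not the paper's implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GMMRelationHead(nn.Module):
    """Sketch of an uncertainty-aware relation classifier: for each
    subject-object pair feature, predict a K-component Gaussian mixture
    over relationship logits and read the mixture variance as an
    uncertainty estimate (hypothetical parameterization)."""

    def __init__(self, in_dim: int, num_rel_classes: int, num_components: int = 4):
        super().__init__()
        self.K = num_components
        self.C = num_rel_classes
        self.pi = nn.Linear(in_dim, num_components)                        # mixture weights
        self.mu = nn.Linear(in_dim, num_components * num_rel_classes)      # component means
        self.log_sigma = nn.Linear(in_dim, num_components * num_rel_classes)  # log std-devs

    def forward(self, pair_feats: torch.Tensor):
        B = pair_feats.size(0)
        pi = F.softmax(self.pi(pair_feats), dim=-1)                        # (B, K)
        mu = self.mu(pair_feats).view(B, self.K, self.C)                   # (B, K, C)
        sigma = self.log_sigma(pair_feats).view(B, self.K, self.C).exp()   # (B, K, C)
        # Expected logits under the mixture, plus a per-class uncertainty proxy.
        logits = (pi.unsqueeze(-1) * mu).sum(dim=1)                        # (B, C)
        uncertainty = (pi.unsqueeze(-1) * sigma.pow(2)).sum(dim=1)         # (B, C)
        return logits, uncertainty


if __name__ == "__main__":
    head = GMMRelationHead(in_dim=512, num_rel_classes=26, num_components=4)
    feats = torch.randn(8, 512)      # 8 subject-object pair features
    logits, unc = head(feats)
    print(logits.shape, unc.shape)   # torch.Size([8, 26]) torch.Size([8, 26])
```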

Cite

Text

Nag et al. "Unbiased Scene Graph Generation in Videos." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02184

Markdown

[Nag et al. "Unbiased Scene Graph Generation in Videos." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/nag2023cvpr-unbiased/) doi:10.1109/CVPR52729.2023.02184

BibTeX

@inproceedings{nag2023cvpr-unbiased,
  title     = {{Unbiased Scene Graph Generation in Videos}},
  author    = {Nag, Sayak and Min, Kyle and Tripathi, Subarna and Roy-Chowdhury, Amit K.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {22803--22813},
  doi       = {10.1109/CVPR52729.2023.02184},
  url       = {https://mlanthology.org/cvpr/2023/nag2023cvpr-unbiased/}
}