RCA-NOC: Relative Contrastive Alignment for Novel Object Captioning

Abstract

In this paper, we introduce a novel approach to novel object captioning that employs relative contrastive learning to learn visual and semantic alignment. Our approach maximizes the compatibility between image regions and object tags in a contrastive manner. To set up a proper contrastive learning objective, for each image we augment the tags by leveraging the relative nature of positive and negative pairs obtained from foundation models such as CLIP. We then use the rank of each augmented tag in the list as a relative relevance label to contrast each top-ranked tag with a set of lower-ranked tags. This learning objective encourages the top-ranked tags to be more compatible with their image and text context than the lower-ranked tags, thus improving the discriminative ability of the learned multi-modality representation. We evaluate our approach on two datasets and show that our proposed RCA-NOC approach outperforms state-of-the-art methods by a large margin, demonstrating its effectiveness in improving the vision-language representation for novel object captioning.
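As a rough illustration of the objective described above, the sketch below ranks an image's augmented tags by CLIP relevance and then contrasts each of the top-ranked tags against all tags ranked below it, so that higher-ranked tags score higher against the image than lower-ranked ones. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name, the k_pos cutoff, the temperature value, and the use of a single pooled region feature are all illustrative.

import torch
import torch.nn.functional as F

def relative_contrastive_loss(region_feat, tag_feats, clip_scores,
                              k_pos=3, temperature=0.07):
    # region_feat: (D,)   pooled visual representation of the image (assumed)
    # tag_feats:   (T, D) embeddings of the augmented tag list
    # clip_scores: (T,)   CLIP image-tag relevance scores used only for ranking
    # Sort tags so that index 0 is the most CLIP-relevant tag.
    order = torch.argsort(clip_scores, descending=True)
    tags = F.normalize(tag_feats[order], dim=-1)
    region = F.normalize(region_feat, dim=-1)

    # Scaled cosine similarity between the image and every ranked tag.
    sims = tags @ region / temperature  # shape (T,)

    n = min(k_pos, sims.numel() - 1)
    loss = sims.new_zeros(())
    for i in range(n):
        # Treat the i-th ranked tag as the positive and every
        # lower-ranked tag as a negative; the rank itself serves
        # as the relative relevance label.
        logits = sims[i:].unsqueeze(0)  # (1, T - i), positive at index 0
        target = torch.zeros(1, dtype=torch.long, device=logits.device)
        loss = loss + F.cross_entropy(logits, target)
    return loss / max(n, 1)

# Toy usage with random features:
# loss = relative_contrastive_loss(torch.randn(512), torch.randn(10, 512), torch.randn(10))

Because each of the top-n tags is contrasted only against the tags ranked below it, the objective penalizes rank inversions rather than demanding a single hard positive, which is the "relative" aspect of the alignment.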

Cite

Text

Fan et al. "RCA-NOC: Relative Contrastive Alignment for Novel Object Captioning." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01422

Markdown

[Fan et al. "RCA-NOC: Relative Contrastive Alignment for Novel Object Captioning." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/fan2023iccv-rcanoc/) doi:10.1109/ICCV51070.2023.01422

BibTeX

@inproceedings{fan2023iccv-rcanoc,
  title     = {{RCA-NOC: Relative Contrastive Alignment for Novel Object Captioning}},
  author    = {Fan, Jiashuo and Liang, Yaoyuan and Liu, Leyao and Huang, Shaolun and Zhang, Lei},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {15510--15520},
  doi       = {10.1109/ICCV51070.2023.01422},
  url       = {https://mlanthology.org/iccv/2023/fan2023iccv-rcanoc/}
}