IFCap: Image-like Retrieval and Frequency-Based Entity Filtering for Zero-Shot Captioning

Abstract

Recent advances in image captioning have explored text-only training to overcome the limitations of paired image-text data. However, existing text-only training methods often overlook the modality gap between the text data used during training and the images used during inference. To address this issue, we propose a novel approach called Image-like Retrieval, which aligns text features with visually relevant features to mitigate the modality gap. Our method further improves the accuracy of generated captions through a fusion module that integrates retrieved captions with input features. Additionally, we introduce a Frequency-based Entity Filtering technique that significantly improves caption quality. We combine these methods into a unified framework, which we refer to as $\textbf{IFCap}$ ($\textbf{I}$mage-like Retrieval and $\textbf{F}$requency-based Entity Filtering for Zero-shot $\textbf{Cap}$tioning). Extensive experiments demonstrate that this straightforward yet powerful approach outperforms state-of-the-art text-only-trained zero-shot captioning methods by a significant margin.
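
The abstract does not give implementation details, so the following is only a minimal sketch of the two ideas it names, under stated assumptions: Image-like Retrieval is approximated by injecting Gaussian noise into a CLIP-style text embedding before nearest-neighbour caption retrieval, and Frequency-based Entity Filtering is approximated by keeping entities that recur across the retrieved captions. The function names, noise_std, k, min_count, and the caller-supplied extract_entities are illustrative assumptions, not the paper's actual settings.

import numpy as np
from collections import Counter

def image_like_retrieval(text_emb, caption_bank_embs, noise_std=0.04, k=5, seed=0):
    # Perturb the query text embedding with Gaussian noise so it behaves more
    # like an image embedding, then retrieve the k most similar captions from
    # the caption bank by cosine similarity. noise_std and k are illustrative.
    rng = np.random.default_rng(seed)
    query = text_emb + rng.normal(0.0, noise_std, size=text_emb.shape)
    query = query / np.linalg.norm(query)
    bank = caption_bank_embs / np.linalg.norm(caption_bank_embs, axis=1, keepdims=True)
    scores = bank @ query
    return np.argsort(-scores)[:k]

def frequency_entity_filter(retrieved_captions, extract_entities, min_count=2):
    # Keep only entities that occur in at least `min_count` of the retrieved
    # captions; extract_entities (e.g., a noun-phrase tagger) is assumed to be
    # supplied by the caller.
    counts = Counter(
        entity
        for caption in retrieved_captions
        for entity in set(extract_entities(caption))
    )
    return [entity for entity, count in counts.items() if count >= min_count]

The surviving entities would then serve as additional prompt content for the caption decoder, alongside the fused retrieval features; that fusion step is not sketched here.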

Cite

Text

Lee et al. "IFCap: Image-like Retrieval and Frequency-Based Entity Filtering for Zero-Shot Captioning." NeurIPS 2024 Workshops: AFM, 2024.

Markdown

[Lee et al. "IFCap: Image-like Retrieval and Frequency-Based Entity Filtering for Zero-Shot Captioning." NeurIPS 2024 Workshops: AFM, 2024.](https://mlanthology.org/neuripsw/2024/lee2024neuripsw-ifcap/)

BibTeX

@inproceedings{lee2024neuripsw-ifcap,
  title     = {{IFCap: Image-like Retrieval and Frequency-Based Entity Filtering for Zero-Shot Captioning}},
  author    = {Lee, Soeun and Kim, Si-Woo and Kim, Taewhan and Kim, Dong-Jin},
  booktitle = {NeurIPS 2024 Workshops: AFM},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/lee2024neuripsw-ifcap/}
}