Delving into Multimodal Prompting for Fine-Grained Visual Classification
Abstract
Fine-grained visual classification (FGVC) involves categorizing fine subdivisions within a broader category, which poses challenges due to subtle inter-class discrepancies and large intra-class variations. However, prevailing approaches primarily focus on uni-modal visual concepts. Recent advancements in pre-trained vision-language models have demonstrated remarkable performance on various high-level vision tasks, yet the applicability of such models to FGVC tasks remains uncertain. In this paper, we aim to fully exploit the capabilities of cross-modal description to tackle FGVC tasks and propose a novel multimodal prompting solution, denoted as MP-FGVC, based on the contrastive language-image pre-training (CLIP) model. Our MP-FGVC comprises a multimodal prompt scheme and a multimodal adaptation scheme. The former includes a Subcategory-specific Vision Prompt (SsVP) and a Discrepancy-aware Text Prompt (DaTP), which explicitly highlight subcategory-specific discrepancies from the perspectives of both vision and language. The latter aligns the vision and text prompting elements in a common semantic space, facilitating cross-modal collaborative reasoning through a Vision-Language Fusion Module (VLFM) to further improve FGVC. Moreover, we tailor a two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained CLIP model and enable efficient adaptation to FGVC. Extensive experiments conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC.
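The alignment described above follows the general CLIP recipe: embed both modalities, project them into a shared space, and classify by cosine similarity. The following is a minimal, hypothetical sketch of that CLIP-style matching step only, not the authors' MP-FGVC implementation; all names, dimensions, and the random features are illustrative.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project features onto the unit sphere (common semantic space)."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_logits(image_feat, text_feats, temperature=0.01):
    """Cosine similarity between an image embedding and per-class text
    embeddings, scaled by a temperature, yields class logits."""
    img = l2_normalize(image_feat)
    txt = l2_normalize(text_feats)
    return img @ txt.T / temperature

# Illustrative stand-ins: a vision-prompted image embedding and one
# discrepancy-aware text embedding per fine-grained subcategory.
rng = np.random.default_rng(0)
d, num_classes = 512, 4
image_feat = rng.standard_normal(d)
text_feats = rng.standard_normal((num_classes, d))

logits = clip_style_logits(image_feat, text_feats)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(probs.argmax())  # predicted subcategory index
```

In the actual method, the prompting modules and the fusion module would produce these embeddings; the sketch only shows why a shared space makes cross-modal comparison a simple inner product.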
Cite
Text
Jiang et al. "Delving into Multimodal Prompting for Fine-Grained Visual Classification." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I3.28034
Markdown
[Jiang et al. "Delving into Multimodal Prompting for Fine-Grained Visual Classification." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/jiang2024aaai-delving/) doi:10.1609/AAAI.V38I3.28034
BibTeX
@inproceedings{jiang2024aaai-delving,
title = {{Delving into Multimodal Prompting for Fine-Grained Visual Classification}},
author = {Jiang, Xin and Tang, Hao and Gao, Junyao and Du, Xiaoyu and He, Shengfeng and Li, Zechao},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {2570--2578},
doi = {10.1609/AAAI.V38I3.28034},
url = {https://mlanthology.org/aaai/2024/jiang2024aaai-delving/}
}