VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining

Abstract

Assessing the aesthetics of an image is challenging, as it is influenced by multiple factors including composition, color, style, and high-level semantics. Existing image aesthetic assessment (IAA) methods primarily rely on human-labeled rating scores, which oversimplify the visual aesthetic information that humans perceive. Conversely, user comments offer more comprehensive information and are a more natural way to express human opinions and preferences regarding image aesthetics. In light of this, we propose learning image aesthetics from user comments and exploring vision-language pretraining methods to learn multimodal aesthetic representations. Specifically, we pretrain an image-text encoder-decoder model with image-comment pairs, using contrastive and generative objectives to learn rich and generic aesthetic semantics without human labels. To efficiently adapt the pretrained model to downstream IAA tasks, we further propose a lightweight rank-based adapter that employs text as an anchor to learn the aesthetic ranking concept. Our results show that the pretrained aesthetic vision-language model outperforms prior works on image aesthetic captioning over the AVA-Captions dataset, and that it has powerful zero-shot capability for aesthetic tasks such as style classification and IAA, surpassing many supervised baselines. With only minimal finetuning parameters using the proposed adapter module, our model achieves state-of-the-art IAA performance on the AVA dataset.
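The two ingredients the abstract names, a contrastive image-comment objective and a text anchor for zero-shot aesthetic scoring, can be sketched in a few lines. The sketch below is illustrative only and is not the paper's exact formulation: the symmetric InfoNCE loss is the standard CLIP-style contrastive objective, and the `"good image"` / `"bad image"` prompt embeddings, function names, and temperature value are assumptions made for the example.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of
    image/comment embedding pairs (row i of each matrix is a match)."""
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (B, B) similarity matrix
    labels = np.arange(len(logits))    # matching pairs sit on the diagonal

    def xent(l):
        # Numerically stable cross-entropy against the diagonal labels.
        l = l - l.max(axis=1, keepdims=True)
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def zero_shot_aesthetic_score(img_emb, good_txt_emb, bad_txt_emb,
                              temperature=0.07):
    """Score images by comparing them against frozen text embeddings of
    hypothetical prompts such as "good image" vs. "bad image"."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    good = good_txt_emb / np.linalg.norm(good_txt_emb)
    bad = bad_txt_emb / np.linalg.norm(bad_txt_emb)
    logits = np.stack([img @ good, img @ bad], axis=1) / temperature
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return probs[:, 0]  # probability mass on the "good" prompt
```

Using text prompts as anchors this way is what gives the pretrained model its zero-shot IAA ability without any rating labels; the rank-based adapter then refines this ranking signal with only a small number of trainable parameters.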

Cite

Text

Ke et al. "VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00968

Markdown

[Ke et al. "VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/ke2023cvpr-vila/) doi:10.1109/CVPR52729.2023.00968

BibTeX

@inproceedings{ke2023cvpr-vila,
  title     = {{VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining}},
  author    = {Ke, Junjie and Ye, Keren and Yu, Jiahui and Wu, Yonghui and Milanfar, Peyman and Yang, Feng},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {10041--10051},
  doi       = {10.1109/CVPR52729.2023.00968},
  url       = {https://mlanthology.org/cvpr/2023/ke2023cvpr-vila/}
}