Content-Based Music-Image Retrieval Using Self- and Cross-Modal Feature Embedding Memory

Abstract

This paper describes a method based on deep metric learning for content-based cross-modal retrieval of a piece of music and its representative image (i.e., a music audio signal and its cover art image). We train music and image encoders so that the embeddings of a positive music-image pair lie close to each other, while those of a random pair lie far from each other, in a shared embedding space. Furthermore, we propose a mechanism called self- and cross-modal feature embedding memory, which stores both the music and image embeddings from previous iterations in memory and enables the encoders to mine informative pairs for training. To perform such training, we constructed a dataset containing 78,325 music-image pairs. We demonstrate the effectiveness of the proposed mechanism on this dataset: specifically, our mechanism outperforms baseline methods by factors of 1.93–3.38 in mean reciprocal rank and 2.19–3.56 in recall@50, and by 528–891 ranks in median rank.
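The training scheme the abstract describes, in which embeddings from previous iterations are kept in a memory so that negatives can be mined both within a modality (self) and across modalities (cross), can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the authors' implementation: the FIFO queue, the hardest-negative triplet mining, and the specific values (queue size 4096, embedding dimension 128, margin 0.2) are all placeholders.

import torch
import torch.nn.functional as F

class EmbeddingMemory:
    """FIFO memory storing music and image embeddings from past iterations."""
    def __init__(self, size=4096, dim=128):
        # Initialize with random unit vectors so mining is defined from step one.
        self.music = F.normalize(torch.randn(size, dim), dim=1)
        self.image = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0
        self.size = size

    @torch.no_grad()
    def update(self, music_emb, image_emb):
        # Overwrite the oldest slots; stored embeddings carry no gradient.
        n = music_emb.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.size
        self.music[idx] = music_emb.detach()
        self.image[idx] = image_emb.detach()
        self.ptr = (self.ptr + n) % self.size

def mined_triplet_loss(anchor, positive, negative_pool, margin=0.2):
    # Hardest-negative mining: for each anchor, the closest embedding in the
    # pool serves as the negative. A real implementation would also exclude
    # the anchor's own stored pair from the pool to avoid false negatives.
    pos_dist = (anchor - positive).pow(2).sum(dim=1)       # (B,)
    neg_dist = torch.cdist(anchor, negative_pool).pow(2)   # (B, M)
    hardest = neg_dist.min(dim=1).values                   # (B,)
    return F.relu(pos_dist - hardest + margin).mean()

def training_step(music_enc, image_enc, music_x, image_x, memory):
    m = F.normalize(music_enc(music_x), dim=1)   # music embeddings (B, D)
    v = F.normalize(image_enc(image_x), dim=1)   # image embeddings (B, D)
    loss = (mined_triplet_loss(m, v, memory.image)   # cross-modal negatives
          + mined_triplet_loss(v, m, memory.music)   # cross-modal negatives
          + mined_triplet_loss(m, v, memory.music)   # self-modal negatives
          + mined_triplet_loss(v, m, memory.image))  # self-modal negatives
    memory.update(m, v)
    return loss

Because the memory holds detached embeddings, gradients flow only through the current batch, while the pool of candidate negatives grows far beyond the mini-batch size; that enlarged pool is what makes it possible to mine more informative pairs than in-batch sampling alone.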

Cite

Text

Nakatsuka et al. "Content-Based Music-Image Retrieval Using Self- and Cross-Modal Feature Embedding Memory." Winter Conference on Applications of Computer Vision, 2023.

Markdown

[Nakatsuka et al. "Content-Based Music-Image Retrieval Using Self- and Cross-Modal Feature Embedding Memory." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/nakatsuka2023wacv-contentbased/)

BibTeX

@inproceedings{nakatsuka2023wacv-contentbased,
  title     = {{Content-Based Music-Image Retrieval Using Self- and Cross-Modal Feature Embedding Memory}},
  author    = {Nakatsuka, Takayuki and Hamasaki, Masahiro and Goto, Masataka},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {2174--2184},
  url       = {https://mlanthology.org/wacv/2023/nakatsuka2023wacv-contentbased/}
}