Music Grounding by Short Video

Abstract

Adding proper background music helps complete a short video for sharing. Previous work tackles this need via video-to-music retrieval (V2MR), which aims to find the music track in a collection that best matches the content of a given query video. In practice, however, music tracks are typically much longer than the query video, necessitating (manual) trimming of the retrieved music to a shorter segment that matches the video duration. To bridge the gap between V2MR and the practical need for music moment localization, we propose a new task termed Music Grounding by Short Video (MGSV). To tackle the new task, we introduce a new benchmark, MGSV-EC, which comprises a diverse set of 53k short videos associated with 35k different music moments from 4k unique music tracks. Furthermore, we develop a new baseline method, MaDe, which performs both video-to-music matching and music moment detection within a unified end-to-end deep network. Extensive experiments on MGSV-EC not only highlight the challenging nature of MGSV but also establish MaDe as a strong baseline.
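
To make the joint formulation concrete, below is a minimal, hypothetical sketch of a network that outputs both a video-to-music matching score and a (start, end) music moment. The class name JointMatchAndGround, the feature dimensions, the mean-pooled fusion, and the span-regression head are illustrative assumptions, not the authors' MaDe implementation.

# A minimal, hypothetical sketch of joint video-to-music matching and
# music moment detection in one network. All module names, feature sizes,
# and the span-regression head are assumptions, not the authors' MaDe code.
import torch
import torch.nn as nn


class JointMatchAndGround(nn.Module):
    def __init__(self, video_dim=512, music_dim=512, hidden=256):
        super().__init__()
        # Project both modalities into a shared embedding space for matching.
        self.video_proj = nn.Linear(video_dim, hidden)
        self.music_proj = nn.Linear(music_dim, hidden)
        # Regress a normalized (start, end) span over the music track.
        self.span_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, video_feat, music_feats):
        # video_feat: (B, video_dim) pooled feature of the query video
        # music_feats: (B, T, music_dim) per-segment features of a music track
        v = self.video_proj(video_feat)          # (B, hidden)
        m = self.music_proj(music_feats)         # (B, T, hidden)
        m_pooled = m.mean(dim=1)                 # track-level music embedding
        # Matching: cosine similarity between video and track embeddings.
        score = nn.functional.cosine_similarity(v, m_pooled, dim=-1)
        # Grounding: (start, end) in [0, 1] relative to track duration.
        fused = torch.cat([v, m_pooled], dim=-1)
        span = torch.sigmoid(self.span_head(fused))  # (B, 2)
        return score, span


if __name__ == "__main__":
    model = JointMatchAndGround()
    score, span = model(torch.randn(4, 512), torch.randn(4, 128, 512))
    print(score.shape, span.shape)  # torch.Size([4]) torch.Size([4, 2])

In this sketch the matching score would rank candidate tracks, while the span head localizes the moment within the top-ranked track; the actual method may couple the two tasks differently.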

Cite

Text

Xin et al. "Music Grounding by Short Video." International Conference on Computer Vision, 2025.

Markdown

[Xin et al. "Music Grounding by Short Video." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/xin2025iccv-music/)

BibTeX

@inproceedings{xin2025iccv-music,
  title     = {{Music Grounding by Short Video}},
  author    = {Xin, Zijie and Wang, Minquan and Liu, Jingyu and Chen, Quan and Ma, Ye and Jiang, Peng and Li, Xirong},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {22285--22293},
  url       = {https://mlanthology.org/iccv/2025/xin2025iccv-music/}
}