Long-Term Rhythmic Video Soundtracker

Abstract

We consider the problem of generating musical soundtracks in sync with rhythmic visual cues. Most existing works rely on pre-defined music representations, which limits the flexibility and complexity of generation. Other methods that directly generate video-conditioned waveforms suffer from limited scenarios, short lengths, and unstable generation quality. To this end, we present Long-Term Rhythmic Video Soundtracker (LORIS), a novel framework for synthesizing long-term conditional waveforms. Specifically, our framework uses a latent conditional diffusion probabilistic model to perform waveform synthesis. Furthermore, we propose a series of context-aware conditioning encoders that take temporal information into account for long-term generation. Notably, we extend our model's applicability beyond dances to multiple sports scenarios such as floor exercise and figure skating. To enable comprehensive evaluation, we establish a benchmark for rhythmic video soundtracking, including a pre-processed dataset, improved evaluation metrics, and robust generative baselines. Extensive experiments show that our model generates long-term soundtracks with state-of-the-art musical quality and rhythmic correspondence. Code is available at https://github.com/OpenGVLab/LORIS.
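To make the core idea concrete, below is a minimal sketch of conditional diffusion sampling in the style the abstract describes: a denoiser predicts noise in a latent audio representation, conditioned on video-derived rhythmic features at every reverse step. This is an illustrative assumption, not the released LORIS implementation; the names (`denoiser`, `visual_cond`, the step count `T`, and the linear noise schedule) are all hypothetical.

```python
# Hedged sketch: standard DDPM reverse-process sampling with a visual
# condition injected at each step. Not the official LORIS code; all
# hyperparameters and names below are illustrative assumptions.
import torch

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 2e-2, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(denoiser, visual_cond, latent_shape):
    """Start from Gaussian noise and iteratively denoise, feeding the
    video-derived condition to the noise predictor at every step."""
    x = torch.randn(latent_shape)          # x_T ~ N(0, I)
    for t in reversed(range(T)):
        # Predicted noise eps_theta(x_t, t, condition)
        eps = denoiser(x, torch.tensor([t]), visual_cond)
        # Posterior mean of x_{t-1} given the noise prediction
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        # Add noise except at the final step
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # a latent decoder would map this back to a waveform
```

In the paper's framing, `visual_cond` would come from the context-aware conditioning encoders, and the returned latent would be decoded into a long-form waveform; those components are outside this sketch.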

Cite

Text

Yu et al. "Long-Term Rhythmic Video Soundtracker." International Conference on Machine Learning, 2023.

Markdown

[Yu et al. "Long-Term Rhythmic Video Soundtracker." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/yu2023icml-longterm/)

BibTeX

@inproceedings{yu2023icml-longterm,
  title     = {{Long-Term Rhythmic Video Soundtracker}},
  author    = {Yu, Jiashuo and Wang, Yaohui and Chen, Xinyuan and Sun, Xiao and Qiao, Yu},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {40339--40353},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/yu2023icml-longterm/}
}