LoVA: Long-Form Video-to-Audio Generation

Abstract

Video-to-audio (V2A) generation is important for video editing and post-processing, as it enables the creation of semantically aligned audio for silent video. However, most existing methods focus on generating short-form audio for short video segments (less than 10 seconds) and give little attention to long-form video inputs. For current UNet-based diffusion V2A models, handling long-form audio generation inevitably introduces inconsistencies within the final concatenated audio. In this paper, we first highlight the importance of long-form V2A. We then propose LoVA, a novel model for Long-form Video-to-Audio generation. Built on the Diffusion Transformer (DiT) architecture, LoVA proves more effective at generating long-form audio than existing autoregressive models and UNet-based diffusion models. Extensive experiments demonstrate that LoVA achieves comparable performance on a 10-second V2A benchmark and outperforms all other baselines on a benchmark with long-form video input.

Cite

Text

Cheng et al. "LoVA: Long-Form Video-to-Audio Generation." NeurIPS 2024 Workshops: Audio_Imagination, 2024.

Markdown

[Cheng et al. "LoVA: Long-Form Video-to-Audio Generation." NeurIPS 2024 Workshops: Audio_Imagination, 2024.](https://mlanthology.org/neuripsw/2024/cheng2024neuripsw-lova/)

BibTeX

@inproceedings{cheng2024neuripsw-lova,
  title     = {{LoVA: Long-Form Video-to-Audio Generation}},
  author    = {Cheng, Xin and Wang, Xihua and Wu, Yihan and Wang, Yuyue and Song, Ruihua},
  booktitle = {NeurIPS 2024 Workshops: Audio_Imagination},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/cheng2024neuripsw-lova/}
}