ActAnywhere: Subject-Aware Video Background Generation

Abstract

We study the novel problem of automatically generating a video background that is tailored to foreground subject motion. This problem is important to the movie industry and the visual effects community, where it has traditionally required tedious manual effort to solve. To this end, we propose ActAnywhere, a video diffusion model that takes as input a sequence of foreground subject segmentations and an image of a novel background, and generates a video of the subject interacting with this background. We train our model on a large-scale dataset of 2.4M videos of human-scene interactions. Through extensive evaluation, we show that our model produces videos with realistic foreground-background interaction while strictly following the guidance of the condition image. Our model generalizes to diverse scenarios, including non-human subjects, gaming and animation clips, and videos with multiple moving subjects. Both quantitative and qualitative comparisons demonstrate that our model significantly outperforms existing methods, which fail to accomplish the studied task. Please visit our project webpage at https://actanywhere.github.io.
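
To make the input-output interface described above concrete, the sketch below assembles the per-frame conditioning (masked subject frames plus their segmentation masks) together with the background condition image, and hands both to a video diffusion model. This is a minimal, hypothetical PyTorch-style sketch; the wrapper name generate_background_video, the tensor layout, and the model call are illustrative assumptions, not the authors' released API.

import torch

def generate_background_video(model,
                              foreground_frames: torch.Tensor,  # (T, 3, H, W) subject frames
                              foreground_masks: torch.Tensor,   # (T, 1, H, W) binary segmentations
                              background_image: torch.Tensor    # (3, H, W) novel background condition
                              ) -> torch.Tensor:
    # Hypothetical wrapper: per-frame conditioning is the masked subject plus its mask.
    masked_subject = foreground_frames * foreground_masks             # (T, 3, H, W)
    condition = torch.cat([masked_subject, foreground_masks], dim=1)  # (T, 4, H, W)
    # The background image conditions every generated frame (in the paper this guidance
    # is strictly followed); here it is simply passed alongside the per-frame condition.
    return model(condition, background_image)                         # (T, 3, H, W) generated video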

Cite

Text

Pan et al. "ActAnywhere: Subject-Aware Video Background Generation." Neural Information Processing Systems, 2024. doi:10.52202/079017-0936

Markdown

[Pan et al. "ActAnywhere: Subject-Aware Video Background Generation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/pan2024neurips-actanywhere/) doi:10.52202/079017-0936

BibTeX

@inproceedings{pan2024neurips-actanywhere,
  title     = {{ActAnywhere: Subject-Aware Video Background Generation}},
  author    = {Pan, Boxiao and Xu, Zhan and Huang, Chun-Hao Paul and Singh, Krishna Kumar and Zhou, Yang and Guibas, Leonidas J. and Yang, Jimei},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0936},
  url       = {https://mlanthology.org/neurips/2024/pan2024neurips-actanywhere/}
}