IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation

Abstract

Significant advances have been made in human-centric video generation, yet the joint video-depth generation problem remains underexplored. Most existing monocular depth estimation methods may not generalize well to synthesized images or videos, and multi-view-based methods have difficulty controlling the human appearance and motion. In this work, we present IDOL (unified dual-modal latent diffusion) for high-quality human-centric joint video-depth generation. Our IDOL consists of two novel designs. First, to enable dual-modal generation and maximize the information exchange between video and depth generation, we propose a unified dual-modal U-Net, a parameter-sharing framework for joint video and depth denoising, wherein a modality label guides the denoising target, and cross-modal attention enables the mutual information flow. Second, to ensure a precise video-depth spatial alignment, we propose a motion consistency loss that enforces consistency between the video and depth feature motion fields, leading to harmonized outputs. Additionally, a cross-attention map consistency loss is applied to align the cross-attention map of the video denoising with that of the depth denoising, further facilitating spatial alignment. Extensive experiments on the TikTok and NTU120 datasets demonstrate our superior performance, significantly surpassing existing methods in terms of video FVD and depth accuracy.
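The motion consistency loss described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: here the "motion field" is assumed to be the frame-to-frame difference of denoising features, and the penalty is a simple L1 distance between the video and depth motion fields; the actual loss may derive motion differently and weight it inside the diffusion training objective.

```python
import numpy as np

def motion_consistency_loss(video_feats: np.ndarray, depth_feats: np.ndarray) -> float:
    """Illustrative motion consistency loss (assumption, not the paper's exact form).

    video_feats, depth_feats: arrays of shape (T, C, H, W) holding per-frame
    features from the video and depth denoising branches.
    """
    # Motion field approximated as the temporal difference of consecutive frames.
    motion_video = video_feats[1:] - video_feats[:-1]
    motion_depth = depth_feats[1:] - depth_feats[:-1]
    # L1 penalty encouraging the two motion fields to agree spatially.
    return float(np.mean(np.abs(motion_video - motion_depth)))
```

Because the loss compares temporal differences rather than raw features, a constant appearance offset between the two modalities incurs no penalty; only disagreement in how the features move over time is punished.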

Cite

Text

Zhai et al. "IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72633-0_8

Markdown

[Zhai et al. "IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/zhai2024eccv-idol/) doi:10.1007/978-3-031-72633-0_8

BibTeX

@inproceedings{zhai2024eccv-idol,
  title     = {{IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation}},
  author    = {Zhai, Yuanhao and Lin, Kevin and Li, Linjie and Lin, Chung-Ching and Wang, Jianfeng and Yang, Zhengyuan and Doermann, David and Yuan, Junsong and Liu, Zicheng and Wang, Lijuan},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72633-0_8},
  url       = {https://mlanthology.org/eccv/2024/zhai2024eccv-idol/}
}