Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos

Abstract

Recent advancements in 4D scene reconstruction using neural radiance fields (NeRF) have demonstrated the ability to represent dynamic scenes from multi-view videos. However, these methods fail to reconstruct dynamic scenes in unsynchronized settings and struggle to fit even the training views. This occurs because they assign a single latent embedding to each frame index, while the multi-view images at the same frame index were actually captured at different moments. To address this limitation, we introduce a time offset for each unsynchronized video and jointly optimize the offsets with NeRF. By design, our method is applicable to various baselines and improves them by large margins. Furthermore, finding the offsets naturally synchronizes the videos without manual effort. Experiments are conducted on the common Plenoptic Video Dataset and a newly built Unsynchronized Dynamic Blender Dataset to verify the performance of our method. Project page: https://seoha-kim.github.io/sync-nerf
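The core idea of learning per-video time offsets can be illustrated with a toy sketch (a hypothetical analogue, not the authors' code): each camera observes a shared time-varying signal, but its frames are shifted by an unknown per-camera offset, and the offsets are recovered by gradient descent on the reconstruction loss, just as Sync-NeRF jointly optimizes them with the radiance field.

```python
import math

# Shared "scene" signal, standing in for the dynamic NeRF (assumed toy model).
def g(t):
    return math.sin(t)

def dg(t):  # analytic derivative, used for the gradient w.r.t. the offset
    return math.cos(t)

true_offsets = [0.0, 0.15, -0.08]           # unknown per-camera misalignment
frame_times = [i * 0.1 for i in range(50)]  # nominal shared frame timestamps

# Observations: camera k actually captured the scene at time t + delta_k.
observations = [[g(t + d) for t in frame_times] for d in true_offsets]

offsets = [0.0, 0.0, 0.0]  # learnable offsets, initialized at zero
lr = 0.05
for _ in range(500):
    for k, obs in enumerate(observations):
        grad = 0.0
        for t, y in zip(frame_times, obs):
            pred = g(t + offsets[k])
            grad += 2 * (pred - y) * dg(t + offsets[k])
        offsets[k] -= lr * grad / len(frame_times)

print([round(d, 3) for d in offsets])  # converges toward [0.0, 0.15, -0.08]
```

In the paper the "signal" is the full radiance field and the offsets are optimized alongside its parameters; this sketch only isolates the synchronization mechanism.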

Cite

Text

Kim et al. "Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/aaai.v38i3.28057

Markdown

[Kim et al. "Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kim2024aaai-sync/) doi:10.1609/aaai.v38i3.28057

BibTeX

@inproceedings{kim2024aaai-sync,
  title     = {{Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos}},
  author    = {Kim, Seoha and Bae, Jeongmin and Yun, Youngsik and Lee, Hahyun and Bang, Gun and Uh, Youngjung},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {2777--2785},
  doi       = {10.1609/aaai.v38i3.28057},
  url       = {https://mlanthology.org/aaai/2024/kim2024aaai-sync/}
}