Visual Sync: Multi‑Camera Synchronization via Cross‑View Object Motion
Abstract
Today, people can easily record memorable moments, from concerts and sports events to lectures, family gatherings, and birthday parties, with multiple consumer cameras. However, synchronizing these cross-camera streams remains challenging: existing methods assume controlled settings or specific targets, or require manual correction or costly hardware. We present VisualSync, an optimization framework based on multi-view dynamics that aligns unposed, unsynchronized videos with millisecond-level accuracy. Our key insight is that any moving 3D point co-visible in two cameras obeys epipolar constraints once the streams are properly synchronized. To exploit this, VisualSync leverages off-the-shelf 3D reconstruction, feature matching, and dense tracking to extract tracklets, relative poses, and cross-view correspondences. It then jointly minimizes the epipolar error to estimate each camera's time offset. Experiments on four diverse, challenging datasets show that VisualSync outperforms baseline methods, achieving an average synchronization error below 130 ms.
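The core objective admits a compact sketch. Below is a minimal, hypothetical illustration assuming the pipeline's intermediate outputs are already available: a fundamental matrix F relating two views, per-frame 2D tracklets of the same moving point in each camera, and a known frame rate. The paper jointly optimizes offsets across all cameras; this sketch instead runs a simple per-pair grid search over candidate offsets, scoring each with the Sampson approximation of the epipolar error. All names (sampson_error, estimate_offset, fps, max_offset_s) are illustrative, not the authors' API.

import numpy as np

def sampson_error(F, x1, x2):
    # Sampson approximation of the epipolar error x2^T F x1 = 0.
    # F: (3, 3) fundamental matrix mapping view 1 to view 2.
    # x1, x2: (N, 3) homogeneous point correspondences in each view.
    Fx1 = x1 @ F.T    # epipolar lines in view 2
    Ftx2 = x2 @ F     # epipolar lines in view 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den

def estimate_offset(track1, track2, F, fps, max_offset_s=2.0, step_s=0.01):
    # Grid-search the time offset (seconds) that minimizes the mean
    # epipolar error between two tracklets of the same moving point.
    # track1, track2: (T, 2) per-frame 2D positions from dense tracking
    # (assumed given here; the paper obtains them with off-the-shelf tools).
    t1 = np.arange(len(track1)) / fps   # timestamps of camera 1
    t2_grid = np.arange(len(track2)) / fps
    best_err, best_dt = np.inf, 0.0
    for dt in np.arange(-max_offset_s, max_offset_s, step_s):
        # Resample camera 2's track at camera 1's timestamps shifted by dt.
        t2 = t1 + dt
        valid = (t2 >= 0) & (t2 <= t2_grid[-1])
        if valid.sum() < 10:
            continue
        x2 = np.stack(
            [np.interp(t2[valid], t2_grid, track2[:, k]) for k in range(2)],
            axis=1,
        )
        ones = np.ones((valid.sum(), 1))
        x1h = np.hstack([track1[valid], ones])
        x2h = np.hstack([x2, ones])
        err = sampson_error(F, x1h, x2h).mean()
        if err < best_err:
            best_err, best_dt = err, dt
    return best_dt

In practice one would aggregate this residual over many tracklets and all co-visible camera pairs and refine the per-camera offsets jointly, as the abstract describes; the grid search above only conveys why a moving point's epipolar residual is a usable synchronization signal.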
Cite
Text
Liu et al. "Visual Sync: Multi‑Camera Synchronization via Cross‑View Object Motion." Advances in Neural Information Processing Systems, 2025.
Markdown
[Liu et al. "Visual Sync: Multi‑Camera Synchronization via Cross‑View Object Motion." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liu2025neurips-visual/)
BibTeX
@inproceedings{liu2025neurips-visual,
title = {{Visual Sync: Multi‑Camera Synchronization via Cross‑View Object Motion}},
author = {Liu, Shaowei and Yao, David Yifan and Gupta, Saurabh and Wang, Shenlong},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/liu2025neurips-visual/}
}