Asymmetric Dual-Lens Video Deblurring
Abstract
Modern smartphones often feature asymmetric dual-lens systems that capture wide-angle and ultra-wide views with complementary perspectives and details. Motion and camera shake can blur the wide-angle view, while the ultra-wide view, despite its lower resolution, often retains sharper details. This natural complementarity offers valuable cues for video deblurring. However, existing methods focus mainly on single-camera inputs or symmetric stereo pairs, neglecting the cross-lens redundancy in mobile dual-camera systems. In this paper, we propose a practical video deblurring method, AsLeD-Net, which recurrently aligns and propagates temporal reference features from ultra-wide views, fusing them with features extracted from blurry wide-angle frames. AsLeD-Net consists of two key modules: the adaptive local matching (ALM) module, which refines blurry features using their $K$-nearest-neighbor reference features, and the difference compensation (DC) module, which enforces spatial consistency and reduces misalignment. Additionally, AsLeD-Net uses a reference-guided motion compensation (RMC) module for temporal alignment, further improving frame-to-frame consistency during deblurring. We validate the effectiveness of AsLeD-Net through extensive experiments, benchmarking it against potential solutions for asymmetric-lens deblurring.
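To make the $K$-nearest-neighbor idea behind the ALM module concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each blurry feature vector is refined by averaging its $k$ nearest feature vectors from the sharp ultra-wide reference. The function name, brute-force distance computation, and mean aggregation are all illustrative assumptions.

```python
import numpy as np

def knn_refine(blurry_feats, ref_feats, k=3):
    """Illustrative sketch of K-NN feature matching: for each blurry
    feature vector, find its k nearest reference feature vectors
    (by Euclidean distance) and average them as the refined feature.
    This is a simplification, not the actual ALM module."""
    # Pairwise squared Euclidean distances, shape (N_blurry, N_ref)
    d2 = ((blurry_feats[:, None, :] - ref_feats[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest reference features per blurry feature
    idx = np.argsort(d2, axis=1)[:, :k]
    # Aggregate the matched reference features (here: simple mean)
    return ref_feats[idx].mean(axis=1)
```

A real system would operate on local patches of deep feature maps and learn the aggregation weights rather than using a plain mean.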
Cite
Text
Xiao and Wang. "Asymmetric Dual-Lens Video Deblurring." Advances in Neural Information Processing Systems, 2025.

Markdown

[Xiao and Wang. "Asymmetric Dual-Lens Video Deblurring." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xiao2025neurips-asymmetric/)

BibTeX
@inproceedings{xiao2025neurips-asymmetric,
  title = {{Asymmetric Dual-Lens Video Deblurring}},
  author = {Xiao, Zeyu and Wang, Xinchao},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/xiao2025neurips-asymmetric/}
}