Generating Sharp Panoramas from Motion-Blurred Videos
Abstract
In this paper, we show how to generate a sharp panorama from a set of motion-blurred video frames. Our technique is based on joint global motion estimation and multi-frame deblurring. It also automatically computes the duty cycle of the video, namely the percentage of time between frames that is actually exposure time. The duty cycle is needed to accurately extract and then remove the blur kernels. We demonstrate our technique on a number of videos.
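The role of the duty cycle can be illustrated with a small sketch. This is not the paper's implementation; the function name and the purely translational, linear-blur assumption are ours. For global translational motion, the blur kernel is approximately a line segment whose length is the inter-frame displacement scaled by the duty cycle, so an incorrect duty cycle yields a kernel of the wrong extent.

```python
import numpy as np

def line_blur_kernel(dx, dy, duty_cycle, size=15):
    """Approximate a motion-blur kernel as a line segment (sketch only).

    dx, dy: inter-frame displacement in pixels (from a global motion estimate).
    duty_cycle: fraction of the inter-frame interval that is exposure time.
    The blur extent is the inter-frame displacement scaled by the duty cycle.
    """
    kernel = np.zeros((size, size))
    c = size // 2
    # Path covered by the camera during the exposure window only.
    bx, by = dx * duty_cycle, dy * duty_cycle
    # Sample the line densely and accumulate hits into the kernel grid.
    n = max(int(np.ceil(np.hypot(bx, by))), 1) * 4
    for t in np.linspace(0.0, 1.0, n):
        x = int(round(c + t * bx))
        y = int(round(c + t * by))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] += 1.0
    return kernel / kernel.sum()

# A 10-pixel horizontal inter-frame motion with a 50% duty cycle
# yields roughly a 5-pixel-long horizontal blur kernel.
k = line_blur_kernel(dx=10, dy=0, duty_cycle=0.5)
```

This illustrates why the duty cycle must be estimated before deconvolution: halving it halves the assumed blur length, which would leave residual blur if the true exposure were longer.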
Cite
Text
Li et al. "Generating Sharp Panoramas from Motion-Blurred Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5539938
Markdown
[Li et al. "Generating Sharp Panoramas from Motion-Blurred Videos." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/li2010cvpr-generating/) doi:10.1109/CVPR.2010.5539938
BibTeX
@inproceedings{li2010cvpr-generating,
title = {{Generating Sharp Panoramas from Motion-Blurred Videos}},
author = {Li, Yunpeng and Kang, Sing Bing and Joshi, Neel and Seitz, Steven M. and Huttenlocher, Daniel P.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2010},
pages = {2424--2431},
doi = {10.1109/CVPR.2010.5539938},
url = {https://mlanthology.org/cvpr/2010/li2010cvpr-generating/}
}