FishEyeRecNet: A Multi-Context Collaborative Deep Network for Fisheye Image Rectification

Abstract

Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions. Rectification of fisheye images is therefore a crucial preprocessing step for many computer vision applications. In this paper, we propose an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images. In contrast to conventional approaches, which focus on extracting hand-crafted features from input images, our method learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters. To facilitate training, we construct a synthesized dataset that covers various scenes and distortion parameter settings. Experiments on both synthesized and real-world datasets show that the proposed model significantly outperforms current state-of-the-art methods. Our code and synthesized dataset will be made publicly available.
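To make the notion of "distortion parameters" concrete, below is a minimal sketch of the widely used odd-polynomial fisheye model, in which a pinhole (undistorted) radial distance is mapped to a distorted radius via the incident angle. This is an illustrative assumption, not necessarily the exact parameterization used in the paper; the function name, focal length, and coefficient values here are hypothetical.

```python
import numpy as np

def fisheye_distorted_radius(r_u, f, ks):
    """Map an undistorted (pinhole) radial distance r_u to a
    fisheye-distorted radius with the odd-polynomial model
        r_d = k1*theta + k2*theta**3 + k3*theta**5 + k4*theta**7,
    where theta = arctan(r_u / f) is the incident angle of the ray.
    `ks` holds the four distortion coefficients (k1..k4)."""
    theta = np.arctan2(r_u, f)
    # Stack theta^1, theta^3, theta^5, theta^7 and take the dot
    # product with the coefficient vector.
    powers = theta[..., None] ** np.array([1, 3, 5, 7])
    return powers @ np.asarray(ks, dtype=float)

# Hypothetical example: an equidistant-style lens (k1 = f, higher
# order terms zero) compresses off-axis points toward the center.
f = 300.0
ks = [f, 0.0, 0.0, 0.0]
r_u = np.array([0.0, 100.0, 300.0])
r_d = fisheye_distorted_radius(r_u, f, ks)
```

Synthesizing a training pair then amounts to resampling an ordinary image through this radial mapping for sampled coefficients, while rectification is the inverse problem: estimate the coefficients from the distorted image and undo the warp.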

Cite

Text

Yin et al. "FishEyeRecNet: A Multi-Context Collaborative Deep Network for Fisheye Image Rectification." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01249-6_29

Markdown

[Yin et al. "FishEyeRecNet: A Multi-Context Collaborative Deep Network for Fisheye Image Rectification." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/yin2018eccv-fisheyerecnet/) doi:10.1007/978-3-030-01249-6_29

BibTeX

@inproceedings{yin2018eccv-fisheyerecnet,
  title     = {{FishEyeRecNet: A Multi-Context Collaborative Deep Network for Fisheye Image Rectification}},
  author    = {Yin, Xiaoqing and Wang, Xinchao and Yu, Jun and Zhang, Maojun and Fua, Pascal and Tao, Dacheng},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01249-6_29},
  url       = {https://mlanthology.org/eccv/2018/yin2018eccv-fisheyerecnet/}
}