End-to-End United Video Dehazing and Detection
Abstract
The recent development of CNN-based image dehazing has revealed the effectiveness of end-to-end modeling. However, extending the idea to end-to-end video dehazing has not yet been explored. In this paper, we propose an End-to-End Video Dehazing Network (EVD-Net) to exploit the temporal consistency between consecutive video frames. A thorough study has been conducted over a number of structure options to identify the best temporal fusion strategy. Furthermore, we build an End-to-End United Video Dehazing and Detection Network (EVDD-Net), which concatenates and jointly trains EVD-Net with a video object detection model. The resulting augmented end-to-end pipeline demonstrates much more stable and accurate detection results on hazy video.
Cite
Text
Li et al. "End-to-End United Video Dehazing and Detection." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12287
Markdown
[Li et al. "End-to-End United Video Dehazing and Detection." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/li2018aaai-end/) doi:10.1609/AAAI.V32I1.12287
BibTeX
@inproceedings{li2018aaai-end,
  title     = {{End-to-End United Video Dehazing and Detection}},
  author    = {Li, Boyi and Peng, Xiulian and Wang, Zhangyang and Xu, Jizheng and Feng, Dan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {7016--7023},
  doi       = {10.1609/AAAI.V32I1.12287},
  url       = {https://mlanthology.org/aaai/2018/li2018aaai-end/}
}