EvDiG: Event-Guided Direct and Global Components Separation

Abstract

Separating the direct and global components of a scene aids in shape recovery and basic material understanding. Conventional methods capture multiple frames under high-frequency illumination patterns or shadows, requiring the scene to remain stationary during image acquisition. Single-frame methods simplify the capture procedure but yield lower-quality separation results. In this paper, we leverage an event camera to facilitate the separation of direct and global components, enabling high-quality separation at video rate. Specifically, we adopt an event camera to record the rapid illumination changes caused by the shadow of a line occluder sweeping over the scene, and reconstruct coarse separation results through event accumulation. We then design a network to suppress the noise in the coarse separation results and restore color information. A real-world dataset is collected using a hybrid camera system for network training and evaluation. Experimental results show superior performance over state-of-the-art methods.
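The shadow-sweep idea the abstract describes can be illustrated with the classic per-pixel separation principle: a pixel inside the occluder's shadow receives only the global component, while a fully lit pixel receives direct plus global. The sketch below is a minimal, hypothetical NumPy illustration of that principle on a synthetic frame stack; it is not the paper's event-based pipeline, and all names (`separate_direct_global`, the synthetic data) are illustrative.

```python
import numpy as np

def separate_direct_global(frames):
    """Separate components from a (T, H, W) intensity stack captured
    while a thin occluder's shadow sweeps across the scene.

    Principle (illustrative, not the paper's event pipeline):
      - when a pixel is fully lit it measures direct + global,
      - when it is shadowed it measures (approximately) global only,
    so per pixel: global = min over frames, direct = max - min.
    """
    l_max = frames.max(axis=0)   # frame where the pixel is fully lit: D + G
    l_min = frames.min(axis=0)   # frame where the pixel is shadowed: G only
    return l_max - l_min, l_min  # (direct, global)

# Synthetic demo: known components; the shadow covers one column per frame.
rng = np.random.default_rng(0)
H, W = 4, 6
D = rng.random((H, W))           # ground-truth direct component
G = 0.2 * rng.random((H, W))     # ground-truth global component
frames = np.stack([G + D * (np.arange(W)[None, :] != t) for t in range(W)])
d_hat, g_hat = separate_direct_global(frames)
```

On this synthetic stack the components are recovered exactly; EvDiG's contribution is obtaining the equivalent per-pixel lit/shadowed measurements from event-camera data of a single fast sweep, then denoising and colorizing the coarse result with a network.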

Cite

Text

Zhou et al. "EvDiG: Event-Guided Direct and Global Components Separation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00918

Markdown

[Zhou et al. "EvDiG: Event-Guided Direct and Global Components Separation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/zhou2024cvpr-evdig/) doi:10.1109/CVPR52733.2024.00918

BibTeX

@inproceedings{zhou2024cvpr-evdig,
  title     = {{EvDiG: Event-Guided Direct and Global Components Separation}},
  author    = {Zhou, Xinyu and Duan, Peiqi and Li, Boyu and Zhou, Chu and Xu, Chao and Shi, Boxin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {9612--9621},
  doi       = {10.1109/CVPR52733.2024.00918},
  url       = {https://mlanthology.org/cvpr/2024/zhou2024cvpr-evdig/}
}