Compensating for Motion During Direct-Global Separation

Abstract

Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source, and camera to remain stationary during image acquisition. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is the ability to register frames in a video sequence to each other in the presence of time-varying, high-frequency active illumination patterns. We compare our motion-compensated method to alternatives such as single-shot separation and frame interleaving, as well as to ground truth. We present results on challenging video sequences that include various types of motion and deformation in scenes containing complex materials like fabric, skin, leaves, and wax.
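As background, the static separation that this work extends is the high-frequency illumination method of Nayar et al. (SIGGRAPH 2006). A minimal sketch of those equations, assuming an ideal 50%-on checkerboard pattern with negligible projector defocus: a scene point lit in one frame and unlit in a complementary frame yields a per-pixel maximum \(L^{+}\) and minimum \(L^{-}\), from which the direct component \(L_d\) and global component \(L_g\) follow directly.

```latex
% Per-pixel observations under a shifted high-frequency pattern
% (50% of projector pixels on; global term assumed smooth):
L^{+} = L_d + \tfrac{1}{2} L_g \qquad L^{-} = \tfrac{1}{2} L_g
% Solving for the two components:
L_d = L^{+} - L^{-} \qquad L_g = 2\, L^{-}
```

These relations hold only when every scene point is imaged at least once lit and once unlit while everything is stationary; the motion compensation in this paper is what restores that correspondence when the scene or the projector-camera rig moves between frames.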

Cite

Text

Achar et al. "Compensating for Motion During Direct-Global Separation." International Conference on Computer Vision, 2013. doi:10.1109/ICCV.2013.187

Markdown

[Achar et al. "Compensating for Motion During Direct-Global Separation." International Conference on Computer Vision, 2013.](https://mlanthology.org/iccv/2013/achar2013iccv-compensating/) doi:10.1109/ICCV.2013.187

BibTeX

@inproceedings{achar2013iccv-compensating,
  title     = {{Compensating for Motion During Direct-Global Separation}},
  author    = {Achar, Supreeth and Nuske, Stephen T. and Narasimhan, Srinivasa G.},
  booktitle = {International Conference on Computer Vision},
  year      = {2013},
  doi       = {10.1109/ICCV.2013.187},
  url       = {https://mlanthology.org/iccv/2013/achar2013iccv-compensating/}
}