Viewpoint-Coded Structured Light

Abstract

We introduce a theoretical framework and practical algorithms for replacing time-coded structured light patterns with viewpoint codes, in the form of additional camera locations. Current structured light methods typically use log(N) light patterns, encoded over time, to unambiguously reconstruct N unique depths. We demonstrate that each additional camera location may replace one frame in a temporal binary code. Our theoretical viewpoint coding analysis shows that, by using a high frequency stripe pattern and placing cameras in carefully selected locations, the epipolar projection in each camera can be made to mimic the binary encoding patterns normally projected over time. Results from our practical implementation demonstrate reliable depth reconstruction that makes neither temporal nor spatial continuity assumptions about the scene being captured.
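As a minimal illustration of the temporal binary coding that viewpoint codes replace, the sketch below (function names are our own, not from the paper) generates the ceil(log2(N)) binary stripe patterns needed to distinguish N stripe positions, and decodes the on/off sequence observed at a pixel back into a stripe index:

```python
import math

def num_patterns(n_stripes):
    # A temporal binary code needs ceil(log2(N)) patterns to
    # distinguish N unique stripe positions.
    return math.ceil(math.log2(n_stripes))

def binary_stripe_patterns(n_stripes):
    # Pattern k assigns stripe x the k-th bit of its index (MSB first),
    # so the on/off values a pixel observes over time spell out the
    # index of the stripe illuminating it.
    n_bits = num_patterns(n_stripes)
    return [[(x >> k) & 1 for x in range(n_stripes)]
            for k in reversed(range(n_bits))]

def decode(bit_sequence):
    # Recover the stripe index from the observed on/off sequence.
    index = 0
    for b in bit_sequence:
        index = (index << 1) | b
    return index
```

In the paper's scheme, each additional camera stands in for one of these temporal patterns, so the same log-style decoding happens across viewpoints instead of across frames.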

Cite

Text

Young et al. "Viewpoint-Coded Structured Light." IEEE Conference on Computer Vision and Pattern Recognition, 2007. doi:10.1109/CVPR.2007.383292

Markdown

[Young et al. "Viewpoint-Coded Structured Light." IEEE Conference on Computer Vision and Pattern Recognition, 2007.](https://mlanthology.org/cvpr/2007/young2007cvpr-viewpoint/) doi:10.1109/CVPR.2007.383292

BibTeX

@inproceedings{young2007cvpr-viewpoint,
  title     = {{Viewpoint-Coded Structured Light}},
  author    = {Young, Mark and Beeson, Erik and Davis, James and Rusinkiewicz, Szymon and Ramamoorthi, Ravi},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2007},
  doi       = {10.1109/CVPR.2007.383292},
  url       = {https://mlanthology.org/cvpr/2007/young2007cvpr-viewpoint/}
}