Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset
Abstract
Scene motion, multiple reflections, and sensor noise introduce artifacts in the depth reconstruction performed by time-of-flight (ToF) cameras. We propose a two-stage, deep-learning approach to address all of these sources of artifacts simultaneously. We also introduce FLAT, a synthetic dataset of 2000 ToF measurements that captures all of these nonidealities and allows simulating different camera hardware. Using the Kinect 2 camera as a baseline, we show improved reconstruction errors over state-of-the-art methods on both simulated and real data.
Cite
Text
Guo et al. "Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01246-5_23
Markdown
[Guo et al. "Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/guo2018eccv-tackling/) doi:10.1007/978-3-030-01246-5_23
BibTeX
@inproceedings{guo2018eccv-tackling,
title = {{Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset}},
author = {Guo, Qi and Frosio, Iuri and Gallo, Orazio and Zickler, Todd and Kautz, Jan},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01246-5_23},
url = {https://mlanthology.org/eccv/2018/guo2018eccv-tackling/}
}