Heightfields for Efficient Scene Reconstruction for AR
Abstract
3D scene reconstruction from a sequence of posed RGB images is a cornerstone task for computer vision and augmented reality (AR). While depth-based fusion is the foundation of most real-time approaches to 3D reconstruction, recent learning-based methods that operate directly on RGB images can achieve higher-quality reconstructions, but at the cost of increased runtime and memory requirements, making them unsuitable for AR applications. We propose an efficient learning-based method that refines the 3D reconstruction obtained by a traditional fusion approach. By leveraging a top-down heightfield representation, our method remains real-time while approaching the quality of other learning-based methods. Despite being a simplification, our heightfield is well suited to robotic path planning and augmented reality character placement. We outline several innovations that push performance beyond existing top-down prediction baselines, and we present an evaluation framework on the challenging ScanNetV2 dataset, targeting AR tasks. Ultimately, we show that our method improves over the baselines for AR applications. Full code and pretrained models will be released upon acceptance.
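To make the top-down heightfield representation concrete, the sketch below rasterizes a 3D point cloud into a 2D grid where each cell stores the height of the highest surface point falling into it. This is only an illustrative construction, not the paper's actual pipeline; the function name, parameters, and the y-up convention are assumptions.

```python
import numpy as np

def points_to_heightfield(points, grid_res=0.05, origin=None, grid_shape=(64, 64)):
    """Rasterize an (N, 3) point cloud (y-up) into a top-down heightfield.

    Each cell stores the maximum y (height) of the points landing in it;
    cells with no points are NaN. Illustrative only, not the paper's method.
    """
    if origin is None:
        # Default grid origin: the minimum (x, z) corner of the cloud.
        origin = points[:, [0, 2]].min(axis=0)
    hf = np.full(grid_shape, np.nan)
    # Map each point's (x, z) position to integer grid indices.
    ij = np.floor((points[:, [0, 2]] - origin) / grid_res).astype(int)
    in_bounds = ((ij >= 0) & (ij < np.array(grid_shape))).all(axis=1)
    for (i, j), y in zip(ij[in_bounds], points[in_bounds, 1]):
        if np.isnan(hf[i, j]) or y > hf[i, j]:
            hf[i, j] = y  # keep the highest surface point per cell
    return hf
```

A grid like this is cheap to store and query, which is why a heightfield supports tasks such as path planning or character placement: checking whether a location is free reduces to a 2D lookup.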
Cite
Text
Watson et al. "Heightfields for Efficient Scene Reconstruction for AR." Winter Conference on Applications of Computer Vision, 2023.
Markdown
[Watson et al. "Heightfields for Efficient Scene Reconstruction for AR." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/watson2023wacv-heightfields/)
BibTeX
@inproceedings{watson2023wacv-heightfields,
title = {{Heightfields for Efficient Scene Reconstruction for AR}},
author = {Watson, Jamie and Vicente, Sara and Mac Aodha, Oisin and Godard, Clément and Brostow, Gabriel and Firman, Michael},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2023},
pages = {5850--5860},
url = {https://mlanthology.org/wacv/2023/watson2023wacv-heightfields/}
}