HDR Environment Map Estimation for Real-Time Augmented Reality
Abstract
We present a method to estimate an HDR environment map from a narrow field-of-view LDR camera image in real time. This enables perceptually appealing reflections and shading on virtual objects of any material finish, from mirror to diffuse, rendered into a real environment using augmented reality. Our method is based on our efficient convolutional neural network, EnvMapNet, trained end-to-end with two novel losses: ProjectionLoss for the generated image and ClusterLoss for adversarial training. Through qualitative and quantitative comparison to state-of-the-art methods, we demonstrate that our algorithm reduces the directional error of estimated light sources by more than 50% and achieves 3.7 times lower Fréchet Inception Distance (FID). We further showcase a mobile application that is able to run our neural network model in under 9 ms on an iPhone XS and render visually coherent virtual objects in real time in previously unseen real-world environments.
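The evaluation above reports the directional error of estimated light sources. As a minimal illustrative sketch (not the paper's implementation; the exact metric definition is in the full text), such an error can be measured as the angle between an estimated and a ground-truth dominant light direction. The function name and example vectors below are hypothetical.

import numpy as np

def angular_error_deg(d_est, d_gt):
    # Hypothetical helper: angle in degrees between two 3D light directions.
    d_est = np.asarray(d_est, dtype=np.float64)
    d_gt = np.asarray(d_gt, dtype=np.float64)
    # Normalize so the dot product equals the cosine of the angle.
    d_est /= np.linalg.norm(d_est)
    d_gt /= np.linalg.norm(d_gt)
    cos_angle = np.clip(np.dot(d_est, d_gt), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Example: an estimate 10 degrees off from the true light direction.
print(angular_error_deg([0.9848, 0.1736, 0.0], [1.0, 0.0, 0.0]))  # ~10.0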
Cite
Text
Somanath and Kurz. "HDR Environment Map Estimation for Real-Time Augmented Reality." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01114

Markdown
[Somanath and Kurz. "HDR Environment Map Estimation for Real-Time Augmented Reality." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/somanath2021cvpr-hdr/) doi:10.1109/CVPR46437.2021.01114

BibTeX
@inproceedings{somanath2021cvpr-hdr,
title = {{HDR Environment Map Estimation for Real-Time Augmented Reality}},
author = {Somanath, Gowri and Kurz, Daniel},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {11298-11306},
doi = {10.1109/CVPR46437.2021.01114},
url = {https://mlanthology.org/cvpr/2021/somanath2021cvpr-hdr/}
}