Egocentric Field-of-View Localization Using First-Person Point-of-View Devices
Abstract
We present a technique that uses images, videos, and sensor data captured from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information in a person's field of view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, thereby determining what the person is attending to. Our method matches images and video taken from the first-person perspective against the reference corpus and refines the results using the wearer's head-orientation information obtained from the device sensors. We demonstrate single- and multi-user egocentric FOV localization in a variety of indoor and outdoor environments, with applications in augmented reality, event understanding, and the study of social interactions.
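The paper itself details the full pipeline; the sketch below is only a minimal illustration of the core idea under stated assumptions: SIFT feature matching with Lowe's ratio test stands in for the matching stage, and a simple compass-heading window stands in for the sensor-based refinement. The function name `localize_fov`, the corpus layout, and the 45° tolerance are all illustrative choices, not taken from the paper.

```python
import cv2  # OpenCV >= 4.4 for cv2.SIFT_create


def localize_fov(query_img, query_heading_deg, corpus, heading_tol_deg=45.0):
    """Find the reference image that best matches the query view.

    query_img: grayscale uint8 array (e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE)).
    corpus: iterable of (name, image, heading_deg) tuples, where heading_deg
    is the compass heading at which the reference image was captured.
    """
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, des_q = sift.detectAndCompute(query_img, None)
    if des_q is None:
        return None, 0

    best_name, best_score = None, 0
    for name, ref_img, ref_heading_deg in corpus:
        # Sensor refinement: discard references whose capture direction
        # differs too much from the device-reported head orientation.
        diff = abs((query_heading_deg - ref_heading_deg + 180.0) % 360.0 - 180.0)
        if diff > heading_tol_deg:
            continue

        _, des_r = sift.detectAndCompute(ref_img, None)
        if des_r is None:
            continue

        # Lowe's ratio test keeps only distinctive correspondences.
        matches = matcher.knnMatch(des_q, des_r, k=2)
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_name, best_score = name, len(good)

    return best_name, best_score
```

Gating candidates by heading before visual matching, as done here, cheaply prunes most of the corpus and suppresses false matches between visually similar but differently oriented views; whether this matches the paper's exact ordering of steps is an assumption of this sketch.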
Cite
Text
Bettadapura et al. "Egocentric Field-of-View Localization Using First-Person Point-of-View Devices." IEEE/CVF Winter Conference on Applications of Computer Vision, 2015. doi:10.1109/WACV.2015.89
Markdown
[Bettadapura et al. "Egocentric Field-of-View Localization Using First-Person Point-of-View Devices." IEEE/CVF Winter Conference on Applications of Computer Vision, 2015.](https://mlanthology.org/wacv/2015/bettadapura2015wacv-egocentric/) doi:10.1109/WACV.2015.89
BibTeX
@inproceedings{bettadapura2015wacv-egocentric,
  title     = {{Egocentric Field-of-View Localization Using First-Person Point-of-View Devices}},
  author    = {Bettadapura, Vinay and Essa, Irfan A. and Pantofaru, Caroline},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2015},
  pages     = {626--633},
  doi       = {10.1109/WACV.2015.89},
  url       = {https://mlanthology.org/wacv/2015/bettadapura2015wacv-egocentric/}
}