Single View Physical Distance Estimation Using Human Pose

Abstract

We propose a fully automated system that simultaneously estimates the camera intrinsics, the ground plane, and physical distances between people from a single RGB image or video captured by a camera viewing a 3-D scene from a fixed vantage point. To automate camera calibration and distance estimation, we leverage priors about human pose and develop a novel direct formulation for pose-based auto-calibration and distance estimation, which shows state-of-the-art performance on publicly available datasets. The proposed approach enables existing camera systems to measure physical distances without needing a dedicated calibration process or range sensors, and is applicable to a broad range of use cases such as social distancing and workplace safety. Furthermore, to enable evaluation and drive research in this area, we contribute to the publicly available MEVA dataset with additional distance annotations, resulting in "MEVADA" -- an evaluation benchmark for the pose-based auto-calibration and distance estimation problem.
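The key intuition is that a prior on human height ties image measurements to metric scale. The sketch below is not the paper's direct formulation (which jointly recovers the intrinsics and the ground plane); it is only a back-of-envelope illustration, assuming a pinhole camera with a guessed focal length, roughly upright people, and an average height of 1.7 m, of how head and ankle keypoints can yield approximate 3-D foot positions and pairwise physical distances. All names and constants here are illustrative assumptions.

```python
import numpy as np

ASSUMED_PERSON_HEIGHT_M = 1.7   # human-height prior (assumption, not from the paper)

def foot_position_3d(head_px, ankle_px, focal_px, principal_point):
    """Approximate 3-D ankle position (camera frame) for one upright person.

    head_px, ankle_px: (u, v) pixel coordinates of head-top and ankle keypoints.
    focal_px: assumed focal length in pixels.
    principal_point: (cx, cy) image center in pixels.
    """
    head = np.asarray(head_px, dtype=float)
    ankle = np.asarray(ankle_px, dtype=float)
    cx, cy = principal_point

    # Apparent head-to-ankle height in pixels -> depth via similar triangles:
    # Z ~ f * H / h for a person of physical height H with pixel height h.
    pixel_height = np.linalg.norm(head - ankle)
    depth = focal_px * ASSUMED_PERSON_HEIGHT_M / pixel_height

    # Back-project the ankle pixel at the estimated depth.
    x = (ankle[0] - cx) / focal_px * depth
    y = (ankle[1] - cy) / focal_px * depth
    return np.array([x, y, depth])

def pairwise_distances(feet_3d):
    """Euclidean distances between every pair of estimated foot positions."""
    n = len(feet_3d)
    return {(i, j): float(np.linalg.norm(feet_3d[i] - feet_3d[j]))
            for i in range(n) for j in range(i + 1, n)}

if __name__ == "__main__":
    # Toy example: two people in a 1280x720 image, focal length guessed.
    focal, pp = 1000.0, (640.0, 360.0)
    people = [((600, 200), (610, 540)),   # (head_px, ankle_px) for person A
              ((900, 250), (905, 500))]   # person B
    feet = [foot_position_3d(h, a, focal, pp) for h, a in people]
    print(pairwise_distances(feet))
```

In this toy setup the height prior supplies the metric scale; the paper instead folds such pose priors into a joint optimization over the camera intrinsics and the ground plane, which removes the need for a per-person depth guess and for a hand-picked focal length.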

Cite

Text

Fei et al. "Single View Physical Distance Estimation Using Human Pose." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01218

Markdown

[Fei et al. "Single View Physical Distance Estimation Using Human Pose." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/fei2021iccv-single/) doi:10.1109/ICCV48922.2021.01218

BibTeX

@inproceedings{fei2021iccv-single,
  title     = {{Single View Physical Distance Estimation Using Human Pose}},
  author    = {Fei, Xiaohan and Wang, Henry and Cheong, Lin Lee and Zeng, Xiangyu and Wang, Meng and Tighe, Joseph},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {12406--12416},
  doi       = {10.1109/ICCV48922.2021.01218},
  url       = {https://mlanthology.org/iccv/2021/fei2021iccv-single/}
}