Grouping Crowd-Sourced Mobile Videos for Cross-Camera Tracking
Abstract
Public adoption of camera-equipped mobile phones has given the average observer of an event the ability to capture their perspective and upload the video for online viewing (e.g. YouTube). When traditional wide-area surveillance systems fail to capture an area or time of interest, crowd-sourced videos can provide the information needed for event reconstruction. This paper presents the first end-to-end method for automatic cross-camera tracking from crowd-sourced mobile video data. Our processing (1) sorts videos into overlapping space-time groups, (2) finds the inter-camera relationships from objects within each view, and (3) provides an end user with multiple stabilized views of tracked objects. We demonstrate the system's effectiveness on a real dataset collected from YouTube.
Cite
Text
Frey and Antone. "Grouping Crowd-Sourced Mobile Videos for Cross-Camera Tracking." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2013. doi:10.1109/CVPRW.2013.120
Markdown
[Frey and Antone. "Grouping Crowd-Sourced Mobile Videos for Cross-Camera Tracking." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2013.](https://mlanthology.org/cvprw/2013/frey2013cvprw-grouping/) doi:10.1109/CVPRW.2013.120
BibTeX
@inproceedings{frey2013cvprw-grouping,
title = {{Grouping Crowd-Sourced Mobile Videos for Cross-Camera Tracking}},
author = {Frey, Nathan and Antone, Matthew},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2013},
pages = {800--807},
doi = {10.1109/CVPRW.2013.120},
url = {https://mlanthology.org/cvprw/2013/frey2013cvprw-grouping/}
}