Tracking Across Multiple Cameras with Disjoint Views

Abstract

Conventional tracking approaches assume proximity in space, time, and appearance of objects in successive observations. However, observations of objects are often widely separated in time and space when viewed from multiple non-overlapping cameras. To address this problem, we present a novel approach for establishing object correspondence across non-overlapping cameras. Our multi-camera tracking algorithm exploits the redundancy in the paths that people and cars tend to follow, e.g. roads, walkways, or corridors, by using motion trends and appearance of objects to establish correspondence. Our system does not require any inter-camera calibration; instead, the system learns the camera topology and path probabilities of objects using Parzen windows during a training phase. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework. The learned parameters are updated with changing trajectory patterns. Experiments with real-world videos are reported, which validate the proposed approach.
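To make the core idea concrete, here is a minimal sketch of a Parzen-window (kernel density) estimate over inter-camera transit times, used to score candidate correspondences. The sample values, bandwidth, and object names are hypothetical illustrations, not the paper's actual data; the paper's full model also incorporates appearance and exit/entry locations.

```python
import math

def parzen_density(x, samples, bandwidth):
    """Parzen-window density estimate at x using a Gaussian kernel.

    Averages a Gaussian bump centered on each training sample; the
    bandwidth controls how much each sample spreads its probability mass.
    """
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    )

# Hypothetical training data: observed travel times (seconds) between
# the exit of camera 1 and the entry of camera 2.
transit_times = [8.2, 9.1, 9.8, 10.4, 11.0, 30.5]

# MAP-style decision (with a uniform prior over candidates): pick the
# candidate whose observed transit time has the highest learned density.
candidates = {"obj_A": 9.5, "obj_B": 25.0}
best = max(
    candidates,
    key=lambda k: parzen_density(candidates[k], transit_times, bandwidth=1.5),
)
print(best)  # obj_A: its transit time lies in the dense cluster near 10 s
```

With a uniform prior this reduces to maximum likelihood; the paper's MAP framework additionally weights candidates by learned path probabilities and appearance similarity.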

Cite

Text

Javed et al. "Tracking Across Multiple Cameras with Disjoint Views." IEEE/CVF International Conference on Computer Vision, 2003. doi:10.1109/ICCV.2003.1238451

Markdown

[Javed et al. "Tracking Across Multiple Cameras with Disjoint Views." IEEE/CVF International Conference on Computer Vision, 2003.](https://mlanthology.org/iccv/2003/javed2003iccv-tracking/) doi:10.1109/ICCV.2003.1238451

BibTeX

@inproceedings{javed2003iccv-tracking,
  title     = {{Tracking Across Multiple Cameras with Disjoint Views}},
  author    = {Javed, Omar and Rasheed, Zeeshan and Shafique, Khurram and Shah, Mubarak},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2003},
  pages     = {952--957},
  doi       = {10.1109/ICCV.2003.1238451},
  url       = {https://mlanthology.org/iccv/2003/javed2003iccv-tracking/}
}