An Adaptive Learning Method for Target Tracking Across Multiple Cameras
Abstract
This paper proposes an adaptive learning method for tracking targets across multiple cameras with disjoint views. Two visual cues are usually employed for tracking targets across cameras: the spatio-temporal cue and the appearance cue. To learn the relationships among cameras, traditional methods rely on batch-learning procedures or hand-labeled correspondences, which work well only over a short period of time. In this paper, we propose an unsupervised method that learns both spatio-temporal relationships and appearance relationships adaptively and can be applied to long-term monitoring. Our method tracks targets across multiple cameras while accounting for environmental changes, such as sudden lighting changes. We also improve the estimation of spatio-temporal relationships by using prior knowledge of the camera network topology.
Cite
Text
Chen et al. "An Adaptive Learning Method for Target Tracking Across Multiple Cameras." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008. doi:10.1109/CVPR.2008.4587505
Markdown
[Chen et al. "An Adaptive Learning Method for Target Tracking Across Multiple Cameras." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008.](https://mlanthology.org/cvpr/2008/chen2008cvpr-adaptive/) doi:10.1109/CVPR.2008.4587505
BibTeX
@inproceedings{chen2008cvpr-adaptive,
title = {{An Adaptive Learning Method for Target Tracking Across Multiple Cameras}},
author = {Chen, Kuan-Wen and Lai, Chih-Chuan and Hung, Yi-Ping and Chen, Chu-Song},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2008},
doi = {10.1109/CVPR.2008.4587505},
url = {https://mlanthology.org/cvpr/2008/chen2008cvpr-adaptive/}
}