Continuous Global Evidence-Based Bayesian Modality Fusion for Simultaneous Tracking of Multiple Objects

Abstract

Robust, real-time tracking of objects from visual data requires probabilistic fusion of multiple visual cues. Previous approaches have either been ad hoc or relied on a Bayesian network with discrete spatial variables, which suffers from discretisation and computational-complexity problems. We present a new Bayesian modality fusion network that uses continuous domain variables. The network architecture distinguishes between cues that are necessary or unnecessary for the object's presence. Computationally expensive and inexpensive modalities are also handled differently to minimise cost. The result is a formal, tractable and robust probabilistic framework for simultaneously tracking multiple objects. While instantaneous inference is exact, approximation is required for propagation over time.
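As a rough illustration of continuous-variable cue fusion (not the paper's actual network), independent Gaussian cue estimates of an object's state can be combined by a product of Gaussians: the fused precision is the sum of cue precisions, and the fused mean is the precision-weighted average. The cue names and values below are hypothetical.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Fuse independent Gaussian cue estimates of an object's position.

    Product-of-Gaussians rule: fused precision is the sum of the cue
    precisions; fused mean is the precision-weighted average of the
    cue means. This is the continuous-variable analogue of combining
    modality evidence, avoiding a discretised spatial grid.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * np.asarray(means, dtype=float)).sum()
    return fused_mean, fused_var

# Hypothetical example: three cues (e.g. colour, motion, shape) each
# estimate an object's x-position; the most confident cue (smallest
# variance) dominates the fused estimate.
mean, var = fuse_gaussian_cues(means=[10.0, 12.0, 11.0],
                               variances=[1.0, 4.0, 2.0])
```

Note that the fused variance is always smaller than any single cue's variance, reflecting the accumulation of evidence across modalities.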

Cite

Text

Sherrah and Gong. "Continuous Global Evidence-Based Bayesian Modality Fusion for Simultaneous Tracking of Multiple Objects." IEEE/CVF International Conference on Computer Vision, 2001. doi:10.1109/ICCV.2001.937596

Markdown

[Sherrah and Gong. "Continuous Global Evidence-Based Bayesian Modality Fusion for Simultaneous Tracking of Multiple Objects." IEEE/CVF International Conference on Computer Vision, 2001.](https://mlanthology.org/iccv/2001/sherrah2001iccv-continuous/) doi:10.1109/ICCV.2001.937596

BibTeX

@inproceedings{sherrah2001iccv-continuous,
  title     = {{Continuous Global Evidence-Based Bayesian Modality Fusion for Simultaneous Tracking of Multiple Objects}},
  author    = {Sherrah, Jamie and Gong, Shaogang},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2001},
  pages     = {42--49},
  doi       = {10.1109/ICCV.2001.937596},
  url       = {https://mlanthology.org/iccv/2001/sherrah2001iccv-continuous/}
}