Multimodal Partial Estimates Fusion

Abstract

Fusing partial estimates is a critical and common problem in many computer vision tasks such as part-based detection and tracking. It generally becomes complicated and intractable when there are a large number of multimodal partial estimates, so an effective and scalable fusion method is needed to integrate them. This paper presents a novel and effective approach to fusing multimodal partial estimates in a principled way. In this new approach, fusion is cast as the computational geometry problem of finding a minimum-volume orthotope, and an effective and scalable branch and bound search algorithm is designed to obtain the globally optimal solution. Experiments on tracking articulated objects and occluded objects show the effectiveness of the proposed approach.
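The abstract's core idea, casting fusion as a minimum-volume orthotope search solved by branch and bound, can be illustrated with a toy sketch. The concrete formulation below (pick one candidate mode from each multimodal partial estimate so that the axis-aligned bounding box of the chosen modes has minimum volume) is an assumption made for illustration, not necessarily the paper's exact objective, and all function names are hypothetical.

```python
def bbox_volume(points):
    """Volume of the axis-aligned bounding box (orthotope) of the points."""
    vol = 1.0
    for coords in zip(*points):
        vol *= max(coords) - min(coords)
    return vol

def min_volume_orthotope(estimates):
    """Choose one candidate mode from each partial estimate so that the
    bounding orthotope of the chosen modes has minimum volume.

    `estimates` is a list of lists of d-dimensional points (the modes of
    each multimodal partial estimate).  Depth-first branch and bound:
    branch on the mode chosen for each estimate in turn, and prune a
    branch as soon as the box around the modes chosen so far already
    reaches the best volume found, since the box can only grow."""
    best_vol, best_pick = float('inf'), None

    def recurse(i, chosen):
        nonlocal best_vol, best_pick
        if i == len(estimates):
            best_vol, best_pick = bbox_volume(chosen), list(chosen)
            return
        for mode in estimates[i]:
            chosen.append(mode)
            # Lower bound: adding more modes never shrinks the box,
            # so a partial box at or above best_vol can be pruned.
            if bbox_volume(chosen) < best_vol:
                recurse(i + 1, chosen)
            chosen.pop()

    recurse(0, [])
    return best_vol, best_pick

# Three partial estimates, each with two candidate 2-D modes:
modes = [[(0, 0), (5, 5)], [(1, 1), (9, 0)], [(0.5, 1.5), (8, 8)]]
print(min_volume_orthotope(modes))  # → (1.5, [(0, 0), (1, 1), (0.5, 1.5)])
```

The monotone bound (box volume is non-decreasing as modes are added) is what lets the search prune most of the exponential space of mode combinations while still guaranteeing the global optimum.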

Cite

Text

Xu et al. "Multimodal Partial Estimates Fusion." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459475

Markdown

[Xu et al. "Multimodal Partial Estimates Fusion." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/xu2009iccv-multimodal/) doi:10.1109/ICCV.2009.5459475

BibTeX

@inproceedings{xu2009iccv-multimodal,
  title     = {{Multimodal Partial Estimates Fusion}},
  author    = {Xu, Jiang and Yuan, Junsong and Wu, Ying},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2009},
  pages     = {2177--2184},
  doi       = {10.1109/ICCV.2009.5459475},
  url       = {https://mlanthology.org/iccv/2009/xu2009iccv-multimodal/}
}