Structuring Visual Words in 3D for Arbitrary-View Object Localization

Abstract

We propose a novel and efficient method for generic arbitrary-view object class detection and localization. In contrast to existing single-view and multi-view methods, which use complicated mechanisms to relate structural information across different parts of the objects or across different viewpoints, we aim at representing structural information at its true 3D locations. Uncalibrated multi-view images from a hand-held camera are used to reconstruct the 3D visual word models in the training stage. In the testing stage, beyond bounding boxes, our method can automatically determine the locations and outlines of multiple objects in the test image with occlusion handling, and can accurately estimate both the intrinsic and extrinsic camera parameters in an optimized way. With exemplar models, our method can also handle shape deformation arising from intra-class variance. To handle the large amount of data in the models, we propose several speedup techniques that make prediction efficient. Experimental results on standard data sets demonstrate the effectiveness of the proposed approach.

Cite

Text

Xiao et al. "Structuring Visual Words in 3D for Arbitrary-View Object Localization." European Conference on Computer Vision, 2008. doi:10.1007/978-3-540-88690-7_54

Markdown

[Xiao et al. "Structuring Visual Words in 3D for Arbitrary-View Object Localization." European Conference on Computer Vision, 2008.](https://mlanthology.org/eccv/2008/xiao2008eccv-structuring/) doi:10.1007/978-3-540-88690-7_54

BibTeX

@inproceedings{xiao2008eccv-structuring,
  title     = {{Structuring Visual Words in 3D for Arbitrary-View Object Localization}},
  author    = {Xiao, Jianxiong and Chen, Jingni and Yeung, Dit-Yan and Quan, Long},
  booktitle = {European Conference on Computer Vision},
  year      = {2008},
  pages     = {725--737},
  doi       = {10.1007/978-3-540-88690-7_54},
  url       = {https://mlanthology.org/eccv/2008/xiao2008eccv-structuring/}
}