Integration of Top-Down and Bottom-up Information for Image Labeling

Abstract

This paper proposes a novel framework that integrates bottom-up and top-down information for scene understanding. Bottom-up information is derived from local texture and color features, while top-down information is generated from holistic image context. The two sources are integrated effectively through an extension of the Ising model, a simple model of ferromagnetism, and locally and globally consistent image recognition is achieved through an iterative process. The proposed method achieved 91.8% accuracy in road-image labeling, surpassing both the result obtained using bottom-up information alone (81.9%) and the best accuracy reported for an existing method (90.7%).
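To illustrate the flavor of Ising-style label integration described above, here is a minimal sketch (not the authors' actual formulation): bottom-up evidence enters as per-pixel unary scores, an Ising-like pairwise coupling encourages neighboring pixels to agree, and labels are refined iteratively (here via simple iterated conditional modes). The function name, the coupling parameter `beta`, and the binary-label setting are all illustrative assumptions.

```python
import numpy as np

def icm_ising(unary, beta=1.0, iters=10):
    """Iterative labeling on a binary Ising-like grid MRF (illustrative sketch).

    unary: (H, W, 2) array of bottom-up scores per pixel (higher = more likely)
    beta:  assumed coupling strength rewarding agreement between 4-neighbors
    Returns an (H, W) array of 0/1 labels.
    """
    H, W, _ = unary.shape
    labels = unary.argmax(axis=2)  # initialize from bottom-up evidence alone
    for _ in range(iters):
        changed = False
        for i in range(H):
            for j in range(W):
                # collect labels of the 4-connected neighbors
                nbrs = []
                if i > 0:
                    nbrs.append(labels[i - 1, j])
                if i < H - 1:
                    nbrs.append(labels[i + 1, j])
                if j > 0:
                    nbrs.append(labels[i, j - 1])
                if j < W - 1:
                    nbrs.append(labels[i, j + 1])
                # pick the label maximizing unary score + Ising agreement bonus
                best, best_score = labels[i, j], -np.inf
                for k in (0, 1):
                    score = unary[i, j, k] + beta * sum(1 for n in nbrs if n == k)
                    if score > best_score:
                        best_score, best = score, k
                if best != labels[i, j]:
                    labels[i, j] = best
                    changed = True
        if not changed:  # converged: no pixel changed in a full sweep
            break
    return labels
```

With a sufficiently strong coupling, an isolated pixel whose bottom-up score weakly disagrees with its surroundings is flipped to the contextually consistent label, which is the kind of local/global reconciliation the iterative process aims at.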

Cite

Text

Toyoda et al. "Integration of Top-Down and Bottom-up Information for Image Labeling." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.156

Markdown

[Toyoda et al. "Integration of Top-Down and Bottom-up Information for Image Labeling." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/toyoda2006cvpr-integration/) doi:10.1109/CVPR.2006.156

BibTeX

@inproceedings{toyoda2006cvpr-integration,
  title     = {{Integration of Top-Down and Bottom-up Information for Image Labeling}},
  author    = {Toyoda, Takahiro and Tagami, Keisuke and Hasegawa, Osamu},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2006},
  pages     = {1106--1113},
  doi       = {10.1109/CVPR.2006.156},
  url       = {https://mlanthology.org/cvpr/2006/toyoda2006cvpr-integration/}
}