A Bayesian Approach to Multimodal Visual Dictionary Learning
Abstract
Most existing visual dictionary learning methods rely on image descriptors alone or together with class labels. However, Web images are often associated with text data that may carry substantial information about image semantics and can be exploited for visual dictionary learning. This paper explores this idea by leveraging relational information between image descriptors and textual words via co-clustering, in addition to the image descriptors themselves. Existing co-clustering methods are not optimal for this problem because they ignore the structure of the image descriptors in the continuous space, which is crucial for capturing the visual characteristics of images. We propose a novel Bayesian co-clustering model that jointly estimates the underlying distributions of the continuous image descriptors and the relationship between these distributions and the textual words through a unified Bayesian inference. Extensive experiments on image categorization and retrieval validate the substantial value of the proposed joint modeling in improving visual dictionary learning, with our model showing superior performance over several recent methods.
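To make the two-view setting concrete, below is a minimal, illustrative sketch of the general idea of pairing continuous image descriptors with co-occurring textual words. It is not the paper's Bayesian co-clustering model: all data and names (`descriptors`, `tags`, `n_visual_words`) are hypothetical, and the two steps are run sequentially here, whereas the paper's contribution is to estimate the descriptor distributions and their relationship to textual words jointly through a unified Bayesian inference.

```python
# Illustrative sketch only: NOT the paper's Bayesian co-clustering model.
# It mimics the general setting of building a visual dictionary from
# continuous image descriptors while using co-occurring textual words
# as a second view. All names and data below are hypothetical.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data: 200 local descriptors (e.g., SIFT-like, 8-D) from 20 images,
# each image tagged with one of 3 textual words.
descriptors = rng.normal(size=(200, 8))
image_id = np.repeat(np.arange(20), 10)          # 10 descriptors per image
tags = rng.integers(0, 3, size=20)               # one textual word per image

# Step 1: model the continuous descriptor space with a mixture of Gaussians;
# each component plays the role of one visual word in the dictionary.
n_visual_words = 5
gmm = GaussianMixture(n_components=n_visual_words, random_state=0).fit(descriptors)
assignments = gmm.predict(descriptors)           # visual-word index per descriptor

# Step 2: relate visual words to textual words via a co-occurrence matrix,
# a crude stand-in for the relational information exploited in the paper.
cooc = np.zeros((n_visual_words, 3))
for v, img in zip(assignments, image_id):
    cooc[v, tags[img]] += 1
cooc /= cooc.sum(axis=1, keepdims=True) + 1e-12  # P(textual word | visual word)

print("visual-word / textual-word association:\n", np.round(cooc, 2))
```

Note the design difference: this sketch fixes the visual words first and only then looks at the text, so the textual words cannot influence how the descriptor space is partitioned; the joint Bayesian treatment described in the abstract is meant to avoid exactly that limitation.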
Cite
Text
Irie et al. "A Bayesian Approach to Multimodal Visual Dictionary Learning." Conference on Computer Vision and Pattern Recognition, 2013. doi:10.1109/CVPR.2013.49
Markdown
[Irie et al. "A Bayesian Approach to Multimodal Visual Dictionary Learning." Conference on Computer Vision and Pattern Recognition, 2013.](https://mlanthology.org/cvpr/2013/irie2013cvpr-bayesian/) doi:10.1109/CVPR.2013.49
BibTeX
@inproceedings{irie2013cvpr-bayesian,
title = {{A Bayesian Approach to Multimodal Visual Dictionary Learning}},
author = {Irie, Go and Liu, Dong and Li, Zhenguo and Chang, Shih-Fu},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2013},
doi = {10.1109/CVPR.2013.49},
url = {https://mlanthology.org/cvpr/2013/irie2013cvpr-bayesian/}
}