Graphical Model-Based Learning in High Dimensional Feature Spaces
Abstract
Digital media tend to combine text and images to convey richer information, especially on image-hosting and online-shopping websites. This trend presents a challenge: understanding content that spans different forms of information. Features representing visual information are usually sparse in a high-dimensional space, which makes the learning process intractable. To understand text together with its related visual information, we present a new graphical model-based approach that discovers more meaningful information in rich media. We extend the standard Latent Dirichlet Allocation (LDA) framework to learn in high-dimensional feature spaces.
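For orientation, here is a minimal sketch of the standard LDA baseline that the abstract extends, fit on a simulated sparse, high-dimensional feature matrix with scikit-learn. The corpus size, vocabulary size, and topic count are illustrative assumptions, not the paper's experimental setup, and the paper's actual extension is not reproduced here.

# A sketch of standard LDA on sparse high-dimensional features (scikit-learn).
# All sizes below are illustrative assumptions, not the paper's configuration.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Simulate sparse counts in a high-dimensional feature space, e.g. a
# visual-word vocabulary of 10,000 features over 200 documents.
n_docs, n_features = 200, 10_000
X = sparse_random(n_docs, n_features, density=0.001, format="csr",
                  random_state=0,
                  data_rvs=lambda n: rng.integers(1, 5, size=n))

# Fit plain LDA; the paper's contribution modifies this standard framework
# to cope with such sparse, high-dimensional inputs.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(X)  # (n_docs, n_topics) topic proportions
print(doc_topics.shape)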
Cite
Text
Song and Zhu. "Graphical Model-Based Learning in High Dimensional Feature Spaces." AAAI Conference on Artificial Intelligence, 2013. doi:10.1609/AAAI.V27I1.8533Markdown
[Song and Zhu. "Graphical Model-Based Learning in High Dimensional Feature Spaces." AAAI Conference on Artificial Intelligence, 2013.](https://mlanthology.org/aaai/2013/song2013aaai-graphical/) doi:10.1609/AAAI.V27I1.8533BibTeX
@inproceedings{song2013aaai-graphical,
title = {{Graphical Model-Based Learning in High Dimensional Feature Spaces}},
author = {Song, Zhao and Zhu, Yuke},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2013},
  pages = {1641--1642},
doi = {10.1609/AAAI.V27I1.8533},
url = {https://mlanthology.org/aaai/2013/song2013aaai-graphical/}
}