Co-Regularized PLSA for Multi-Modal Learning

Abstract

Many learning problems in real-world applications involve rich datasets comprising multiple information modalities. In this work, we study co-regularized PLSA (coPLSA) as an efficient solution to probabilistic topic analysis of multi-modal data. In coPLSA, similarities between the topic compositions of a data entity across different data modalities are measured with divergences between discrete probabilities, which are incorporated as a co-regularizer to augment individual PLSA models over each data modality. We derive efficient iterative learning algorithms for coPLSA with symmetric KL, L2, and L1 divergences as co-regularizers; in each case, the essential optimization problem affords simple solutions that entail only matrix arithmetic and the numerical solution of 1D nonlinear equations. We evaluate the coPLSA algorithms on text/image cross-modal retrieval tasks, where they show performance competitive with state-of-the-art methods.
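The core idea in the abstract, a penalty coupling an entity's per-modality topic distributions, can be sketched as follows. This is an illustrative outline only, not the paper's EM algorithm: the function names (`sym_kl`, `coregularizer`), the smoothing constant `eps`, and the weight `lam` are assumptions for the example.

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete distributions.

    `eps` is a small smoothing constant (an assumption here) to avoid
    log-of-zero when a topic has zero probability in one modality.
    """
    kl_pq = sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log((qi + eps) / (pi + eps)) for pi, qi in zip(p, q))
    return kl_pq + kl_qp

def coregularizer(topics_a, topics_b, lam=1.0):
    """Co-regularization penalty summed over data entities.

    topics_a[i] and topics_b[i] are entity i's topic compositions under
    the two modality-specific PLSA models (e.g., text and image);
    `lam` weighs the penalty against the per-modality likelihoods.
    """
    return lam * sum(sym_kl(p, q) for p, q in zip(topics_a, topics_b))

# When the two modalities agree on an entity's topics, the penalty vanishes;
# the more they disagree, the larger the penalty added to the PLSA objectives.
print(coregularizer([[0.5, 0.5]], [[0.5, 0.5]]))  # agreement: ~0
print(coregularizer([[0.9, 0.1]], [[0.1, 0.9]]))  # disagreement: > 0
```

The L2 or L1 variants mentioned in the abstract would simply swap `sym_kl` for the corresponding vector distance between the two distributions.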

Cite

Text

Wang et al. "Co-Regularized PLSA for Multi-Modal Learning." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10204

Markdown

[Wang et al. "Co-Regularized PLSA for Multi-Modal Learning." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/wang2016aaai-co/) doi:10.1609/AAAI.V30I1.10204

BibTeX

@inproceedings{wang2016aaai-co,
  title     = {{Co-Regularized PLSA for Multi-Modal Learning}},
  author    = {Wang, Xin and Chang, Ming-Ching and Ying, Yiming and Lyu, Siwei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {2166--2172},
  doi       = {10.1609/AAAI.V30I1.10204},
  url       = {https://mlanthology.org/aaai/2016/wang2016aaai-co/}
}