Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision
Abstract
We present a probabilistic framework for learning pairwise similarities between objects from different modalities, such as drugs and proteins, or text and images. Our framework learns a binary-code representation for the objects in each modality and has the following key properties: (i) it can leverage both pairwise constraints and easy-to-obtain relative-preference-based cross-modal constraints; (ii) the probabilistic formulation naturally allows querying for the most useful/informative constraints, enabling an active learning setting (existing methods for cross-modal similarity learning lack such a mechanism); and (iii) the binary code length is learned from the data. We demonstrate the effectiveness of the proposed approach on two problems that require computing pairwise similarities between cross-modal object pairs: cross-modal link prediction in bipartite graphs, and hashing-based cross-modal similarity search.
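To make the abstract's two core ideas concrete, below is a minimal, illustrative sketch: cross-modal binary codes fit from pairwise constraints via a logistic model over code inner products, plus an uncertainty-based active query for the next constraint to label. This is not the authors' model; all names (`train`, `most_informative_pair`, the tanh relaxation, the linear hash functions) are assumptions for illustration, and the paper's preference constraints and learned code length are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(X, W):
    """Binarize linear projections into {-1, +1} codes (sketch; sign(0)=0)."""
    return np.sign(X @ W)

def link_prob(bx, by):
    """Logistic probability that a cross-modal pair is linked/similar."""
    return 1.0 / (1.0 + np.exp(-bx @ by))

def train(X, Y, pairs, labels, dim=8, lr=0.01, epochs=200):
    """Fit linear hash functions Wx, Wy from pairwise constraints.

    pairs  : list of (i, j) index pairs across the two modalities
    labels : 1 for similar pairs, 0 for dissimilar pairs
    Uses a continuous tanh relaxation of the binary codes so the
    logistic loss is differentiable; codes are binarized afterwards.
    """
    Wx = rng.normal(scale=0.1, size=(X.shape[1], dim))
    Wy = rng.normal(scale=0.1, size=(Y.shape[1], dim))
    for _ in range(epochs):
        for (i, j), s in zip(pairs, labels):
            bx, by = np.tanh(X[i] @ Wx), np.tanh(Y[j] @ Wy)
            p = 1.0 / (1.0 + np.exp(-bx @ by))
            g = p - s  # gradient of the logistic loss w.r.t. the inner product
            Wx -= lr * g * np.outer(X[i], by * (1 - bx**2))
            Wy -= lr * g * np.outer(Y[j], bx * (1 - by**2))
    return Wx, Wy

def most_informative_pair(X, Y, Wx, Wy, candidates):
    """Active query: return the candidate pair with maximum uncertainty,
    i.e. predicted link probability closest to 0.5."""
    Bx, By = hash_codes(X, Wx), hash_codes(Y, Wy)
    return min(candidates, key=lambda ij: abs(link_prob(Bx[ij[0]], By[ij[1]]) - 0.5))

# Illustrative usage on synthetic data (shapes and labels are arbitrary).
X = rng.normal(size=(50, 20))            # modality 1 features (e.g., images)
Y = rng.normal(size=(60, 30))            # modality 2 features (e.g., text)
pairs = [(i, i % 60) for i in range(50)]
labels = rng.integers(0, 2, size=50)
Wx, Wy = train(X, Y, pairs, labels)
print(most_informative_pair(X, Y, Wx, Wy, [(0, 1), (2, 3), (4, 5)]))
```

The active query above is the standard maximum-uncertainty heuristic: under a logistic model, the pair whose predicted link probability is nearest 0.5 carries the most information, which mirrors (in simplified form) the abstract's claim that a probabilistic framework naturally supports querying for informative constraints.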
Cite
Text
Zhen et al. "Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9599
Markdown
[Zhen et al. "Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/zhen2015aaai-cross/) doi:10.1609/AAAI.V29I1.9599
BibTeX
@inproceedings{zhen2015aaai-cross,
title = {{Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision}},
author = {Zhen, Yi and Rai, Piyush and Zha, Hongyuan and Carin, Lawrence},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2015},
pages = {3203--3209},
doi = {10.1609/AAAI.V29I1.9599},
url = {https://mlanthology.org/aaai/2015/zhen2015aaai-cross/}
}