Auxiliary Information Regularized Machine for Multiple Modality Feature Learning

Abstract

In real-world applications, data often come with multiple modalities. Previous works assumed that each modality contains sufficient information for the target and can be treated with equal importance. However, different modalities often vary in importance in real tasks; e.g., in ID recognition the facial feature is a weak modality while the fingerprint feature is a strong modality. In this paper, we point out that different modalities should be treated with different strategies and propose the Auxiliary information Regularized Machine (ARM), which works by extracting the most discriminative feature subspace of the weak modality while regularizing the strong-modality predictor. Experiments on binary and multi-class datasets demonstrate the advantages of the proposed ARM approach.
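The abstract's core idea can be sketched on toy data: project the weak modality onto a discriminative subspace, then use those projected scores as auxiliary information when fitting the strong-modality predictor. The sketch below is only an illustration of that idea, not the paper's actual ARM formulation; the Fisher-style projection, the ridge loss, the penalty weights `lam_ridge` and `lam_aux`, and the synthetic data are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-modality binary data (an assumed setup, not the paper's
# benchmarks): the "strong" modality is cleanly separable, the
# "weak" modality is noisy.
n = 200
y = rng.integers(0, 2, n) * 2 - 1            # labels in {-1, +1}
X_strong = y[:, None] * 1.0 + rng.normal(0, 0.5, (n, 5))
X_weak = y[:, None] * 0.3 + rng.normal(0, 1.0, (n, 8))

# Step 1: a Fisher-style discriminative direction for the weak
# modality -- one illustrative choice of "most discriminative
# feature subspace" (1-D here for simplicity).
mu_pos = X_weak[y == 1].mean(axis=0)
mu_neg = X_weak[y == -1].mean(axis=0)
Sw = np.cov(X_weak[y == 1].T) + np.cov(X_weak[y == -1].T)
v = np.linalg.solve(Sw + 1e-6 * np.eye(8), mu_pos - mu_neg)
aux = X_weak @ v                             # auxiliary 1-D scores
aux = aux / np.abs(aux).max()                # scale to label range

# Step 2: ridge-style linear predictor on the strong modality with
# an extra quadratic penalty pulling its scores toward the weak
# modality's auxiliary scores (lam_aux plays the role of the
# auxiliary-information regularizer).  Minimizing
#   ||Xw - y||^2 + lam_aux ||Xw - aux||^2 + lam_ridge ||w||^2
# gives the closed-form solution below.
lam_ridge, lam_aux = 1.0, 0.5
A = (1 + lam_aux) * X_strong.T @ X_strong + lam_ridge * np.eye(5)
b = X_strong.T @ (y + lam_aux * aux)
w = np.linalg.solve(A, b)

acc = np.mean(np.sign(X_strong @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

The design point this illustrates is the asymmetry the abstract argues for: the weak modality is compressed to its discriminative part and used only as a regularizer, while the strong modality carries the predictor itself.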

Cite

Text

Yang et al. "Auxiliary Information Regularized Machine for Multiple Modality Feature Learning." International Joint Conference on Artificial Intelligence, 2015.

Markdown

[Yang et al. "Auxiliary Information Regularized Machine for Multiple Modality Feature Learning." International Joint Conference on Artificial Intelligence, 2015.](https://mlanthology.org/ijcai/2015/yang2015ijcai-auxiliary/)

BibTeX

@inproceedings{yang2015ijcai-auxiliary,
  title     = {{Auxiliary Information Regularized Machine for Multiple Modality Feature Learning}},
  author    = {Yang, Yang and Ye, Han-Jia and Zhan, De-Chuan and Jiang, Yuan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {1033-1039},
  url       = {https://mlanthology.org/ijcai/2015/yang2015ijcai-auxiliary/}
}