Multi-View Multi-Label Learning with View-Specific Information Extraction
Abstract
Multi-view multi-label learning serves as an important framework for learning from objects with diverse representations and rich semantics. Existing multi-view multi-label learning techniques focus on exploiting a shared subspace to fuse multi-view representations, while view-specific information that is helpful for discriminative modeling is usually ignored. In this paper, a novel multi-view multi-label learning approach named SIMM is proposed, which leverages both shared subspace exploitation and view-specific information extraction. For shared subspace exploitation, SIMM jointly minimizes a confusion adversarial loss and a multi-label loss to utilize the information shared across all views. For view-specific information extraction, SIMM enforces an orthogonal constraint w.r.t. the shared subspace to utilize view-specific discriminative information. Extensive experiments on real-world data sets clearly show the favorable performance of SIMM against other state-of-the-art multi-view multi-label learning approaches.
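To make the two objectives concrete, below is a minimal PyTorch sketch of the loss composition the abstract describes: per-view shared and view-specific encoders, a confusion adversarial loss on the shared codes, a multi-label classification loss, and an orthogonality penalty between shared and view-specific codes. All names (`SIMMSketch`, `lambda_adv`, `lambda_orth`), the linear encoders, and the uniform-target form of the confusion loss are illustrative assumptions, not the paper's actual architecture or formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SIMMSketch(nn.Module):
    """Hypothetical SIMM-style model: shared + view-specific encoders."""

    def __init__(self, view_dims, shared_dim, specific_dim, num_labels):
        super().__init__()
        self.num_views = len(view_dims)
        # One shared-subspace encoder per view; adversarial training should
        # make their outputs indistinguishable across views.
        self.shared_encoders = nn.ModuleList(
            nn.Linear(d, shared_dim) for d in view_dims)
        # One view-specific encoder per view, to be kept orthogonal to the
        # shared representation.
        self.specific_encoders = nn.ModuleList(
            nn.Linear(d, specific_dim) for d in view_dims)
        # View discriminator: predicts which view a shared code came from.
        self.discriminator = nn.Linear(shared_dim, self.num_views)
        # Multi-label classifier over fused shared + all specific codes.
        self.classifier = nn.Linear(
            shared_dim + self.num_views * specific_dim, num_labels)

    def forward(self, views):
        shared = [enc(x) for enc, x in zip(self.shared_encoders, views)]
        specific = [enc(x) for enc, x in zip(self.specific_encoders, views)]
        fused_shared = torch.stack(shared).mean(dim=0)
        logits = self.classifier(torch.cat([fused_shared] + specific, dim=1))
        return shared, specific, logits

def simm_losses(model, views, targets, lambda_adv=0.1, lambda_orth=0.1):
    shared, specific, logits = model(views)
    # Multi-label loss: binary cross-entropy over the (float) label vector.
    ml_loss = F.binary_cross_entropy_with_logits(logits, targets)
    # Confusion adversarial loss (one common variant): push the
    # discriminator's posterior toward uniform so shared codes carry
    # no view identity.
    adv_loss = 0.0
    for s in shared:
        log_probs = F.log_softmax(model.discriminator(s), dim=1)
        adv_loss = adv_loss - log_probs.mean()  # cross-entropy w/ uniform
    # Orthogonality penalty: squared Frobenius norm of the cross-correlation
    # between shared and view-specific codes of each view.
    orth_loss = sum((s.T @ v).pow(2).sum()
                    for s, v in zip(shared, specific))
    return ml_loss + lambda_adv * adv_loss + lambda_orth * orth_loss

# Example: two views with 20-/30-dim features, 5 labels, batch of 8.
model = SIMMSketch([20, 30], shared_dim=16, specific_dim=8, num_labels=5)
views = [torch.randn(8, 20), torch.randn(8, 30)]
targets = torch.randint(0, 2, (8, 5)).float()
simm_losses(model, views, targets).backward()
```

In a full adversarial setup the discriminator would additionally be trained with its own view-classification objective in an alternating fashion; that step is omitted here for brevity.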
Cite
Text
Wu et al. "Multi-View Multi-Label Learning with View-Specific Information Extraction." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/539
Markdown
[Wu et al. "Multi-View Multi-Label Learning with View-Specific Information Extraction." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/wu2019ijcai-multi/) doi:10.24963/IJCAI.2019/539
BibTeX
@inproceedings{wu2019ijcai-multi,
title = {{Multi-View Multi-Label Learning with View-Specific Information Extraction}},
author = {Wu, Xuan and Chen, Qing-Guo and Hu, Yao and Wang, Dengbao and Chang, Xiaodong and Wang, Xiaobo and Zhang, Min-Ling},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {3884-3890},
doi = {10.24963/IJCAI.2019/539},
url = {https://mlanthology.org/ijcai/2019/wu2019ijcai-multi/}
}