Deep Multi-View Concept Learning

Abstract

Multi-view data is common in real-world applications, where different views describe distinct perspectives of the same objects. To better summarize the consistent and complementary information in multi-view data, researchers have proposed various multi-view representation learning algorithms, typically based on factorization models. However, most previous methods focused on shallow factorization models, which cannot capture complex hierarchical information. Although a deep multi-view factorization model has been proposed recently, it fails to explicitly discern consistent and complementary information in multi-view data and does not consider conceptual labels. In this work we present a semi-supervised deep multi-view factorization method, named Deep Multi-view Concept Learning (DMCL). DMCL performs nonnegative factorization of the data hierarchically, capturing semantic structures and explicitly modeling consistent and complementary information in multi-view data at the highest abstraction level. We develop a block coordinate descent algorithm for DMCL. Experiments conducted on image and document datasets show that DMCL performs well and outperforms baseline methods.
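To make the "hierarchical nonnegative factorization" idea concrete, here is a minimal sketch of a two-layer nonnegative factorization trained greedily, layer by layer, with standard multiplicative updates. This is only a generic illustration, not the DMCL algorithm from the paper: DMCL additionally incorporates label supervision and explicit consistent/complementary terms across views, and its block coordinate descent optimizes all layers jointly. All function and variable names below are illustrative.

```python
# Layer-wise nonnegative factorization sketch: X ~ W1 @ W2 @ H2,
# trained greedily (first X ~ W1 H1, then H1 ~ W2 H2).
# NOT the DMCL objective; just the hierarchical-NMF backbone it builds on.
import numpy as np

def nmf(X, rank, iters=300, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: X ~ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        # Each update fixes one block and improves the other,
        # in the spirit of block coordinate descent.
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Synthetic nonnegative data with low-rank structure (a single "view").
rng = np.random.default_rng(42)
X = rng.random((20, 2)) @ rng.random((2, 30))

# Layer 1: X ~ W1 H1; Layer 2: H1 ~ W2 H2, giving X ~ W1 W2 H2.
W1, H1 = nmf(X, rank=4)
W2, H2 = nmf(H1, rank=2)
rel_err = np.linalg.norm(X - W1 @ W2 @ H2) / np.linalg.norm(X)
```

The deeper representation H2 plays the role of the highest abstraction level; in a multi-view setting one such hierarchy is learned per view, with the top layers tied together by consistency and complementarity constraints.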

Cite

Text

Xu et al. "Deep Multi-View Concept Learning." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/402

Markdown

[Xu et al. "Deep Multi-View Concept Learning." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/xu2018ijcai-deep/) doi:10.24963/IJCAI.2018/402

BibTeX

@inproceedings{xu2018ijcai-deep,
  title     = {{Deep Multi-View Concept Learning}},
  author    = {Xu, Cai and Guan, Ziyu and Zhao, Wei and Niu, Yunfei and Wang, Quan and Wang, Zhiheng},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {2898--2904},
  doi       = {10.24963/IJCAI.2018/402},
  url       = {https://mlanthology.org/ijcai/2018/xu2018ijcai-deep/}
}