A Graph-based Framework for Multi-Task Multi-View Learning
Abstract
Many real-world problems exhibit dual-heterogeneity. A single learning task might have features in multiple views (i.e., feature heterogeneity); multiple learning tasks might be related to each other through one or more shared views (i.e., task heterogeneity). Existing multi-task learning or multi-view learning algorithms only capture one type of heterogeneity. In this paper, we introduce Multi-Task Multi-View (M^2TV) learning for such complicated learning problems with both feature heterogeneity and task heterogeneity. We propose a graph-based framework (GraM^2) to take full advantage of the dual-heterogeneous nature. Our framework has a natural connection to Reproducing Kernel Hilbert Space (RKHS). Furthermore, we propose an iterative algorithm (IteM^2) for the GraM^2 framework, and analyze its optimality, convergence, and time complexity. Experimental results on various real data sets demonstrate its effectiveness.
Cite
Text

He and Lawrence. "A Graph-based Framework for Multi-Task Multi-View Learning." International Conference on Machine Learning, 2011.

Markdown

[He and Lawrence. "A Graph-based Framework for Multi-Task Multi-View Learning." International Conference on Machine Learning, 2011.](https://mlanthology.org/icml/2011/he2011icml-graphbased/)

BibTeX
@inproceedings{he2011icml-graphbased,
title = {{A Graph-based Framework for Multi-Task Multi-View Learning}},
author = {He, Jingrui and Lawrence, Rick},
booktitle = {International Conference on Machine Learning},
year = {2011},
pages = {25--32},
url = {https://mlanthology.org/icml/2011/he2011icml-graphbased/}
}