The Multi-Task Learning View of Multimodal Data
Abstract
We study the problem of learning from multiple views using kernel methods in a supervised setting. We approach this problem from a multi-task learning point of view and illustrate how multi-task kernels can capture the multimodal structure of the data. Our analysis shows that the multi-task perspective offers the flexibility to design more efficient multiple-source learning algorithms, and hence the ability to exploit multiple descriptions of the data. In particular, we formulate the multimodal learning framework using vector-valued reproducing kernel Hilbert spaces, and we derive specific multi-task kernels that can operate over multiple modalities. Finally, we analyze the vector-valued regularized least squares algorithm in this setting and demonstrate its potential in a series of experiments on a real-world multimodal data set.
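To make the abstract's main ingredients concrete, below is a minimal sketch of vector-valued regularized least squares with a separable (operator-valued) multi-task kernel K(x, x') = k(x, x') B, where the matrix B couples the T tasks/views. This is an illustrative assumption for exposition: the scalar Gaussian kernel, the choice of B, and the function names (`fit_vv_rls`, `predict_vv_rls`) are not taken from the paper, which derives its own multimodal kernels.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Scalar Gaussian kernel between the rows of X1 and X2."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def fit_vv_rls(X, Y, B, lam=0.1, gamma=1.0):
    """Vector-valued regularized least squares with the separable kernel
    K(x, x') = k(x, x') * B.  X is n x d, Y is n x T, B is a T x T
    positive semidefinite task/view coupling matrix.
    Returns the n x T coefficient matrix C (row i is c_i)."""
    n, T = Y.shape
    Kx = rbf_kernel(X, X, gamma)                   # n x n scalar Gram matrix
    G = np.kron(Kx, B) + lam * n * np.eye(n * T)   # block Gram + regularizer
    alpha = np.linalg.solve(G, Y.reshape(-1))      # stack y_1, ..., y_n
    return alpha.reshape(n, T)

def predict_vv_rls(X_train, C, B, X_test, gamma=1.0):
    """Predict f(x) = sum_i k(x, x_i) B c_i for each test point (m x T)."""
    Kt = rbf_kernel(X_test, X_train, gamma)        # m x n
    return Kt @ C @ B.T

# Toy usage: two views/tasks coupled by an off-diagonal term in B.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    Y = np.stack([X @ rng.normal(size=5), X @ rng.normal(size=5)], axis=1)
    B = np.array([[1.0, 0.5], [0.5, 1.0]])         # illustrative coupling
    C = fit_vv_rls(X, Y, B, lam=0.05)
    print(predict_vv_rls(X, C, B, X[:3]).shape)    # (3, 2)
```

With B equal to the identity, the tasks decouple into independent scalar kernel ridge regressions; off-diagonal entries of B let information flow between the modalities, which is the multi-task effect the paper exploits.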
Cite
Text
Kadri et al. "The Multi-Task Learning View of Multimodal Data." Proceedings of the 5th Asian Conference on Machine Learning, 2013.
Markdown
[Kadri et al. "The Multi-Task Learning View of Multimodal Data." Proceedings of the 5th Asian Conference on Machine Learning, 2013.](https://mlanthology.org/acml/2013/kadri2013acml-multitask/)
BibTeX
@inproceedings{kadri2013acml-multitask,
title = {{The Multi-Task Learning View of Multimodal Data}},
author = {Kadri, Hachem and Ayache, Stephane and Capponi, Cécile and Koço, Sokol and Dupé, François-Xavier and Morvant, Emilie},
booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
year = {2013},
pages = {261--276},
volume = {29},
url = {https://mlanthology.org/acml/2013/kadri2013acml-multitask/}
}