One-Pass Multi-View Learning
Abstract
Multi-view learning is an important learning paradigm in which data come from multiple channels or appear in multiple modalities. Many approaches have been developed in this field and have achieved better performance than single-view ones. Those approaches, however, typically work on small datasets with low dimensionality, owing to their high computational cost. In recent years, many applications have come to involve large-scale multi-view data, e.g., hundreds of hours of video (including visual, audio, and text views) are uploaded to YouTube every minute, posing a big challenge to previous multi-view algorithms. This work concentrates on large-scale multi-view learning for classification and proposes the One-Pass Multi-View (OPMV) framework, which goes through the training data only once without storing the entire training set. The approach jointly optimizes composite objective functions for the different views under linear consistency constraints. We verify, both theoretically and empirically, the effectiveness of the proposed algorithm.
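To make the one-pass idea concrete, here is a minimal illustrative sketch (not the paper's exact OPMV algorithm) of streaming two-view learning: each example is seen once, per-view linear models are updated with a hinge-loss subgradient step, and a squared penalty on the gap between the two views' predictions plays the role of a consistency constraint. All names, the learning rate `lr`, and the penalty weight `lam` are assumptions for illustration.

```python
import numpy as np

def one_pass_two_view(stream, dim1, dim2, lr=0.1, lam=0.5):
    """Illustrative single-pass update for two views (hypothetical sketch,
    not the paper's exact OPMV method).

    `stream` yields ((x1, x2), y): two feature views and a label in {-1, +1}.
    Per-view linear models w1, w2 are updated once per example (one pass),
    with a squared penalty encouraging consistent predictions across views.
    """
    w1 = np.zeros(dim1)
    w2 = np.zeros(dim2)
    for (x1, x2), y in stream:
        p1, p2 = w1 @ x1, w2 @ x2
        # hinge-loss subgradient for each view's prediction
        g1 = -y * x1 if y * p1 < 1 else np.zeros(dim1)
        g2 = -y * x2 if y * p2 < 1 else np.zeros(dim2)
        # consistency term: gradient of (lam/2) * (p1 - p2)^2 w.r.t. w1, w2
        g1 = g1 + lam * (p1 - p2) * x1
        g2 = g2 + lam * (p2 - p1) * x2
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2
```

Because each example is processed once and then discarded, memory usage stays constant in the number of examples, which is the property that makes one-pass approaches attractive for large-scale multi-view data.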
Cite

Zhu et al. "One-Pass Multi-View Learning." Proceedings of The 7th Asian Conference on Machine Learning, 2015. https://mlanthology.org/acml/2015/zhu2015acml-onepass/

BibTeX:
@inproceedings{zhu2015acml-onepass,
title = {{One-Pass Multi-View Learning}},
author = {Zhu, Yue and Gao, Wei and Zhou, Zhi-Hua},
booktitle = {Proceedings of The 7th Asian Conference on Machine Learning},
year = {2015},
pages = {407-422},
volume = {45},
url = {https://mlanthology.org/acml/2015/zhu2015acml-onepass/}
}