On the Computational Benefit of Multimodal Learning
Abstract
Human perception inherently operates in a multimodal manner. Similarly, as machines interpret the empirical world, their learning processes ought to be multimodal. The recent, remarkable successes in empirical multimodal learning underscore the significance of understanding this paradigm. Yet, a solid theoretical foundation for multimodal learning has eluded the field for some time. While a recent study by Lu has shown the superior sample complexity of multimodal learning compared to its unimodal counterpart, another basic question remains: does multimodal learning also offer computational advantages over unimodal learning? This work initiates a study on the computational benefit of multimodal learning. We demonstrate that, under certain conditions, multimodal learning can outpace unimodal learning exponentially in terms of computation. Specifically, we present a learning task that is NP-hard for unimodal learning but is solvable in polynomial time by a multimodal algorithm. Our construction is based on a novel modification to the intersection of two half-spaces problem.
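As a quick illustration of the concept class named in the abstract (this sketch is not part of the paper's construction): a hypothesis in the "intersection of two half-spaces" class labels a point positive exactly when it satisfies both linear constraints. The function names and the example half-spaces below are hypothetical, chosen only to make the definition concrete.

```python
# Illustrative sketch, not the paper's construction: a hypothesis in the
# "intersection of two half-spaces" class labels a point x positive iff
# x satisfies BOTH linear constraints w1.x >= b1 and w2.x >= b2.
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def in_intersection(x, w1, b1, w2, b2):
    # Positive label only when the point lies in both half-spaces.
    return dot(w1, x) >= b1 and dot(w2, x) >= b2

# Hypothetical example in R^2: the half-spaces x >= 0 and y >= 0,
# whose intersection is the closed first quadrant.
w1, b1 = (1.0, 0.0), 0.0
w2, b2 = (0.0, 1.0), 0.0
print(in_intersection((1.0, 2.0), w1, b1, w2, b2))   # True
print(in_intersection((1.0, -2.0), w1, b1, w2, b2))  # False
```

Evaluating a fixed intersection is trivial; the hardness the abstract refers to lies in *learning* such an intersection from labeled examples, which is where the unimodal/multimodal separation is established.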
Cite
Text
Lu. "On the Computational Benefit of Multimodal Learning." Proceedings of The 35th International Conference on Algorithmic Learning Theory, 2024.

Markdown
[Lu. "On the Computational Benefit of Multimodal Learning." Proceedings of The 35th International Conference on Algorithmic Learning Theory, 2024.](https://mlanthology.org/alt/2024/lu2024alt-computational/)

BibTeX
@inproceedings{lu2024alt-computational,
title = {{On the Computational Benefit of Multimodal Learning}},
author = {Lu, Zhou},
booktitle = {Proceedings of The 35th International Conference on Algorithmic Learning Theory},
year = {2024},
pages = {810--821},
volume = {237},
url = {https://mlanthology.org/alt/2024/lu2024alt-computational/}
}