On Uni-Modal Feature Learning in Supervised Multi-Modal Learning

Abstract

We abstract the features (i.e., learned representations) of multi-modal data into 1) uni-modal features, which can be learned from uni-modal training, and 2) paired features, which can only be learned from cross-modal interactions. Multi-modal models are expected to benefit from cross-modal interactions while still ensuring sufficient uni-modal feature learning. However, recent supervised multi-modal late-fusion training approaches still suffer from insufficient learning of uni-modal features on each modality. We prove that this insufficiency hurts the model's generalization ability. To this end, we propose choosing a targeted late-fusion learning method for a given supervised multi-modal task, selecting between Uni-Modal Ensemble (UME) and the proposed Uni-Modal Teacher (UMT) according to the distribution of uni-modal and paired features. We demonstrate that, under this simple guiding strategy, we achieve results comparable to other, more complex late-fusion or intermediate-fusion methods on various multi-modal datasets, including VGG-Sound, Kinetics-400, UCF101, and ModelNet40.
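For readers unfamiliar with the two recipes named above, the sketch below illustrates late fusion, UME, and UMT in PyTorch. It is a hedged illustration based only on the abstract, not the authors' released code: the encoder/head shapes, the logit averaging in ume_logits, the MSE feature-distillation term, and the distill_weight parameter are all assumptions made for illustration.

    # Minimal sketch (assumed shapes and losses, not the authors' implementation):
    # UME averages predictions of independently trained uni-modal models;
    # UMT adds a feature-distillation loss from frozen uni-modal teachers to the
    # corresponding branches of a jointly trained late-fusion model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class LateFusionModel(nn.Module):
        """Late fusion: per-modality encoders, concatenated features, one classifier."""

        def __init__(self, enc_a: nn.Module, enc_b: nn.Module, feat_dim: int, num_classes: int):
            super().__init__()
            self.enc_a, self.enc_b = enc_a, enc_b
            self.head = nn.Linear(2 * feat_dim, num_classes)

        def forward(self, x_a, x_b):
            f_a, f_b = self.enc_a(x_a), self.enc_b(x_b)
            return self.head(torch.cat([f_a, f_b], dim=-1)), (f_a, f_b)


    def ume_logits(uni_model_a, uni_model_b, x_a, x_b):
        """Uni-Modal Ensemble: average the logits of separately trained uni-modal models."""
        return (uni_model_a(x_a) + uni_model_b(x_b)) / 2


    def umt_loss(model, teacher_a, teacher_b, x_a, x_b, y, distill_weight=1.0):
        """Uni-Modal Teacher: task loss plus distillation of each fusion branch toward
        its frozen uni-modal teacher's features (hypothetical weighting; see the paper)."""
        logits, (f_a, f_b) = model(x_a, x_b)
        task_loss = F.cross_entropy(logits, y)
        with torch.no_grad():  # teachers are pre-trained and kept frozen
            t_a, t_b = teacher_a(x_a), teacher_b(x_b)
        distill = F.mse_loss(f_a, t_a) + F.mse_loss(f_b, t_b)
        return task_loss + distill_weight * distill

In this sketch, the choice between the two recipes is made per task, following the abstract's guiding idea: ensemble when uni-modal features dominate, distill when paired features also matter.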

Cite

Text

Du et al. "On Uni-Modal Feature Learning in Supervised Multi-Modal Learning." International Conference on Machine Learning, 2023.

Markdown

[Du et al. "On Uni-Modal Feature Learning in Supervised Multi-Modal Learning." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/du2023icml-unimodal/)

BibTeX

@inproceedings{du2023icml-unimodal,
  title     = {{On Uni-Modal Feature Learning in Supervised Multi-Modal Learning}},
  author    = {Du, Chenzhuang and Teng, Jiaye and Li, Tingle and Liu, Yichen and Yuan, Tianyuan and Wang, Yue and Yuan, Yang and Zhao, Hang},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {8632--8656},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/du2023icml-unimodal/}
}