Multi-Source Domain Adaptation for Visual Sentiment Classification
Abstract
Existing domain adaptation methods for visual sentiment classification are typically investigated under the single-source scenario, where knowledge learned from a source domain with sufficient labeled data is transferred to a target domain with loosely labeled or unlabeled data. However, in practice, data from a single source domain usually have a limited volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, it learns to find a unified sentiment latent space in which data from both the source and target domains share a similar distribution. This is achieved via cycle-consistent adversarial learning in an end-to-end manner. Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms state-of-the-art MDA approaches for visual sentiment classification.
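The PyTorch sketch below is not the authors' MSGAN implementation; it is a hedged illustration, under stated assumptions, of the ingredients the abstract names: an encoder that maps images from every source domain and the target domain into one shared latent space, a decoder whose reconstruction term stands in for cycle consistency, a domain discriminator trained adversarially so latent codes become domain-indistinguishable, and a sentiment classifier trained on the labeled source samples. All module names, layer sizes, loss weights, and the particular adversarial formulation are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps a (flattened) image to the shared sentiment latent space."""
    def __init__(self, in_dim=3 * 32 * 32, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, x):
        return self.net(x.flatten(1))


class Decoder(nn.Module):
    """Reconstructs the input from a latent code (used for the cycle-consistency term)."""
    def __init__(self, latent_dim=128, out_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, z):
        return self.net(z)


class DomainDiscriminator(nn.Module):
    """Predicts which domain a latent code came from; the encoder learns to fool it."""
    def __init__(self, latent_dim=128, num_domains=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, num_domains))

    def forward(self, z):
        return self.net(z)


class SentimentClassifier(nn.Module):
    """Predicts sentiment polarity from the shared latent representation."""
    def __init__(self, latent_dim=128, num_classes=2):
        super().__init__()
        self.fc = nn.Linear(latent_dim, num_classes)

    def forward(self, z):
        return self.fc(z)


def adaptation_losses(batches, labels, domain_ids, enc, dec, disc, clf,
                      lambda_cyc=1.0, lambda_adv=0.1):
    """Illustrative losses for one step: sentiment classification on labeled source
    samples, reconstruction (cycle consistency) on every domain, and adversarial
    alignment that encourages domain-indistinguishable latent codes.

    batches    : list of image tensors, one per domain (sources first, target last)
    labels     : list of sentiment label tensors (long), None for the unlabeled target
    domain_ids : list of long tensors giving each sample's domain index
    """
    x = torch.cat(batches, dim=0)
    d = torch.cat(domain_ids, dim=0)
    z = enc(x)

    # Sentiment loss on labeled source samples only (the target domain is unlabeled).
    src_mask = torch.cat([
        torch.ones(len(b)) if y is not None else torch.zeros(len(b))
        for b, y in zip(batches, labels)
    ]).bool()
    y_src = torch.cat([y for y in labels if y is not None])
    cls_loss = F.cross_entropy(clf(z[src_mask]), y_src)

    # Cycle consistency: latent codes must retain enough information to rebuild inputs.
    cyc_loss = F.l1_loss(dec(z), x.flatten(1))

    # The discriminator is trained on detached codes to recognise the originating domain.
    disc_loss = F.cross_entropy(disc(z.detach()), d)

    # Encoder-side adversarial term: push the discriminator toward uniform (confused)
    # domain predictions; a gradient-reversal layer would be an alternative.
    adv_loss = -F.log_softmax(disc(z), dim=1).mean()

    gen_loss = cls_loss + lambda_cyc * cyc_loss + lambda_adv * adv_loss
    return gen_loss, disc_loss
```

In use, `gen_loss` would update the encoder, decoder, and classifier while `disc_loss` updates the discriminator, alternating as in standard adversarial training; the actual MSGAN objective and architecture are described in the paper itself.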
Cite
Text
Lin et al. "Multi-Source Domain Adaptation for Visual Sentiment Classification." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I03.5651
Markdown
[Lin et al. "Multi-Source Domain Adaptation for Visual Sentiment Classification." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/lin2020aaai-multi/) doi:10.1609/AAAI.V34I03.5651
BibTeX
@inproceedings{lin2020aaai-multi,
title = {{Multi-Source Domain Adaptation for Visual Sentiment Classification}},
author = {Lin, Chuang and Zhao, Sicheng and Meng, Lei and Chua, Tat-Seng},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {2661--2668},
doi = {10.1609/AAAI.V34I03.5651},
url = {https://mlanthology.org/aaai/2020/lin2020aaai-multi/}
}