Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition
Abstract
Multimodality has recently been exploited to overcome the challenges of emotion recognition. In this paper, we present a study of decision-level fusion of electroencephalogram (EEG) features and musical features extracted from musical stimuli for recognizing time-varying binary classes of arousal and valence. Our empirical results demonstrate that the EEG modality suffered from the instability of EEG signals, yet fusing it with the music modality alleviated the issue and enhanced emotion recognition performance.
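For readers unfamiliar with decision-level fusion, the sketch below illustrates the general idea under assumptions: two independent classifiers, one per modality, whose predicted class probabilities are averaged before thresholding. The feature matrices (X_eeg, X_music), labels (y), and the choice of SVM classifiers are illustrative placeholders, not the authors' implementation.

# Minimal decision-level fusion sketch (hypothetical inputs; not the paper's code)
import numpy as np
from sklearn.svm import SVC

def fuse_decisions(X_eeg, X_music, y, X_eeg_test, X_music_test):
    # Train one classifier per modality on the same labeled segments.
    eeg_clf = SVC(probability=True).fit(X_eeg, y)
    music_clf = SVC(probability=True).fit(X_music, y)
    # Average the per-segment class probabilities from both modalities,
    # then threshold to obtain the binary arousal or valence label.
    p = (eeg_clf.predict_proba(X_eeg_test)[:, 1] +
         music_clf.predict_proba(X_music_test)[:, 1]) / 2.0
    return (p >= 0.5).astype(int)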
Cite
Text
Thammasan et al. "Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.11112
Markdown
[Thammasan et al. "Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/thammasan2017aaai-multimodal/) doi:10.1609/AAAI.V31I1.11112
BibTeX
@inproceedings{thammasan2017aaai-multimodal,
title = {{Multimodal Fusion of EEG and Musical Features in Music-Emotion Recognition}},
author = {Thammasan, Nattapong and Fukui, Ken-ichi and Numao, Masayuki},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
  pages = {4991--4992},
doi = {10.1609/AAAI.V31I1.11112},
url = {https://mlanthology.org/aaai/2017/thammasan2017aaai-multimodal/}
}