Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion
Abstract
We present our preliminary work to determine whether patients' vocal acoustic, linguistic, and facial patterns can predict clinical ratings of depression severity, namely the Patient Health Questionnaire depression scale (PHQ-8). We propose a multi-modal fusion model that combines three modalities: audio, video, and text features. Trained on the AVEC2017 dataset, our model outperforms each single-modality prediction model and surpasses the dataset baseline by a clear margin.
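The abstract does not spell out the fusion strategy; the sketch below illustrates one common approach consistent with the description, namely feature-level (early) fusion, where engineered per-modality descriptors are concatenated per session and passed to a single regressor that predicts the PHQ-8 score. The feature dimensions, the random-forest regressor, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of feature-level multi-modal fusion for PHQ-8 regression.
# All dimensions, the regressor choice, and the synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sessions = 200                         # hypothetical number of interview sessions
d_audio, d_video, d_text = 64, 32, 16    # assumed per-modality feature sizes

# Stand-ins for engineered per-modality features (one row per session).
X_audio = rng.normal(size=(n_sessions, d_audio))
X_video = rng.normal(size=(n_sessions, d_video))
X_text = rng.normal(size=(n_sessions, d_text))
y_phq8 = rng.integers(0, 25, size=n_sessions).astype(float)  # PHQ-8 scores lie in [0, 24]

# Early (feature-level) fusion: concatenate the modality features for each session.
X_fused = np.hstack([X_audio, X_video, X_text])

X_train, X_test, y_train, y_test = train_test_split(
    X_fused, y_phq8, test_size=0.25, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = np.clip(model.predict(X_test), 0, 24)  # keep predictions in the PHQ-8 range

print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```

A late-fusion variant would instead train one regressor per modality and combine their predictions (e.g., by averaging); the paper's comparison of single-modality models against the fused model suggests such per-modality predictors as a natural baseline.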
Cite
Text
Samareh et al. "Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12152
Markdown
[Samareh et al. "Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/samareh2018aaai-predicting/) doi:10.1609/AAAI.V32I1.12152
BibTeX
@inproceedings{samareh2018aaai-predicting,
title = {{Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion}},
author = {Samareh, Aven and Jin, Yan and Wang, Zhangyang and Chang, Xiangyu and Huang, Shuai},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {8147--8148},
doi = {10.1609/AAAI.V32I1.12152},
url = {https://mlanthology.org/aaai/2018/samareh2018aaai-predicting/}
}