Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms

Abstract

Graph neural networks (GNNs) are widely used in regression and classification problems applied to text, in areas such as sentiment analysis and medical decision-making. We propose a novel form of node attribute within a GNN-based model that captures node-specific embeddings for every word in the vocabulary. This provides a global representation at each node, coupled with node-level updates according to associations among words in a transcript. We demonstrate the efficacy of the approach by improving the accuracy of measuring the severity of major depressive disorder (MDD). Prior research has sought to predict depression levels diagnostically from patient data using several modalities, including audio, video, and text. On the DAIC-WOZ benchmark, our method outperforms state-of-the-art methods by a substantial margin, including those using multiple modalities. We also evaluate the model on a Twitter sentiment dataset and show that it outperforms a general GNN model by leveraging our novel 2-D node attributes. These results demonstrate the generality of the proposed method.
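To make the 2-D node attribute idea concrete, below is a minimal sketch of one message-passing layer in which every node carries a (vocab_size, embed_dim) matrix, i.e. a node-specific embedding for every word in the vocabulary, instead of the usual 1-D feature vector. This is a hedged illustration assuming a PyTorch implementation: the name SchemaGNNLayer, the shapes, and the concatenate-then-project update rule are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class SchemaGNNLayer(nn.Module):
    """One message-passing layer over 2-D node attributes.

    Each node holds a (vocab_size, embed_dim) matrix -- a node-specific
    embedding for every word in the vocabulary. Neighbor tables are
    aggregated along transcript-derived word-association edges, then each
    node's table is updated from its own and aggregated entries.
    (Illustrative sketch; not the authors' exact model.)
    """

    def __init__(self, embed_dim: int):
        super().__init__()
        self.update = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, vocab_size, embed_dim) -- one 2-D attribute per node
        # adj: (num_nodes, num_nodes) row-normalized adjacency built from
        #      word associations in the transcript
        msg = torch.einsum("ij,jvd->ivd", adj, x)  # aggregate neighbor tables
        out = self.update(torch.cat([x, msg], dim=-1))
        return torch.relu(out)

# Toy usage with assumed sizes: 5 nodes, a 100-word vocabulary, 16-dim embeddings.
num_nodes, vocab_size, embed_dim = 5, 100, 16
x = torch.randn(num_nodes, vocab_size, embed_dim)
adj = torch.softmax(torch.randn(num_nodes, num_nodes), dim=-1)
layer = SchemaGNNLayer(embed_dim)
print(layer(x, adj).shape)  # torch.Size([5, 100, 16])

The key design point the sketch tries to convey is that the attribute updated at each node is a full vocabulary-indexed table (hence "2-D"), giving each node a global view of the vocabulary while messages still flow only along transcript-level word associations.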

Cite

Text

Hong et al. "Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms." International Conference on Learning Representations, 2022.

Markdown

[Hong et al. "Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/hong2022iclr-using/)

BibTeX

@inproceedings{hong2022iclr-using,
  title     = {{Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms}},
  author    = {Hong, Simin and Cohn, Anthony and Hogg, David Crossland},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/hong2022iclr-using/}
}