Learning Conceptual-Contextual Embeddings for Medical Text

Abstract

External knowledge is often useful for natural language understanding tasks. We introduce a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, our approach encodes a knowledge graph into a context model. CC embeddings can be easily reused for a wide range of tasks in a similar fashion to pre-trained language models. Our model effectively encodes the large Unified Medical Language System (UMLS) knowledge base by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks show that our model substantially improves performance on supervised medical NLP tasks.
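
The abstract's claim that CC embeddings are "reused in a similar fashion to pre-trained language models" suggests a simple integration pattern: a frozen pre-trained context encoder produces per-token vectors that are concatenated with ordinary word embeddings before a task-specific layer. The following is a minimal PyTorch sketch of that pattern only, not the authors' implementation; the CC encoder is stood in for by a bidirectional LSTM, and all class names and dimensions (DownstreamTagger, word_dim, cc_dim) are hypothetical.

import torch
import torch.nn as nn

class DownstreamTagger(nn.Module):
    def __init__(self, vocab_size, word_dim=100, cc_dim=200, num_labels=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Stand-in for a pre-trained contextual knowledge encoder; frozen
        # so the knowledge-derived representations are reused, not re-trained.
        self.cc_encoder = nn.LSTM(word_dim, cc_dim // 2, bidirectional=True,
                                  batch_first=True)
        for p in self.cc_encoder.parameters():
            p.requires_grad = False
        # Task head sees word and contextual-knowledge features side by side.
        self.classifier = nn.Linear(word_dim + cc_dim, num_labels)

    def forward(self, token_ids):
        w = self.word_emb(token_ids)      # (batch, seq, word_dim)
        cc, _ = self.cc_encoder(w)        # (batch, seq, cc_dim)
        return self.classifier(torch.cat([w, cc], dim=-1))

tagger = DownstreamTagger(vocab_size=30000)
logits = tagger(torch.randint(0, 30000, (2, 16)))  # toy batch of 2 sequences
print(logits.shape)  # torch.Size([2, 16, 5])

Freezing the encoder mirrors the feature-based reuse of pre-trained representations described in the abstract; fine-tuning it end to end would be the alternative design choice.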

Cite

Text

Zhang et al. "Learning Conceptual-Contextual Embeddings for Medical Text." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/aaai.v34i05.6504

Markdown

[Zhang et al. "Learning Conceptual-Contextual Embeddings for Medical Text." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/zhang2020aaai-learning-a/) doi:10.1609/aaai.v34i05.6504

BibTeX

@inproceedings{zhang2020aaai-learning-a,
  title     = {{Learning Conceptual-Contextual Embeddings for Medical Text}},
  author    = {Zhang, Xiao and Dou, Dejing and Wu, Ji},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {9579--9586},
  doi       = {10.1609/aaai.v34i05.6504},
  url       = {https://mlanthology.org/aaai/2020/zhang2020aaai-learning-a/}
}