A Word Embedding and a Josa Vector for Korean Unsupervised Semantic Role Induction
Abstract
We propose an unsupervised semantic role labeling method for Korean, an agglutinative language whose complex suffix structures carry much of the syntactic information. First, we construct an argument embedding; then we develop an indicator vector for the suffix, such as a Josa. We build an argument tuple by concatenating these two vectors, and induce roles by clustering the argument tuples. The method achieves up to a 70.16% F1-score and 75.85% accuracy.
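The pipeline described in the abstract can be sketched in a few lines: represent each argument as its word embedding concatenated with a one-hot Josa indicator vector, then cluster the resulting tuples so that each cluster corresponds to an induced role. The Josa inventory, the toy 2-d embeddings, the number of clusters, and plain k-means are all illustrative assumptions here, not the authors' actual setup.

```python
# Minimal sketch of the induction pipeline, under assumed toy data.
# JOSAS, the embeddings, k, and the use of k-means are hypothetical choices.

JOSAS = ["i/ga", "eul/reul", "e", "eseo"]  # assumed Josa inventory

def josa_vector(josa):
    """One-hot indicator vector for the Josa attached to an argument."""
    return [1.0 if j == josa else 0.0 for j in JOSAS]

def argument_tuple(embedding, josa):
    """Concatenate the argument's word embedding with its Josa vector."""
    return embedding + josa_vector(josa)

# Toy 2-d "word embeddings" for four argument head words.
arguments = [
    ([0.9, 0.1], "i/ga"),      # subject-like arguments marked with i/ga
    ([0.8, 0.2], "i/ga"),
    ([0.1, 0.9], "eul/reul"),  # object-like arguments marked with eul/reul
    ([0.2, 0.8], "eul/reul"),
]
tuples = [argument_tuple(emb, josa) for emb, josa in arguments]

def kmeans(points, k, iters=20):
    """Plain k-means; the cluster id of each tuple is its induced role."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

roles = kmeans(tuples, k=2)
```

On this toy data the two i/ga arguments end up in one cluster and the two eul/reul arguments in the other, which is the kind of role separation the clustering step is meant to produce.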
Cite
Text
Nam and Kim. "A Word Embedding and a Josa Vector for Korean Unsupervised Semantic Role Induction." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.9923
Markdown
[Nam and Kim. "A Word Embedding and a Josa Vector for Korean Unsupervised Semantic Role Induction." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/nam2016aaai-word/) doi:10.1609/AAAI.V30I1.9923
BibTeX
@inproceedings{nam2016aaai-word,
title = {{A Word Embedding and a Josa Vector for Korean Unsupervised Semantic Role Induction}},
author = {Nam, Kyeong-Min and Kim, Yu-Seop},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {4240-4241},
doi = {10.1609/AAAI.V30I1.9923},
url = {https://mlanthology.org/aaai/2016/nam2016aaai-word/}
}