Knowledge-Enhanced Hierarchical Attention for Community Question Answering with Multi-Task and Adaptive Learning
Abstract
In this paper, we propose a Knowledge-enhanced Hierarchical Attention model for community question answering with Multi-task learning and Adaptive learning (KHAMA). First, we propose a hierarchical attention network to fully fuse knowledge from the input documents and a knowledge base (KB) by exploiting the semantic compositionality of the input sequences. The external factual knowledge helps recognize background knowledge (entity mentions and their relationships) and filter out noisy information from long documents with sophisticated syntactic and semantic structures. In addition, we build multiple CQA models with adaptive boosting and then combine these models to learn a more effective and robust CQA system. Furthermore, KHAMA is a multi-task learning model. It regards CQA as the primary task and question categorization as the auxiliary task, aiming at learning a category-aware document encoder and enhancing the quality of identifying essential information from long questions. Extensive experiments on two benchmarks demonstrate that KHAMA achieves substantial improvements over the compared methods.
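The abstract outlines three components: a knowledge-enhanced hierarchical attention encoder, an AdaBoost-style ensemble, and a multi-task objective. As a rough illustration only, the PyTorch sketch below shows a word-then-sentence attention encoder that fuses per-token KB entity embeddings by concatenation, feeding two heads for the primary (answer selection) and auxiliary (question categorization) tasks. All dimensions, the concatenation fusion, the 10-way category head, and the loss weight `lam` are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionEncoder(nn.Module):
    """Word-level then sentence-level attention over a document.

    KB entity embeddings are fused by concatenation at the word level;
    zeros can stand in for tokens without a KB match. This fusion scheme
    is an assumption, not necessarily the paper's.
    """

    def __init__(self, word_dim=100, kb_dim=50, hidden=128):
        super().__init__()
        self.word_rnn = nn.GRU(word_dim + kb_dim, hidden,
                               batch_first=True, bidirectional=True)
        self.word_attn = nn.Linear(2 * hidden, 1)
        self.sent_rnn = nn.GRU(2 * hidden, hidden,
                               batch_first=True, bidirectional=True)
        self.sent_attn = nn.Linear(2 * hidden, 1)

    def forward(self, words, kb):
        # words: (batch, n_sents, n_words, word_dim)
        # kb:    (batch, n_sents, n_words, kb_dim)
        b, s, w, _ = words.shape
        x = torch.cat([words, kb], dim=-1).view(b * s, w, -1)
        h, _ = self.word_rnn(x)                        # (b*s, w, 2*hidden)
        a = torch.softmax(self.word_attn(h), dim=1)    # word attention weights
        sents = (a * h).sum(dim=1).view(b, s, -1)      # sentence vectors
        h2, _ = self.sent_rnn(sents)
        a2 = torch.softmax(self.sent_attn(h2), dim=1)  # sentence attention weights
        return (a2 * h2).sum(dim=1)                    # document vector

# Shared encoder feeding two task heads (multi-task learning).
encoder = HierarchicalAttentionEncoder()
cqa_head = nn.Linear(256, 1)    # primary task: answer relevance score
cat_head = nn.Linear(256, 10)   # auxiliary task: question category (assumed 10-way)

words = torch.randn(4, 5, 20, 100)   # toy batch: 4 docs x 5 sents x 20 words
kb = torch.randn(4, 5, 20, 50)
doc = encoder(words, kb)
score, cat_logits = cqa_head(doc), cat_head(doc)
# Joint objective: L = L_cqa + lam * L_category, with lam an assumed weight.
# The paper additionally trains several such models with adaptive boosting
# and combines them; that ensemble step is omitted from this sketch.
```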
Cite
Text
Yang et al. "Knowledge-Enhanced Hierarchical Attention for Community Question Answering with Multi-Task and Adaptive Learning." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/743
Markdown
[Yang et al. "Knowledge-Enhanced Hierarchical Attention for Community Question Answering with Multi-Task and Adaptive Learning." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/yang2019ijcai-knowledge/) doi:10.24963/IJCAI.2019/743
BibTeX
@inproceedings{yang2019ijcai-knowledge,
title = {{Knowledge-Enhanced Hierarchical Attention for Community Question Answering with Multi-Task and Adaptive Learning}},
author = {Yang, Min and Chen, Lei and Chen, Xiaojun and Wu, Qingyao and Zhou, Wei and Shen, Ying},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {5349--5355},
doi = {10.24963/IJCAI.2019/743},
url = {https://mlanthology.org/ijcai/2019/yang2019ijcai-knowledge/}
}