A Compare-Aggregate Model for Matching Text Sequences
Abstract
Many NLP tasks, including machine comprehension, answer selection, and textual entailment, require comparing sequences. Matching the important units between sequences is key to solving these problems. In this paper, we present a general "compare-aggregate" framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions that can be used to match two vectors. We evaluate the model on four different datasets and find that some simple comparison functions based on element-wise operations can work better than a standard neural network or a neural tensor network.
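Below is a minimal sketch, not the authors' code, of what element-wise comparison functions of the kind the abstract refers to could look like: two vectors (for example, an answer-word vector and its attention-weighted question summary) are compared by element-wise subtraction and multiplication, optionally followed by a small feed-forward layer. The function and class names (`compare_sub`, `compare_mult`, `SubMultNN`) and the 150-dimensional vectors are illustrative assumptions.

```python
# Illustrative sketch of element-wise comparison functions; not the paper's implementation.
import torch
import torch.nn as nn


def compare_sub(a, h):
    # Squared element-wise difference: (a - h) ⊙ (a - h)
    d = a - h
    return d * d


def compare_mult(a, h):
    # Element-wise product: a ⊙ h
    return a * h


class SubMultNN(nn.Module):
    """Combine both element-wise signals and pass them through a one-layer net."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, a, h):
        feats = torch.cat([compare_sub(a, h), compare_mult(a, h)], dim=-1)
        return torch.relu(self.proj(feats))


# Hypothetical usage on a batch of 8 token pairs with 150-dimensional vectors.
a = torch.randn(8, 150)
h = torch.randn(8, 150)
t = SubMultNN(150)(a, h)  # per-token comparison output, shape (8, 150)
# In the compare-aggregate framework, the per-token outputs t would then be
# aggregated over the sequence, e.g. with a CNN, to produce the final score.
```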
Cite
Text
Wang and Jiang. "A Compare-Aggregate Model for Matching Text Sequences." International Conference on Learning Representations, 2017.
Markdown
[Wang and Jiang. "A Compare-Aggregate Model for Matching Text Sequences." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/wang2017iclr-compare/)
BibTeX
@inproceedings{wang2017iclr-compare,
  title     = {{A Compare-Aggregate Model for Matching Text Sequences}},
  author    = {Wang, Shuohang and Jiang, Jing},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/wang2017iclr-compare/}
}