Dual-View Variational Autoencoders for Semi-Supervised Text Matching

Abstract

Semantically matching two text sequences (usually two sentences) is a fundamental problem in NLP. Most previous methods either encode each of the two sentences into a vector representation (sentence-level embedding) or leverage word-level interaction features between the two sentences. In this study, we propose to treat the sentence-level embedding features and the word-level interaction features as two distinct views of a sentence pair, and to unify them within a Variational Autoencoder framework so that the sentence pair can be matched in a semi-supervised manner. The proposed model is referred to as the Dual-View Variational AutoEncoder (DV-VAE), in which optimizing the variational lower bound can be interpreted as an implicit Co-Training mechanism for two matching models over the distinct views. Experiments on SNLI, Quora and a Community Question Answering dataset demonstrate the superiority of our DV-VAE over several strong semi-supervised and supervised text matching models.
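
To make the training objective sketched in the abstract concrete, below is a minimal illustrative sketch (in PyTorch-style Python) of a semi-supervised loss that combines a VAE lower bound on a sentence pair with supervised matching losses from two views. The function name, the argument layout, and the equal weighting of the terms are assumptions for illustration only; they are not the paper's actual DV-VAE objective, which should be taken from the paper itself.

```python
import torch
import torch.nn.functional as F


def dual_view_semi_supervised_loss(
    sent_view_logits,   # match logits from the sentence-level embedding view
    word_view_logits,   # match logits from the word-level interaction view
    recon_logits,       # decoder logits over the vocabulary for reconstruction
    targets,            # token ids of the sentence pair to be reconstructed
    mu, logvar,         # Gaussian posterior parameters of the latent code
    labels=None,        # match labels; None for unlabeled pairs
):
    # Reconstruction term of the variational lower bound
    # (negative log-likelihood of the observed tokens).
    recon = F.cross_entropy(
        recon_logits.view(-1, recon_logits.size(-1)), targets.view(-1)
    )
    # KL divergence between the approximate posterior N(mu, exp(logvar))
    # and a standard normal prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    elbo_loss = recon + kl

    if labels is None:
        # Unlabeled pairs contribute only the unsupervised lower-bound term
        # in this simplified sketch.
        return elbo_loss

    # Labeled pairs additionally train both view-specific matching models,
    # which is where the co-training flavor of the objective comes from.
    supervised = F.cross_entropy(sent_view_logits, labels) + F.cross_entropy(
        word_view_logits, labels
    )
    return elbo_loss + supervised
```

Note that this sketch simply drops the supervised terms for unlabeled pairs; a full semi-supervised VAE treatment would handle the missing label more carefully, as described in the paper.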

Cite

Text

Xie and Ma. "Dual-View Variational Autoencoders for Semi-Supervised Text Matching." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/737

Markdown

[Xie and Ma. "Dual-View Variational Autoencoders for Semi-Supervised Text Matching." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/xie2019ijcai-dual/) doi:10.24963/IJCAI.2019/737

BibTeX

@inproceedings{xie2019ijcai-dual,
  title     = {{Dual-View Variational Autoencoders for Semi-Supervised Text Matching}},
  author    = {Xie, Zhongbin and Ma, Shuai},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5306--5312},
  doi       = {10.24963/IJCAI.2019/737},
  url       = {https://mlanthology.org/ijcai/2019/xie2019ijcai-dual/}
}