Contrastive Triple Extraction with Generative Transformer

Abstract

Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end triple extraction as a sequence generation problem. Since generative triple extraction may struggle to capture long-term dependencies and can generate unfaithful triples, we introduce a novel model: contrastive triple extraction with a generative transformer. Specifically, we use a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance: batch-wise dynamic attention-masking and triple-wise calibration. Experimental results on three datasets (NYT, WebNLG, and MIE) show that our approach outperforms baseline methods.
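
The abstract does not spell out the triplet contrastive objective, so the following is only a minimal sketch of one plausible reading: candidate triples corrupted from the gold triple (e.g., with a swapped entity or relation) are scored against the source sentence, and the model learns to rate gold triples as faithful and corrupted ones as not. All names here (TripleScorer, contrastive_loss) are hypothetical, and the pooled representations are assumed to come from the shared transformer encoder.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleScorer(nn.Module):
    """Scores how faithful a candidate triple is to its source sentence."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size * 2, 1)

    def forward(self, sent_repr, triple_repr):
        # sent_repr, triple_repr: (batch, hidden) pooled encodings,
        # assumed to come from the single shared transformer module.
        return self.proj(torch.cat([sent_repr, triple_repr], dim=-1)).squeeze(-1)

def contrastive_loss(scorer, sent_repr, gold_repr, neg_repr):
    # Gold triples should score as faithful (label 1); corrupted triples,
    # e.g. with a swapped entity or relation, as unfaithful (label 0).
    pos_logits = scorer(sent_repr, gold_repr)
    neg_logits = scorer(sent_repr, neg_repr)
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits), torch.zeros_like(neg_logits)])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Toy usage with random features standing in for transformer encodings.
hidden = 768
scorer = TripleScorer(hidden)
sent = torch.randn(4, hidden)   # pooled sentence encodings
gold = torch.randn(4, hidden)   # pooled gold-triple encodings
neg = torch.randn(4, hidden)    # pooled corrupted-triple encodings
loss = contrastive_loss(scorer, sent, gold, neg)
loss.backward()

In practice this auxiliary loss would be added to the generation loss, pushing the decoder away from unfaithful triples; the binary cross-entropy scorer above is just one simple choice of contrastive formulation.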

Cite

Text

Ye et al. "Contrastive Triple Extraction with Generative Transformer." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I16.17677

Markdown

[Ye et al. "Contrastive Triple Extraction with Generative Transformer." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/ye2021aaai-contrastive/) doi:10.1609/AAAI.V35I16.17677

BibTeX

@inproceedings{ye2021aaai-contrastive,
  title     = {{Contrastive Triple Extraction with Generative Transformer}},
  author    = {Ye, Hongbin and Zhang, Ningyu and Deng, Shumin and Chen, Mosha and Tan, Chuanqi and Huang, Fei and Chen, Huajun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {14257--14265},
  doi       = {10.1609/AAAI.V35I16.17677},
  url       = {https://mlanthology.org/aaai/2021/ye2021aaai-contrastive/}
}