Enhancing Low-Resource Relation Representations Through Multi-View Decoupling

Abstract

Recently, prompt-tuning with pre-trained language models (PLMs) has shown significant promise for enhancing relation extraction (RE) tasks. However, in low-resource scenarios, where training data is scarce, previous prompt-based methods may still perform poorly at prompt-based representation learning because they capture only a superficial understanding of the relation. To this end, we highlight the importance of learning high-quality relation representations for RE in low-resource scenarios, and propose a novel prompt-based relation representation method, named MVRE (Multi-View Relation Extraction), to better leverage the capacity of PLMs and improve RE performance within the low-resource prompt-tuning paradigm. Specifically, MVRE decouples each relation into different perspectives, forming multi-view relation representations whose joint likelihood is maximized during relation inference. Furthermore, we design a Global-Local loss and a Dynamic-Initialization method to better align the multi-view relation-representing virtual words with the semantics of the relation labels, both during optimization and at initialization. Extensive experiments on three benchmark datasets show that our method achieves state-of-the-art performance in low-resource settings.
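
The core idea can be illustrated with a minimal sketch (not the authors' released code): assume each relation is decoupled into k view-specific virtual words, the prompt contains k [MASK] slots, and a relation is scored by the joint log-likelihood of its virtual words across those slots. All names, shapes, and the toy virtual vocabulary below are hypothetical.

```python
# Hedged sketch of multi-view relation scoring under the assumptions above.
import torch

def score_relations(mask_logits: torch.Tensor,
                    relation_views: dict[str, list[int]]) -> dict[str, float]:
    """mask_logits: (k, vocab) logits, one row per [MASK]/view position.
    relation_views: relation name -> k virtual-word ids (one per view)."""
    log_probs = torch.log_softmax(mask_logits, dim=-1)  # (k, vocab)
    scores = {}
    for rel, view_ids in relation_views.items():
        ids = torch.tensor(view_ids)
        # Sum log p(view_i virtual word | i-th [MASK]) over the k views,
        # i.e. the joint log-likelihood of the multi-view representation.
        scores[rel] = log_probs[torch.arange(len(ids)), ids].sum().item()
    return scores

# Toy usage: k = 3 views, a 10-word virtual vocabulary, random logits.
logits = torch.randn(3, 10)
views = {"org:founded_by": [0, 3, 7], "per:employee_of": [1, 4, 8]}
scores = score_relations(logits, views)
print(max(scores, key=scores.get))  # predicted relation
```

In this reading, the prediction is the relation whose decoupled views are jointly most probable, which is the likelihood-maximization step the abstract refers to.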

Cite

Text

Fan et al. "Enhancing Low-Resource Relation Representations Through Multi-View Decoupling." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I16.29752

Markdown

[Fan et al. "Enhancing Low-Resource Relation Representations Through Multi-View Decoupling." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/fan2024aaai-enhancing/) doi:10.1609/AAAI.V38I16.29752

BibTeX

@inproceedings{fan2024aaai-enhancing,
  title     = {{Enhancing Low-Resource Relation Representations Through Multi-View Decoupling}},
  author    = {Fan, Chenghao and Wei, Wei and Qu, Xiaoye and Lu, Zhenyi and Xie, Wenfeng and Cheng, Yu and Chen, Dangyang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {17968--17976},
  doi       = {10.1609/AAAI.V38I16.29752},
  url       = {https://mlanthology.org/aaai/2024/fan2024aaai-enhancing/}
}