Unsupervised and Few-Shot Parsing from Pretrained Language Models (Extended Abstract)
Abstract
This paper proposes two unsupervised constituent parsing models (UPOA and UPIO) that compute inside and outside association scores solely from the self-attention weight matrices of a pretrained language model. The unsupervised parsing models are further extended to few-shot parsing models (FPOA and FPIO) that use a few annotated trees to fine-tune the linear projection matrices in self-attention. Experiments on PTB and SPMRL show that both the unsupervised and few-shot parsing methods outperform or are comparable to previous methods.
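The abstract does not give the exact scoring functions, which are defined in the full paper. As a rough, hypothetical sketch of the idea, assume the inside association of a span is the mean attention weight among token pairs within the span, and the outside association is the mean attention weight between span tokens and tokens outside it; a constituent should then score high inside and low outside:

```python
def inside_score(attn, i, j):
    """Mean attention weight among token pairs inside span [i, j] (inclusive).

    attn is an n x n self-attention matrix (nested lists), where attn[p][q]
    is the attention from token p to token q.
    """
    vals = [attn[p][q] for p in range(i, j + 1) for q in range(i, j + 1)]
    return sum(vals) / len(vals)


def outside_score(attn, i, j):
    """Mean attention weight between span tokens and tokens outside the span."""
    n = len(attn)
    outside = [k for k in range(n) if k < i or k > j]
    if not outside:  # the span covers the whole sentence
        return 0.0
    # Count attention in both directions across the span boundary.
    vals = [attn[p][q] for p in range(i, j + 1) for q in outside]
    vals += [attn[q][p] for p in range(i, j + 1) for q in outside]
    return sum(vals) / len(vals)


# A toy attention matrix where tokens 0-1 and tokens 2-3 attend
# mostly to each other, suggesting spans [0, 1] and [2, 3].
attn = [
    [0.45, 0.45, 0.05, 0.05],
    [0.45, 0.45, 0.05, 0.05],
    [0.05, 0.05, 0.45, 0.45],
    [0.05, 0.05, 0.45, 0.45],
]
print(inside_score(attn, 0, 1))   # high within-span association
print(outside_score(attn, 0, 1))  # low cross-boundary association
```

In the actual models, such scores would be computed from a pretrained language model's attention heads and combined by a span-splitting parser; this sketch only illustrates the inside/outside intuition.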
Cite
Text
Zeng and Xiong. "Unsupervised and Few-Shot Parsing from Pretrained Language Models (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/797
Markdown
[Zeng and Xiong. "Unsupervised and Few-Shot Parsing from Pretrained Language Models (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/zeng2023ijcai-unsupervised/) doi:10.24963/IJCAI.2023/797
BibTeX
@inproceedings{zeng2023ijcai-unsupervised,
title = {{Unsupervised and Few-Shot Parsing from Pretrained Language Models (Extended Abstract)}},
author = {Zeng, Zhiyuan and Xiong, Deyi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {6995--7000},
doi = {10.24963/IJCAI.2023/797},
url = {https://mlanthology.org/ijcai/2023/zeng2023ijcai-unsupervised/}
}