PEAK: Pyramid Evaluation via Automated Knowledge Extraction
Abstract
Evaluating the selection of content in a summary matters both for human-written summaries, which can serve as a pedagogical tool for reading and writing skills, and for machine-generated summaries, which are increasingly deployed in information management. The pyramid method assesses a summary by aggregating content units from the summaries of a wise crowd (a form of crowdsourcing). It has proven highly reliable but has largely depended on manual annotation. We propose PEAK, the first automated pyramid-based method that both generates the pyramid content models and uses them to assess summary content. PEAK relies on open information extraction and graph algorithms. The resulting scores correlate well with manually derived pyramid scores on both human and machine summaries, opening up the possibility of widespread use in numerous applications.
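To make the scoring idea concrete, here is a minimal, hypothetical sketch of pyramid scoring, not the PEAK implementation itself: each content unit (SCU) is weighted by how many wise-crowd summaries express it, and a candidate summary's score is the total weight of the SCUs it matches, normalized by the weight of an ideal summary expressing the same number of units. The triples and weights below are invented for illustration; in PEAK, content units would instead be derived via open information extraction over the model summaries.

```python
def pyramid_score(scu_weights, matched_scus):
    """Pyramid score: weight of matched SCUs divided by the weight of an
    ideal summary expressing the same number of top-weighted SCUs."""
    achieved = sum(scu_weights[s] for s in matched_scus)
    ideal = sum(sorted(scu_weights.values(), reverse=True)[:len(matched_scus)])
    return achieved / ideal if ideal else 0.0

# Toy pyramid: each SCU's weight is the number of wise-crowd (model)
# summaries that express it. Triples here are made up for illustration.
weights = {
    ("hurricane", "hit", "coast"): 4,
    ("residents", "evacuated", "city"): 3,
    ("power", "was lost", "region"): 1,
}

# SCUs matched in a candidate summary (in PEAK, matching would compare
# open-IE triples from the candidate against the pyramid's units).
matched = [("hurricane", "hit", "coast"), ("residents", "evacuated", "city")]

print(f"pyramid score: {pyramid_score(weights, matched):.2f}")  # 1.00
```

Normalizing by the ideal score for the same SCU count keeps scores in [0, 1] and rewards summaries that prioritize heavily weighted units over marginal ones.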
Cite
Text
Yang et al. "PEAK: Pyramid Evaluation via Automated Knowledge Extraction." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10336

Markdown

[Yang et al. "PEAK: Pyramid Evaluation via Automated Knowledge Extraction." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/yang2016aaai-peak/) doi:10.1609/AAAI.V30I1.10336

BibTeX
@inproceedings{yang2016aaai-peak,
title = {{PEAK: Pyramid Evaluation via Automated Knowledge Extraction}},
author = {Yang, Qian and Passonneau, Rebecca J. and de Melo, Gerard},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2016},
pages = {2673--2680},
doi = {10.1609/AAAI.V30I1.10336},
url = {https://mlanthology.org/aaai/2016/yang2016aaai-peak/}
}