Improving Faithfulness in Abstractive Text Summarization with EDUs Using BART (Student Abstract)
Abstract
Abstractive text summarization uses the summarizer’s own words to capture the main information of a source document in a summary. While it is more challenging to automate than extractive text summarization, recent advancements in deep learning approaches and pre-trained language models have improved its performance. However, abstractive text summarization still suffers from issues such as unfaithfulness. To address this problem, we propose a new approach that utilizes important Elementary Discourse Units (EDUs) to guide BART-based text summarization. Our approach showed improvements in truthfulness and source document coverage in comparison to previous studies.
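As a minimal, illustrative sketch only (not the paper's exact pipeline): one common way to guide BART generation with salient EDUs is to prepend them to the source document before encoding. The `summarize_with_edus` helper, the separator choice, and the assumption that important EDUs are already extracted are all hypothetical here; in practice, EDU segmentation would come from a discourse parser.

```python
# Illustrative sketch: EDU-guided summarization with Hugging Face BART.
# Assumes a precomputed list of "important" EDU strings; discourse parsing
# (e.g., RST-based EDU segmentation) is omitted for brevity.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

def summarize_with_edus(document: str, important_edus: list[str]) -> str:
    # Prepend guidance EDUs to the source text; the </s> separator keeps
    # the guidance span distinguishable from the document itself.
    guidance = " ".join(important_edus)
    inputs = tokenizer(
        guidance + " </s> " + document,
        return_tensors="pt",
        truncation=True,
        max_length=1024,
    )
    summary_ids = model.generate(
        inputs["input_ids"],
        num_beams=4,
        max_length=142,
        min_length=56,
        early_stopping=True,
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

Prepending guidance text is one of several possible conditioning mechanisms; alternatives include modifying the encoder attention or training with an auxiliary guidance encoder.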
Cite
Text
Delpisheh and Chali. "Improving Faithfulness in Abstractive Text Summarization with EDUs Using BART (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30433
Markdown
[Delpisheh and Chali. "Improving Faithfulness in Abstractive Text Summarization with EDUs Using BART (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/delpisheh2024aaai-improving/) doi:10.1609/AAAI.V38I21.30433
BibTeX
@inproceedings{delpisheh2024aaai-improving,
title = {{Improving Faithfulness in Abstractive Text Summarization with EDUs Using BART (Student Abstract)}},
author = {Delpisheh, Narjes and Chali, Yllias},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
  pages = {23471--23472},
doi = {10.1609/AAAI.V38I21.30433},
url = {https://mlanthology.org/aaai/2024/delpisheh2024aaai-improving/}
}