Zero-Shot Conversational Summarization Evaluations with Small Large Language Models
Abstract
Large Language Models (LLMs) exhibit powerful summarization abilities. However, their capabilities on conversational summarization remain underexplored. In this work, we evaluate LLMs (approximately 10 billion parameters) on conversational summarization and showcase their performance under various prompts. We show that the summaries generated by the models depend on the instructions, and that LLM performance varies across instructions, sometimes resulting in steep drops in ROUGE scores if prompts are not selected carefully. We also evaluate the models with human judgments and discuss their limitations on conversational summarization.
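As a concrete illustration of the prompt-sensitivity evaluation described above, the sketch below scores zero-shot summaries produced under several candidate prompt templates with ROUGE. This is a minimal sketch, not the paper's pipeline: the prompts, the example dialogue, the reference summary, and the generate_summary stub are hypothetical placeholders (the stub would be replaced with an inference call to a ~10B-parameter LLM). It assumes the rouge-score package (pip install rouge-score).

# Minimal sketch: compare ROUGE scores of zero-shot conversation
# summaries produced under different prompt templates.
# Hypothetical example; not the paper's actual prompts or models.
from rouge_score import rouge_scorer

def generate_summary(prompt: str) -> str:
    # Placeholder: replace with an inference call to a ~10B-parameter LLM.
    return "A and B plan to meet tomorrow."

dialogue = (
    "A: Are we still on for lunch tomorrow?\n"
    "B: Yes, noon at the usual place?\n"
    "A: Works for me, see you then."
)
reference = "A and B agree to meet for lunch at noon tomorrow."

# Candidate prompt templates (hypothetical).
prompts = [
    "Summarize the following conversation:\n{dialogue}",
    "{dialogue}\nTL;DR:",
    "Write a one-sentence summary of this dialogue:\n{dialogue}",
]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

for template in prompts:
    candidate = generate_summary(template.format(dialogue=dialogue))
    scores = scorer.score(reference, candidate)
    # Report F1 for each ROUGE variant; prompt choice can shift these sharply.
    print(template.splitlines()[0],
          {name: round(s.fmeasure, 3) for name, s in scores.items()})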
Cite
Text
Manuvinakurike et al. "Zero-Shot Conversational Summarization Evaluations with Small Large Language Models." NeurIPS 2023 Workshops: R0-FoMo, 2023.
Markdown
[Manuvinakurike et al. "Zero-Shot Conversational Summarization Evaluations with Small Large Language Models." NeurIPS 2023 Workshops: R0-FoMo, 2023.](https://mlanthology.org/neuripsw/2023/manuvinakurike2023neuripsw-zeroshot/)
BibTeX
@inproceedings{manuvinakurike2023neuripsw-zeroshot,
title = {{Zero-Shot Conversational Summarization Evaluations with Small Large Language Models}},
author = {Manuvinakurike, Ramesh and Sahay, Saurav and Manepalli, Sangeeta and Nachman, Lama},
booktitle = {NeurIPS 2023 Workshops: R0-FoMo},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/manuvinakurike2023neuripsw-zeroshot/}
}