Language Models of Code Are Few-Shot Planners and Reasoners for Multi-Document Summarization with Attribution
Abstract
Document summarization has greatly benefited from advances in large language models (LLMs). In real-world situations, summaries often need to be generated from multiple documents with diverse sources and authors, lacking a clear information flow. Naively concatenating these documents and generating a summary can lead to poorly structured narratives and redundancy. Additionally, attributing each part of the generated summary to a specific source is crucial for reliability. In this study, we address multi-document summarization with attribution using our proposed solution ***MiDAS-PRo***, consisting of three stages: (i) Planning the hierarchical organization of source documents, (ii) Reasoning by generating relevant entities/topics, and (iii) Summary Generation. We treat the first two sub-problems as a code completion task for LLMs. By incorporating well-selected in-context learning examples through a graph attention network, LLMs effectively generate plans and reason topics for a document collection. Experiments on summarizing scientific articles from public datasets show that our approach outperforms state-of-the-art baselines in both automated and human evaluations.
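The abstract describes casting the planning and reasoning stages as code completion for an LLM. The paper's exact prompt format is not shown here, so the sketch below is a hypothetical illustration: it renders a document collection and a few in-context examples (which MiDAS-PRo selects with a graph attention network) as a partial Python snippet whose completion would be the hierarchical plan and topic list. All function and field names are assumptions for illustration.

```python
# Hypothetical sketch of the code-completion framing (not the paper's exact format).
def render_docs(docs):
    """Render a document collection as Python assignments."""
    return "\n".join(f"doc_{i} = {d!r}" for i, d in enumerate(docs))

def build_planning_prompt(documents, examples):
    """Build a code-completion prompt for stages (i) planning and (ii) reasoning.

    `examples` are in-context demonstrations, each with its source documents,
    a hierarchical plan, and a topic list; the LLM is asked to complete the
    same pattern for the new `documents`.
    """
    lines = []
    for ex in examples:
        lines.append(render_docs(ex["docs"]))
        lines.append(f"plan = {ex['plan']!r}")
        lines.append(f"topics = {ex['topics']!r}")
        lines.append("")  # blank line between examples
    lines.append(render_docs(documents))
    lines.append("plan =")  # the LLM completes the plan, then the topics
    return "\n".join(lines)
```

The completed `plan` and `topics` would then condition stage (iii), summary generation, so that each summary segment can be attributed back to the source documents named in the plan.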
Cite
Text
Nandy and Bandyopadhyay. "Language Models of Code Are Few-Shot Planners and Reasoners for Multi-Document Summarization with Attribution." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34676
Markdown
[Nandy and Bandyopadhyay. "Language Models of Code Are Few-Shot Planners and Reasoners for Multi-Document Summarization with Attribution." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/nandy2025aaai-language/) doi:10.1609/AAAI.V39I23.34676
BibTeX
@inproceedings{nandy2025aaai-language,
title = {{Language Models of Code Are Few-Shot Planners and Reasoners for Multi-Document Summarization with Attribution}},
author = {Nandy, Abhilash and Bandyopadhyay, Sambaran},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {24930--24938},
doi = {10.1609/AAAI.V39I23.34676},
url = {https://mlanthology.org/aaai/2025/nandy2025aaai-language/}
}