Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning

Abstract

Large language models have shown astonishing performance on a wide range of reasoning tasks. In this paper, we investigate whether they can reason about real-world events and help improve the prediction performance of event sequence models. We design LAMP, a framework that integrates a large language model into event prediction. In particular, the language model performs abductive reasoning to assist an event sequence model: the event model proposes predictions of future events given the past; instructed by a few expert-annotated demonstrations, the language model learns to suggest possible causes for each proposal; a search module finds the previous events that match the causes; and a scoring function learns to examine whether the retrieved events could actually cause the proposal. Through extensive experiments on several challenging real-world datasets, we demonstrate that our framework---thanks to the reasoning capabilities of large language models---significantly outperforms state-of-the-art event sequence models.
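The four-stage pipeline described in the abstract (propose, suggest causes, retrieve, score) can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: in LAMP, `suggest_causes` would be a few-shot-prompted large language model and `score` a learned compatibility model, whereas here both are toy stand-ins, and the event types (`"protest"`, `"riot"`, etc.) are invented for the example.

```python
from collections import Counter

def propose(history, k=3):
    """Stand-in event model: propose the k most frequent past event types.
    (The real event model would be a learned sequence model.)"""
    return [e for e, _ in Counter(history).most_common(k)]

def suggest_causes(proposal):
    """Stand-in for few-shot abductive reasoning with an LLM:
    return plausible cause event types for a proposed event."""
    cause_table = {"riot": ["protest", "arrest"], "protest": ["strike"]}
    return cause_table.get(proposal, [])

def retrieve(history, causes):
    """Search module: find past events matching any suggested cause."""
    return [e for e in history if e in causes]

def score(proposal, evidence):
    """Stand-in scoring function: more matched causal evidence
    means a more plausible proposal. (LAMP learns this function.)"""
    return len(evidence)

def rerank(history, k=3):
    """Run the full propose -> abduce -> retrieve -> score loop and
    return proposals sorted by how well the past supports them."""
    scored = []
    for p in propose(history, k):
        evidence = retrieve(history, suggest_causes(p))
        scored.append((p, score(p, evidence)))
    return sorted(scored, key=lambda t: -t[1])

history = ["protest", "arrest", "protest", "riot"]
print(rerank(history))  # "riot" ranks first: two of its causes appear in the past
```

Under these toy stand-ins, the reranker promotes the proposal whose abduced causes are best attested in the history, which is the intuition the abstract describes.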

Cite

Text

Shi et al. "Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning." Neural Information Processing Systems, 2023.

Markdown

[Shi et al. "Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/shi2023neurips-language/)

BibTeX

@inproceedings{shi2023neurips-language,
  title     = {{Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning}},
  author    = {Shi, Xiaoming and Xue, Siqiao and Wang, Kangrui and Zhou, Fan and Zhang, James and Zhou, Jun and Tan, Chenhao and Mei, Hongyuan},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/shi2023neurips-language/}
}