The Impact of Symbolic Representations on In-Context Learning for Few-Shot Reasoning

Abstract

Pre-trained language models (LMs) have shown remarkable reasoning performance when prompted with explanations, or "chain-of-thought" (CoT), for in-context learning. At the same time, these reasoning tasks are usually presumed to be more amenable to symbolic programming. To make progress towards understanding in-context learning, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where the symbolic examples contain first-order logic rules and predicates from knowledge bases (KBs). We then revisit neuro-symbolic approaches and design a model, LMLP, that learns from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog's backward chaining algorithm. Comprehensive experiments systematically compare LMLP with CoT in deductive and inductive reasoning settings, showing that LMLP enjoys much better length generalization even with substantially fewer parameters.
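For context on the backward chaining that the abstract says LMLP recovers, the following is a minimal, self-contained Python sketch of that classical Prolog-style procedure over a toy KB. It is an illustration of the textbook algorithm only, not the paper's implementation; the predicates (parent, grandparent), the toy facts, and all function names are assumptions introduced here.

import itertools

# Toy KB: ground facts and Horn rules (head, [body goals]); "?x"-style strings are variables.
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "carl")}
RULES = [
    (("grandparent", "?x", "?z"),
     [("parent", "?x", "?y"), ("parent", "?y", "?z")]),
]

counter = itertools.count()  # for standardizing apart rule variables

def is_var(t):
    return t.startswith("?")

def walk(t, binding):
    # Follow a chain of variable bindings to its current value.
    while is_var(t) and t in binding:
        t = binding[t]
    return t

def unify(a, b, binding):
    # Unify two predicate tuples; return an extended binding, or None on failure.
    if len(a) != len(b):
        return None
    binding = dict(binding)
    for x, y in zip(a, b):
        x, y = walk(x, binding), walk(y, binding)
        if x == y:
            continue
        if is_var(x):
            binding[x] = y
        elif is_var(y):
            binding[y] = x
        else:
            return None
    return binding

def rename(term, suffix):
    return tuple(t + suffix if is_var(t) else t for t in term)

def prove(goals, binding):
    # Backward chaining: reduce the first goal against facts and rule heads.
    if not goals:
        yield binding
        return
    goal, rest = goals[0], goals[1:]
    for fact in FACTS:
        b = unify(goal, fact, binding)
        if b is not None:
            yield from prove(rest, b)
    for head, body in RULES:
        suffix = f"_{next(counter)}"
        b = unify(goal, rename(head, suffix), binding)
        if b is not None:
            yield from prove([rename(g, suffix) for g in body] + list(rest), b)

# Query: who are Ann's grandchildren?
for b in prove([("grandparent", "ann", "?who")], {}):
    print(walk("?who", b))  # -> carl

LMLP, as described in the abstract, replaces the symbolic matching step with an LM prompted by demonstrations of such rules and examples, iterating goal reduction in the same backward-chaining fashion.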

Cite

Text

Zhang et al. "The Impact of Symbolic Representations on In-Context Learning for Few-Shot Reasoning." NeurIPS 2022 Workshops: nCSI, 2022.

Markdown

[Zhang et al. "The Impact of Symbolic Representations on In-Context Learning for Few-Shot Reasoning." NeurIPS 2022 Workshops: nCSI, 2022.](https://mlanthology.org/neuripsw/2022/zhang2022neuripsw-impact/)

BibTeX

@inproceedings{zhang2022neuripsw-impact,
  title     = {{The Impact of Symbolic Representations on In-Context Learning for Few-Shot Reasoning}},
  author    = {Zhang, Hanlin and Zhang, YiFan and Li, Li Erran and Xing, Eric},
  booktitle = {NeurIPS 2022 Workshops: nCSI},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/zhang2022neuripsw-impact/}
}