Towards Bridging Classical and Neural Computation Through a Read-Eval-Print Loop

Abstract

Humans rely on step-by-step reasoning to solve new problems, with each step guided by feedback about its effect on a potential solution. For complicated problems, this sequence of step-by-step interactions might take place between the human and a software system, such as a Python interpreter, and the resulting sequence of operations would then constitute an algorithm for solving a particular class of problems. Based on these ideas, this work proposes a general and scalable method for generating synthetic training data, which we in turn use to teach a Large Language Model to carry out new and previously unseen tasks. By tracing the execution of an algorithm through careful transformations of its control flow elements, we can produce "code traces" containing step-by-step solutions for a range of problems. We empirically verify the usefulness of training on such data, and its superiority over tracing the state changes directly.
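As a concrete illustration, below is a minimal, hypothetical Python sketch of what such a code trace might look like: a simple algorithm (finding the maximum of a list) is unrolled into a linear, REPL-style sequence of read-eval-print steps. The function name and the exact trace format are illustrative assumptions, not taken from the paper.

# Hypothetical sketch of the "code trace" idea from the abstract: an
# algorithm's control flow is flattened into explicit REPL-style steps.
# Names and format are illustrative, not from the paper.

def trace_max(xs):
    """Emit a step-by-step trace of finding the maximum of a list."""
    lines = [f">>> xs = {xs}", ">>> best = xs[0]"]
    best = xs[0]
    for i, x in enumerate(xs[1:], start=1):
        # Each loop iteration becomes an observable read-eval-print step,
        # so the branch condition and its outcome appear in the trace.
        lines.append(f">>> xs[{i}] > best  # {x} > {best}")
        lines.append(str(x > best))
        if x > best:
            best = x
            lines.append(f">>> best = xs[{i}]  # best is now {best}")
    lines.append(">>> best")
    lines.append(str(best))
    return "\n".join(lines)

if __name__ == "__main__":
    print(trace_max([3, 7, 2, 9, 4]))

Running the sketch prints a linearized trace in which every comparison and assignment is a separate step; traces of this kind could serve as the step-by-step supervision the abstract describes.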

Cite

Text

Zhang et al. "Towards Bridging Classical and Neural Computation Through a Read-Eval-Print Loop." ICML 2024 Workshops: LLMs_and_Cognition, 2024.

Markdown

[Zhang et al. "Towards Bridging Classical and Neural Computation Through a Read-Eval-Print Loop." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/zhang2024icmlw-bridging/)

BibTeX

@inproceedings{zhang2024icmlw-bridging,
  title     = {{Towards Bridging Classical and Neural Computation Through a Read-Eval-Print Loop}},
  author    = {Zhang, David W. and Defferrard, Michaël and Rainone, Corrado and Memisevic, Roland},
  booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/zhang2024icmlw-bridging/}
}