Improving Branching Language via Self-Reflection
Abstract
While most language is formatted linearly, applications such as planning, trees of thought, and branching narrative are represented as tree structures. Generating branching outputs from a language model (LM) is trivial, but representing trees of text in a one-dimensional input is problematic. This makes popular self-reflection methods of improvement prohibitively difficult to apply to branching language. In this work, we address this limitation by proposing a new method for improving trees of branching language. Our method iterates between reflecting on sampled paths through a tree and resampling problematic subtrees. We evaluate our method on a branching narrative task with the objective of improving every path through the tree. Our method creates narrative that an LM judge prefers 60% more often than unmodified narrative trees. Our method also scales to tree depths that cause naive methods of self-reflection to fail.
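The abstract's loop of reflecting on sampled paths and resampling flagged subtrees might look like the minimal sketch below. It is an illustration only, not the authors' implementation: the `Node` layout and the `reflect_on_path` / `regenerate_subtree` callables (stand-ins for the LM critique and generation prompts) are assumptions.

```python
# Minimal sketch of a reflect-and-resample loop over a narrative tree.
# Node layout and the two LM callables are hypothetical, for illustration only.
from dataclasses import dataclass, field
import random


@dataclass
class Node:
    text: str                                      # narrative passage at this branch point
    children: list["Node"] = field(default_factory=list)


def sample_path(root: Node) -> list[Node]:
    """Sample one root-to-leaf path through the tree uniformly at random."""
    path = [root]
    while path[-1].children:
        path.append(random.choice(path[-1].children))
    return path


def improve_tree(root: Node, reflect_on_path, regenerate_subtree, iterations: int = 10) -> Node:
    """Alternate between reflecting on sampled paths and resampling flagged subtrees.

    reflect_on_path(path)             -> index of a problematic node in the path, or None
    regenerate_subtree(node, context) -> replacement subtree generated from the path prefix
    Both callables are placeholders for LM calls (critique and generation prompts).
    """
    for _ in range(iterations):
        path = sample_path(root)
        flagged = reflect_on_path(path)            # LM critique of the linearized path
        if flagged is None or flagged == 0:
            continue                               # nothing to fix, or cannot replace the root
        parent, child = path[flagged - 1], path[flagged]
        context = [n.text for n in path[:flagged]]  # prefix text used as generation context
        replacement = regenerate_subtree(child, context)
        parent.children[parent.children.index(child)] = replacement
    return root
```

Under this reading, each iteration only rewrites the subtree rooted at the first problematic node on the sampled path, so improvements to shared prefixes propagate to every path passing through them.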
Cite
Text
Nottingham et al. "Improving Branching Language via Self-Reflection." NeurIPS 2024 Workshops: LanGame, 2024.
Markdown
[Nottingham et al. "Improving Branching Language via Self-Reflection." NeurIPS 2024 Workshops: LanGame, 2024.](https://mlanthology.org/neuripsw/2024/nottingham2024neuripsw-improving/)
BibTeX
@inproceedings{nottingham2024neuripsw-improving,
title = {{Improving Branching Language via Self-Reflection}},
author = {Nottingham, Kolby and Dong, Ruo-Ping and Kasper, Ben and Kerr, Wesley N.},
booktitle = {NeurIPS 2024 Workshops: LanGame},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/nottingham2024neuripsw-improving/}
}