SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding
Abstract
Large Language Models (LLMs) demonstrate remarkable emergent abilities across various tasks, yet fall short on complex reasoning and planning tasks. Tree-search-based reasoning methods address this by encouraging exploration of intermediate steps, surpassing the capabilities of chain-of-thought prompting. However, they introduce significant inference latency through the systematic exploration and evaluation of multiple thought paths. This paper introduces SEED, a novel and efficient inference framework that improves runtime speed and GPU memory management concurrently. Built on scheduled speculative execution, SEED efficiently handles the multiple iterations of thought generation and state evaluation, leveraging a rounds-scheduled strategy to manage draft-model dispatching. Extensive experiments on three reasoning datasets demonstrate the superior speedup of SEED.
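The core mechanism the abstract describes is speculative decoding scheduled across the branches of a reasoning tree: a small draft model proposes several tokens, the target model verifies them, and pending branches take turns in rounds. The sketch below is a minimal, self-contained Python illustration of the greedy variant of that idea; draft_next, target_next, the proposal length k=4, and the round-robin loop are hypothetical stand-ins for illustration, not the paper's actual models or implementation.

from collections import deque

# Toy stand-ins for the draft and target models: each maps a token
# sequence to the next token. (Hypothetical; SEED uses real LLMs.)
def draft_next(seq):
    return (seq[-1] + 1) % 50

def target_next(seq):
    # Occasionally disagrees with the draft, forcing a rollback.
    return (seq[-1] + 1) % 50 if seq[-1] % 7 else (seq[-1] + 2) % 50

def speculate(seq, k=4):
    """Draft model proposes k tokens autoregressively."""
    out = list(seq)
    for _ in range(k):
        out.append(draft_next(out))
    return out[len(seq):]

def verify(seq, proposal):
    """Target model checks the proposal left to right: keep the longest
    agreeing prefix, then append the target's own correction. In a real
    system this verification is a single batched forward pass of the
    target model over all proposed positions."""
    accepted = []
    for tok in proposal:
        expected = target_next(seq + accepted)
        if tok != expected:
            accepted.append(expected)  # target's token replaces the draft's
            break
        accepted.append(tok)
    return accepted

# Round-scheduled dispatch over several tree branches: each round, every
# pending branch gets one speculate-then-verify step before any branch
# gets a second one.
branches = deque([[1], [3], [5]])
for _ in range(6):  # six scheduling rounds
    branch = branches.popleft()
    branch.extend(verify(branch, speculate(branch)))
    branches.append(branch)

for b in branches:
    print(b)

One plausible reading of the "rounds-scheduled strategy" is visible here: round-robin dispatch keeps every branch of the tree progressing and lets draft proposals for one branch overlap with verification of another, rather than letting a single deep path monopolize the draft model.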
Cite
Text
Wang et al. "SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding." NeurIPS 2024 Workshops: Compression, 2024.
Markdown
[Wang et al. "SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/wang2024neuripsw-seed/)
BibTeX
@inproceedings{wang2024neuripsw-seed,
  title     = {{SEED: Accelerating Reasoning Tree Construction via Scheduled Speculative Decoding}},
  author    = {Wang, Zhenglin and Wu, Jialong and Lai, Yilong and Zhang, Congzhi and Zhou, Deyu},
  booktitle = {NeurIPS 2024 Workshops: Compression},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/wang2024neuripsw-seed/}
}