Nearest Neighbor Speculative Decoding for LLM Generation and Attribution

Abstract

Large language models (LLMs) often hallucinate and lack the ability to provide attribution for their generations. Semi-parametric LMs, such as kNN-LM, approach these limitations by refining the output of an LM for a given prompt using its nearest neighbor matches in a non-parametric data store. However, these models often exhibit slow inference speeds and produce non-fluent texts. In this paper, we introduce Nearest Neighbor Speculative Decoding (NEST), a novel semi-parametric language modeling approach that is capable of incorporating real-world text spans of arbitrary length into the LM generations and providing attribution to their sources. NEST performs token-level retrieval at each inference step to compute a semi-parametric mixture distribution and identify promising span continuations in a corpus. It then uses an approximate speculative decoding procedure that accepts a prefix of the retrieved span or generates a new token. NEST significantly enhances the generation quality and attribution rate of the base LM across a variety of knowledge-intensive tasks, surpassing the conventional kNN-LM method and performing competitively with in-context retrieval augmentation. In addition, NEST substantially improves the generation speed, achieving a 1.8x speedup in inference time when applied to Llama-2-Chat 70B. Code will be released at https://github.com/facebookresearch/NEST/tree/main.
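
The abstract describes two core mechanisms: interpolating the LM's next-token distribution with a distribution induced from nearest-neighbor retrieval (as in kNN-LM), and accepting a variable-length prefix of a retrieved span via an approximate speculative decoding step. The sketch below is a minimal illustration of these ideas, not the released implementation; the fixed interpolation weight, the probability-threshold acceptance rule, and all function names are assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation) of two ideas from the
# abstract: (1) a kNN-LM-style mixture of the parametric LM distribution with a
# distribution induced by nearest-neighbor tokens from a datastore, and (2)
# accepting a prefix of a retrieved span versus emitting a single new token.
# The fixed weight `lam`, the threshold rule, and all names are assumptions.
import numpy as np

def knn_distribution(distances: np.ndarray, neighbor_token_ids: np.ndarray,
                     vocab_size: int, temperature: float = 1.0) -> np.ndarray:
    """Turn retrieved neighbors into a vocabulary distribution by softmaxing
    negative distances and accumulating mass per neighbor token id."""
    weights = np.exp(-distances / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, neighbor_token_ids, weights)
    return p_knn

def mixture(p_lm: np.ndarray, p_knn: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Semi-parametric mixture: interpolate the LM and kNN distributions."""
    return lam * p_knn + (1.0 - lam) * p_lm

def accept_span_prefix(span_token_ids: list[int],
                       p_mix_per_step: list[np.ndarray],
                       threshold: float = 0.3) -> list[int]:
    """Toy acceptance rule: keep extending the retrieved span while each token
    stays sufficiently probable under the mixture; otherwise stop so the model
    can generate a fresh token instead (speculative-decoding flavor)."""
    accepted = []
    for tok, p_mix in zip(span_token_ids, p_mix_per_step):
        if p_mix[tok] >= threshold:
            accepted.append(tok)
        else:
            break
    return accepted

if __name__ == "__main__":
    vocab = 10
    rng = np.random.default_rng(0)
    p_lm = rng.dirichlet(np.ones(vocab))
    # Pretend retrieval returned 4 neighbors (token ids with distances).
    p_knn = knn_distribution(np.array([0.2, 0.5, 0.9, 1.3]),
                             np.array([3, 3, 7, 1]), vocab)
    p_mix = mixture(p_lm, p_knn, lam=0.6)
    print("mixture distribution:", np.round(p_mix, 3))
    # Evaluate a retrieved 3-token span against (here, identical) per-step mixtures.
    print("accepted prefix:", accept_span_prefix([3, 7, 2], [p_mix] * 3))
```

In NEST itself the interpolation weight and the span acceptance decision are computed adaptively from retrieval confidence rather than fixed as above; the sketch only shows where those quantities enter the decoding loop.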

Cite

Text

Li et al. "Nearest Neighbor Speculative Decoding for LLM Generation and Attribution." Neural Information Processing Systems, 2024. doi:10.52202/079017-2574

Markdown

[Li et al. "Nearest Neighbor Speculative Decoding for LLM Generation and Attribution." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/li2024neurips-nearest/) doi:10.52202/079017-2574

BibTeX

@inproceedings{li2024neurips-nearest,
  title     = {{Nearest Neighbor Speculative Decoding for LLM Generation and Attribution}},
  author    = {Li, Minghan and Chen, Xilun and Holtzman, Ari and Chen, Beidi and Lin, Jimmy and Yih, Wen-tau and Lin, Xi Victoria},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2574},
  url       = {https://mlanthology.org/neurips/2024/li2024neurips-nearest/}
}