Llama-Annotate - Visualizing Token-Level Confidences for LLMs

Abstract

LLaMA-Annotate is a tool for visually inspecting the confidence that a large language model assigns to each individual token, along with the alternative tokens it considered for that position. We provide both a simple, non-interactive command-line interface and a more elaborate web application. Besides generally helping to form an intuition about the “thinking” of the LLM, our tool can be used for context-aware spellchecking, or to see how a different prompt or a differently trained LLM can impact the interpretation of a piece of text. The tool can be tried online at https://huggingface.co/spaces/s-t-j/llama-annotate.
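The core idea of token-level confidence inspection can be sketched in a few lines: for each position, the model's logits over the vocabulary are turned into a probability distribution, from which we read off the probability of the token that actually occurs and the top alternatives the model considered. This is an illustrative sketch only, not the authors' implementation; the `token_confidences` helper and its input layout (one logit vector per position) are assumptions for the example.

```python
import numpy as np

def token_confidences(logits, token_ids, top_k=3):
    """For each position, return (probability of the actual token,
    list of (alternative_token_id, probability) for the top-k candidates).

    `logits` is an array of shape (sequence_length, vocab_size), where
    row i scores candidates for position i (hypothetical layout);
    `token_ids` gives the token that actually appears at each position.
    """
    results = []
    for scores, tok in zip(logits, token_ids):
        # Numerically stable softmax over the vocabulary.
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # Alternatives the model considered, most likely first.
        alts = np.argsort(probs)[::-1][:top_k]
        results.append((float(probs[tok]),
                        [(int(a), float(probs[a])) for a in alts]))
    return results
```

A visualization layer (terminal colors or HTML, as in the CLI and web app described above) can then map each token's probability onto a color scale and show the alternatives on hover or in a side column.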

Cite

Text

Schultheis and John. "Llama-Annotate - Visualizing Token-Level Confidences for LLMs." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2024. doi:10.1007/978-3-031-70371-3_33

Markdown

[Schultheis and John. "Llama-Annotate - Visualizing Token-Level Confidences for LLMs." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2024.](https://mlanthology.org/ecmlpkdd/2024/schultheis2024ecmlpkdd-llamaannotate/) doi:10.1007/978-3-031-70371-3_33

BibTeX

@inproceedings{schultheis2024ecmlpkdd-llamaannotate,
  title     = {{Llama-Annotate - Visualizing Token-Level Confidences for LLMs}},
  author    = {Schultheis, Erik and John, ST},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2024},
  pages     = {424--428},
  doi       = {10.1007/978-3-031-70371-3_33},
  url       = {https://mlanthology.org/ecmlpkdd/2024/schultheis2024ecmlpkdd-llamaannotate/}
}