I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
Abstract
Large Language Models are known to capture real-world knowledge, allowing them to excel in many downstream tasks. Despite recent advances, these models are still prone to what are commonly known as hallucinations, causing them to emit unwanted and factually incorrect text. In this work, we propose a novel calibration method that can be used to combat hallucinations. We add a special [IDK] (“I Don't Know”) token to the model's vocabulary and introduce an objective function that shifts probability mass to the [IDK] token for incorrect predictions. This approach allows the model to express uncertainty in its output explicitly. We evaluate our proposed method across multiple model architectures and factual downstream tasks. We find that models trained with our method are able to express uncertainty in places where they would previously make mistakes, while suffering only a small loss of encoded knowledge. We further perform extensive ablation studies of multiple variations of our approach and provide a detailed analysis of the precision-recall tradeoff of our method.
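The abstract describes an objective that moves probability mass to the [IDK] token when the model's prediction is incorrect. Below is a minimal, hedged sketch of that idea as a soft-label cross-entropy loss; the function name `idk_cross_entropy`, the fixed `shift` fraction, and the argmax-based notion of "incorrect" are illustrative assumptions, not the authors' exact objective.

```python
# Illustrative sketch only: mixes the gold token with [IDK] in the target
# distribution for positions the model currently gets wrong. The exact
# weighting used in the paper may differ.
import torch
import torch.nn.functional as F


def idk_cross_entropy(logits, targets, idk_id, shift=0.5):
    """Cross-entropy against a soft target that mixes the gold token with [IDK].

    logits:  (batch, vocab) next-token logits; the vocabulary already includes [IDK]
    targets: (batch,) gold next-token ids
    idk_id:  index of the [IDK] token
    shift:   fraction of target mass moved to [IDK] on incorrect predictions (assumed constant)
    """
    vocab = logits.size(-1)
    preds = logits.argmax(dim=-1)
    wrong = (preds != targets).float()  # 1.0 where the model errs, else 0.0

    # Start from one-hot targets, then move `shift` of the mass to [IDK]
    # only for positions the model currently predicts incorrectly.
    soft = F.one_hot(targets, vocab).float()
    moved = shift * wrong  # (batch,)
    soft[torch.arange(len(targets)), targets] -= moved
    soft[:, idk_id] += moved

    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft * log_probs).sum(dim=-1).mean()


# Toy usage: a vocabulary of 10 ordinary tokens plus [IDK] appended at index 10.
torch.manual_seed(0)
logits = torch.randn(4, 11)
targets = torch.tensor([3, 7, 1, 4])
print(float(idk_cross_entropy(logits, targets, idk_id=10)))
```

For correct predictions the loss reduces to ordinary cross-entropy; for incorrect ones the target distribution splits its mass between the gold token and [IDK], which is one simple way to realize the mass-shifting behavior described above.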
Cite
Text
Cohen et al. "I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token." Neural Information Processing Systems, 2024. doi:10.52202/079017-0349
Markdown
[Cohen et al. "I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/cohen2024neurips-don/) doi:10.52202/079017-0349
BibTeX
@inproceedings{cohen2024neurips-don,
  title     = {{I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token}},
  author    = {Cohen, Roi and Dobler, Konstantin and Biran, Eden and de Melo, Gerard},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0349},
  url       = {https://mlanthology.org/neurips/2024/cohen2024neurips-don/}
}