The Steganographic Potentials of Language Models

Abstract

The potential for large language models (LLMs) to hide messages within plain text (steganography) poses a challenge to the detection and thwarting of unaligned AI agents, and undermines the faithfulness of LLM reasoning. We explore the steganographic capabilities of LLMs fine-tuned via reinforcement learning (RL) to: (1) develop covert encoding schemes, (2) engage in steganography when prompted, and (3) utilize steganography in realistic scenarios where hidden reasoning is likely, but not prompted. In these scenarios, we detect both the intention of LLMs to hide their reasoning and their steganographic performance. Our findings from the fine-tuning experiments, as well as from behavioral evaluations without fine-tuning, reveal that while current models exhibit rudimentary steganographic abilities in terms of security and capacity, explicit algorithmic guidance markedly enhances their capacity for information concealment.
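For concreteness, the following is a minimal, hypothetical sketch of the kind of covert encoding scheme the abstract refers to; it is not the paper's method. It hides a short secret in the first letters of cover-text words (an acrostic), with the `word_bank`, function names, and example strings invented purely for illustration.

```python
# Toy acrostic steganography (illustrative only, not from the paper):
# hide a short message in the first letters of cover-text words, then
# recover it. An LLM-based scheme would embed such structure in fluent
# generated text rather than in a bag of canned words.

def encode_acrostic(secret: str, word_bank: dict[str, list[str]]) -> str:
    """Build a cover text whose word initials spell out the secret."""
    words = []
    for ch in secret.lower():
        candidates = word_bank.get(ch)
        if not candidates:
            raise ValueError(f"no cover word starting with {ch!r}")
        words.append(candidates[0])
    return " ".join(words)

def decode_acrostic(cover: str) -> str:
    """Recover the secret by reading the first letter of each word."""
    return "".join(w[0] for w in cover.split())

# Hypothetical word bank; any vocabulary keyed by initial letter works.
word_bank = {
    "h": ["heavy"], "i": ["ice"], "d": ["drifts"],
    "e": ["early"], "n": ["near"],
}

cover = encode_acrostic("hidden", word_bank)
print(cover)                   # heavy ice drifts drifts early near
print(decode_acrostic(cover))  # hidden
```

Such a scheme has a capacity of roughly one character per cover word and essentially no security, since the pattern is trivial to spot; the abstract's distinction between capacity and security is about exactly this trade-off.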

Cite

Text

Karpov et al. "The Steganographic Potentials of Language Models." ICLR 2025 Workshops: BuildingTrust, 2025.

Markdown

[Karpov et al. "The Steganographic Potentials of Language Models." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/karpov2025iclrw-steganographic/)

BibTeX

@inproceedings{karpov2025iclrw-steganographic,
  title     = {{The Steganographic Potentials of Language Models}},
  author    = {Karpov, Artem and Adeleke, Tinuade and Cho, Seong Hah and Perez-Campanero, Natalia},
  booktitle = {ICLR 2025 Workshops: BuildingTrust},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/karpov2025iclrw-steganographic/}
}