Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study

Abstract

Large Language Models (LLMs) are highly vulnerable to input perturbations, as even a small prompt change may result in a substantially different output. Existing methods to enhance LLM robustness focus primarily on perturbed data samples, whereas improving resilience to perturbations of task-level instructions has remained relatively underexplored. In this work, we focus on character- and word-level edits of task-specific instructions, which substantially degrade downstream performance. We experiment with a variety of techniques to enhance the robustness of LLMs, including self-denoising and representation alignment, testing different models (Llama 3 and Flan-T5), datasets (CoLA, QNLI, SST-2), and instructions (both task-oriented and role-oriented). We find that, on average, self-denoising, whether performed by a frozen LLM or a fine-tuned model, achieves substantially higher performance gains than alternative strategies, including more complex baselines such as ensembling and supervised methods.
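
To make the two central ideas in the abstract concrete, below is a minimal sketch (not the authors' code) of a character-level instruction perturbation and a self-denoising prompt in which the model is first asked to reconstruct the clean instruction before performing the task. The function names, the random-substitution noise model, the perturbation rate, and the prompt wording are illustrative assumptions rather than details taken from the paper.

```python
import random
import string


def perturb_chars(instruction: str, rate: float = 0.1, seed: int = 0) -> str:
    """Randomly substitute a fraction of alphabetic characters in the instruction.

    This is one simple way to simulate character-level instruction noise;
    the paper also considers word-level edits, which are not shown here.
    """
    rng = random.Random(seed)
    chars = list(instruction)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice(string.ascii_lowercase)
    return "".join(chars)


def self_denoising_prompt(perturbed_instruction: str, sample: str) -> str:
    """Build a prompt that asks the model to restore the instruction first,
    then apply the corrected instruction to the input (a hypothetical template)."""
    return (
        "The following task instruction may contain typos or corrupted words.\n"
        f"Instruction: {perturbed_instruction}\n"
        "First, rewrite the instruction with the noise removed. "
        "Then apply the corrected instruction to the input below.\n"
        f"Input: {sample}"
    )


if __name__ == "__main__":
    clean = "Classify the sentiment of the sentence as positive or negative."
    noisy = perturb_chars(clean, rate=0.15)
    print(noisy)
    print(self_denoising_prompt(noisy, "The movie was a delight from start to finish."))
```

In this sketch the denoising step is carried out by the same (frozen) model at inference time via the prompt; the fine-tuned variant mentioned in the abstract would instead train a model to map noisy instructions back to clean ones.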

Cite

Text

Agrawal et al. "Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study." ICLR 2025 Workshops: BuildingTrust, 2025.

Markdown

[Agrawal et al. "Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/agrawal2025iclrw-enhancing/)

BibTeX

@inproceedings{agrawal2025iclrw-enhancing,
  title     = {{Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study}},
  author    = {Agrawal, Aryan and Alazraki, Lisa and Honarvar, Shahin and Rei, Marek},
  booktitle = {ICLR 2025 Workshops: BuildingTrust},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/agrawal2025iclrw-enhancing/}
}