Robustness in Large Language Models: A Survey of Mitigation Strategies and Evaluation Metrics
Abstract
Large Language Models (LLMs) have emerged as a cornerstone of progress in natural language processing (NLP) and artificial intelligence (AI). However, ensuring the robustness of LLMs remains a critical challenge. To address this challenge and advance the field, this survey provides a comprehensive overview of current studies in the area. First, we systematically examine the nature of robustness in LLMs, including its conceptual foundations, the importance of consistent performance across diverse inputs, and the implications of failure modes in real-world applications. Next, we analyze the sources of non-robustness, categorizing intrinsic model limitations, data-driven vulnerabilities, and external adversarial factors that compromise reliability. Following this, we review state-of-the-art mitigation strategies, and then we discuss widely adopted benchmarks, emerging metrics, and persistent gaps in assessing real-world reliability. Finally, we synthesize findings from existing surveys and interdisciplinary studies to highlight trends, unresolved issues, and pathways for future research.
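To make the notion of "consistent performance across diverse inputs" concrete, the sketch below shows one common style of robustness evaluation: measuring how often a model's prediction stays the same when the input is lightly perturbed. This is a minimal illustration, not the survey's own metric; the perturbation (adjacent-character swap) and the toy_model stand-in are assumptions chosen for brevity, and in practice the model call would be an LLM-backed classifier and the perturbations would include paraphrases, typos, or adversarial edits.

```python
import random


def perturb(text: str, rng: random.Random) -> str:
    """Apply a simple character-level perturbation: swap two adjacent characters."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def consistency_rate(model, inputs, n_perturbations: int = 5, seed: int = 0) -> float:
    """Fraction of perturbed inputs whose prediction matches the clean-input prediction."""
    rng = random.Random(seed)
    matches, total = 0, 0
    for text in inputs:
        clean_pred = model(text)
        for _ in range(n_perturbations):
            matches += int(model(perturb(text, rng)) == clean_pred)
            total += 1
    return matches / total if total else 0.0


if __name__ == "__main__":
    # Hypothetical stand-in for an LLM-backed sentiment classifier; replace with a real model call.
    toy_model = lambda text: "positive" if "good" in text.lower() else "negative"
    print(consistency_rate(toy_model, ["This movie was good", "A bad experience"]))
```

A consistency rate near 1.0 indicates the model's outputs are stable under these perturbations; lower values flag the kind of input sensitivity that the mitigation strategies and benchmarks surveyed here aim to address.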
Cite
Text
Kumar and Mishra. "Robustness in Large Language Models: A Survey of Mitigation Strategies and Evaluation Metrics." Transactions on Machine Learning Research, 2025.
Markdown
[Kumar and Mishra. "Robustness in Large Language Models: A Survey of Mitigation Strategies and Evaluation Metrics." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kumar2025tmlr-robustness/)
BibTeX
@article{kumar2025tmlr-robustness,
title = {{Robustness in Large Language Models: A Survey of Mitigation Strategies and Evaluation Metrics}},
author = {Kumar, Pankaj and Mishra, Subhankar},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/kumar2025tmlr-robustness/}
}