Exploring and Mitigating Implicit Bias in Large Language Models: A Cross-Domain Evaluation Framework

Abstract

This paper investigates implicit biases in large language models (LLMs) triggered by subtle contextual cues. Through controlled experiments, the study examines how these biases shape model outputs in domains such as healthcare and hiring. A framework for mitigating stereotype reinforcement is proposed, along with prompt-refinement strategies that reduce biased responses. The goal is to improve fairness and equity in AI-driven applications by addressing these biases at the prompt and evaluation level.

Cite

Text

Precious Donkor. "Exploring and Mitigating Implicit Bias in Large Language Models: A Cross-Domain Evaluation Framework." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i28.35329

Markdown

[Precious Donkor. "Exploring and Mitigating Implicit Bias in Large Language Models: A Cross-Domain Evaluation Framework." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/donkor2025aaai-exploring/) doi:10.1609/aaai.v39i28.35329

BibTeX

@inproceedings{donkor2025aaai-exploring,
  title     = {{Exploring and Mitigating Implicit Bias in Large Language Models: A Cross-Domain Evaluation Framework}},
  author    = {Donkor, Precious},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29573--29575},
  doi       = {10.1609/aaai.v39i28.35329},
  url       = {https://mlanthology.org/aaai/2025/donkor2025aaai-exploring/}
}