Investigating and Mitigating Undesirable Biases in Large Language Models

Abstract

The rise of large language models (LLMs) has revolutionized natural language processing, offering immense capabilities across various applications. The widespread integration of these models into everyday technology has raised deep concerns about the biases they encode, which can perpetuate negative preconceptions and social injustices. The scope of my research includes social biases, brand biases, the impact of personas on bias, and stereotypes in low-resource languages. My contributions aim to deepen our understanding of these biases and develop methodologies to mitigate them, enhancing the fairness and utility of LLMs across diverse global applications.

Cite

Text

Mahammed Kamruzzaman. "Investigating and Mitigating Undesirable Biases in Large Language Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35214

Markdown

[Mahammed Kamruzzaman. "Investigating and Mitigating Undesirable Biases in Large Language Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/kamruzzaman2025aaai-investigating/) doi:10.1609/AAAI.V39I28.35214

BibTeX

@inproceedings{kamruzzaman2025aaai-investigating,
  title     = {{Investigating and Mitigating Undesirable Biases in Large Language Models}},
  author    = {Kamruzzaman, Mahammed},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29273-29274},
  doi       = {10.1609/AAAI.V39I28.35214},
  url       = {https://mlanthology.org/aaai/2025/kamruzzaman2025aaai-investigating/}
}