Raza, Shaina

4 publications

MLJ 2025. "Developing Safe and Responsible Large Language Model: Can We Balance Bias Reduction and Language Understanding?" Shaina Raza, Oluwanifemi Bamgbose, Shardul Ghuge, Fatemeh Tavakoli, Deepak John Reji, Syed Raza Bashir.

ICML 2025. "Position: Beyond Assistance – Reimagining LLMs as Ethical and Adaptive Co-Creators in Mental Health Care." Abeer Badawi, Md Tahmid Rahman Laskar, Jimmy Huang, Shaina Raza, Elham Dolatabadi.

NeurIPSW 2024. "Fact or Fiction? Can LLMs Be Reliable Annotators for Political Truths?" Veronica Chatrath, Marcelo Lotif, Shaina Raza.

NeurIPSW 2024. "Safe and Sound: Evaluating Language Models for Bias Mitigation and Understanding." Shaina Raza, Oluwanifemi Bamgbose, Shardul Ghuge, Deval Pandya.