Multi-OphthaLingua: A Multilingual Benchmark for Assessing and Debiasing LLM Ophthalmological QA in LMICs

Abstract

Current ophthalmology clinical workflows are plagued by over-referrals, long waits, and complex and heterogeneous medical records. Large language models (LLMs) present a promising solution to automate various procedures such as triaging, preliminary tests like visual acuity assessment, and report summarization. However, LLMs have demonstrated significantly varied performance across different languages in natural language question-answering tasks, potentially exacerbating healthcare disparities in Low and Middle-Income Countries (LMICs). This study introduces the first multilingual ophthalmological question-answering benchmark with manually curated questions parallel across languages, allowing for direct cross-lingual comparisons. Our evaluation of 6 popular LLMs across 7 different languages reveals substantial bias across languages, highlighting risks for clinical deployment of LLMs in LMICs. Existing debiasing methods such as Translation Chain-of-Thought or retrieval-augmented generation (RAG) by themselves fall short of closing this performance gap, often failing to improve performance across all languages and lacking specificity for the medical domain. To address this issue, we propose CLARA (Cross-Lingual Reflective Agentic system), a novel inference-time debiasing method leveraging retrieval-augmented generation and self-verification. Our approach not only improves performance across all languages but also significantly reduces the multilingual bias gap, facilitating equitable LLM application across the globe.
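The abstract describes CLARA only at a high level. For intuition, below is a minimal Python sketch of an inference-time loop that combines retrieval-augmented generation with self-verification. Everything in it (the `call_llm` callable, the toy lexical retriever, the prompt wording, `max_rounds`) is an illustrative assumption for exposition, not the authors' published implementation.

```python
# Hypothetical sketch of an inference-time RAG + self-verification loop,
# in the spirit of the method described in the abstract. Names and prompts
# are assumptions, not the paper's actual code.

from typing import Callable, List


def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]


def answer_with_self_verification(
    question: str,
    corpus: List[str],
    call_llm: Callable[[str], str],  # any LLM wrapper: prompt in, text out
    max_rounds: int = 2,
) -> str:
    """Draft an answer grounded in retrieved evidence, then ask the model
    to check its own draft against that evidence and revise if needed."""
    evidence = "\n".join(retrieve(question, corpus))
    draft = call_llm(
        f"Using only this evidence:\n{evidence}\n\n"
        f"Answer the ophthalmology question:\n{question}"
    )
    for _ in range(max_rounds):
        verdict = call_llm(
            f"Evidence:\n{evidence}\n\nQuestion: {question}\n"
            f"Draft answer: {draft}\n"
            "Is the draft fully supported by the evidence? "
            "Reply 'OK' or give a corrected answer."
        )
        if verdict.strip().upper() == "OK":
            break
        draft = verdict  # accept the model's self-correction and re-check
    return draft
```

In a real cross-lingual deployment, the toy retriever would be replaced by a multilingual medical retriever and the verification prompt would be language-aware; this sketch only illustrates the retrieve-draft-verify control flow.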

Cite

Text

Restrepo et al. "Multi-OphthaLingua: A Multilingual Benchmark for Assessing and Debiasing LLM Ophthalmological QA in LMICs." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I27.35053

Markdown

[Restrepo et al. "Multi-OphthaLingua: A Multilingual Benchmark for Assessing and Debiasing LLM Ophthalmological QA in LMICs." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/restrepo2025aaai-multi/) doi:10.1609/AAAI.V39I27.35053

BibTeX

@inproceedings{restrepo2025aaai-multi,
  title     = {{Multi-OphthaLingua: A Multilingual Benchmark for Assessing and Debiasing LLM Ophthalmological QA in LMICs}},
  author    = {Restrepo, David S. and Wu, Chenwei and Tang, Zhengxu and Shuai, Zitao and Phan, Thao Nguyen Minh and Ding, Jun-En and Dao, Cong-Tinh and Gallifant, Jack and Dychiao, Robyn Gayle and Artiaga, Jose Carlo and Bando, André Hiroshi and Gracitelli, Carolina Pelegrini Barbosa and Ferrer, Vincenz and Celi, Leo Anthony and Bitterman, Danielle S. and Morley, Michael G. and Nakayama, Luis Filipe},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {28321--28330},
  doi       = {10.1609/AAAI.V39I27.35053},
  url       = {https://mlanthology.org/aaai/2025/restrepo2025aaai-multi/}
}