Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension Using Large Language Models
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities for medical question answering and programming, but their potential for generating interpretable computable phenotypes (CPs) is under-explored. In this work, we investigate whether LLMs can generate accurate and concise CPs for six clinical phenotypes of varying complexity, which could be leveraged to enable scalable clinical decision support to improve care for patients with hypertension. In addition to evaluating zero-shot performance, we propose and test a synthesize, execute, debug, instruct strategy that uses LLMs to generate and iteratively refine CPs using data-driven feedback. Our results show that LLMs, coupled with iterative learning, can generate interpretable and reasonably accurate programs that approach the performance of state-of-the-art ML methods while requiring significantly fewer training examples.
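The synthesize, execute, debug, instruct loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a stub standing in for a real LLM API, and the patient records, field names, and candidate phenotype are all hypothetical toy examples.

```python
def call_llm(prompt):
    # Stub: a real system would query an LLM here. We return a fixed
    # candidate computable phenotype as Python source code.
    return (
        "def phenotype(pt):\n"
        "    return pt['sbp'] >= 140 and pt['n_meds'] >= 3\n"
    )

def execute(cp_source, patients):
    """Run a candidate CP on labeled patients; capture runtime errors
    so the debug step can feed them back to the LLM."""
    env = {}
    try:
        exec(cp_source, env)
        return [bool(env["phenotype"](pt)) for pt in patients], None
    except Exception as err:
        return None, repr(err)

def refine(patients, labels, max_iters=3):
    prompt = ("Write a Python predicate `phenotype(pt)` for "
              "treatment-resistant hypertension.")
    best_source, best_acc = None, -1.0
    for _ in range(max_iters):
        source = call_llm(prompt)                    # synthesize
        preds, err = execute(source, patients)       # execute
        if err is not None:                          # debug
            prompt += f"\nYour last program raised: {err}. Fix it."
            continue
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_source, best_acc = source, acc
        # instruct: append data-driven feedback to the next prompt
        prompt += f"\nYour last program scored accuracy {acc:.2f}. Improve it."
    return best_source, best_acc

# Toy labeled examples (fields are illustrative, not the study's schema)
patients = [{"sbp": 150, "n_meds": 4}, {"sbp": 120, "n_meds": 1}]
labels = [True, False]
source, acc = refine(patients, labels)
print(acc)  # → 1.0 on this toy data
```

In the paper's setting, the feedback would summarize misclassified cases from real training data rather than a single accuracy score, but the control flow of the four-step loop is the same.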
Cite
Text

Aldeia et al. "Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension Using Large Language Models." Proceedings of the 10th Machine Learning for Healthcare Conference, 2025.

Markdown

[Aldeia et al. "Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension Using Large Language Models." Proceedings of the 10th Machine Learning for Healthcare Conference, 2025.](https://mlanthology.org/mlhc/2025/aldeia2025mlhc-iterative/)

BibTeX
@inproceedings{aldeia2025mlhc-iterative,
  title = {{Iterative Learning of Computable Phenotypes for Treatment Resistant Hypertension Using Large Language Models}},
  author = {Aldeia, Guilherme Seidyo Imai and Herman, Daniel S and La Cava, William},
  booktitle = {Proceedings of the 10th Machine Learning for Healthcare Conference},
  year = {2025},
  volume = {298},
  url = {https://mlanthology.org/mlhc/2025/aldeia2025mlhc-iterative/}
}