PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications

Abstract

Developing intelligent pediatric consultation systems offers promising prospects for improving diagnostic efficiency, especially in China, where healthcare resources are scarce. Despite recent advances in Large Language Models (LLMs) for Chinese medicine, their performance is sub-optimal in pediatric applications due to inadequate instruction data and vulnerable training procedures. To address these issues, this paper builds PedCorpus, a high-quality dataset of over 300,000 multi-task instructions derived from pediatric textbooks, guidelines, and knowledge graph resources to fulfil diverse diagnostic demands. Building on the well-designed PedCorpus, we propose PediatricsGPT, the first Chinese pediatric LLM assistant built on a systematic and robust training pipeline. In the continuous pre-training phase, we introduce a hybrid instruction pre-training mechanism to mitigate the internal-injected knowledge inconsistency of LLMs for medical domain adaptation. Subsequently, full-parameter Supervised Fine-Tuning (SFT) is utilized to incorporate the general medical knowledge schema into the models. After that, we devise a direct following preference optimization to enhance the generation of pediatrician-like humanistic responses. In the parameter-efficient secondary SFT phase, a mixture of universal-specific experts strategy is presented to resolve the competency conflict between medical generalist capability and pediatric expertise mastery. Extensive results from metric-based, GPT-4, and doctor evaluations on distinct downstream tasks show that PediatricsGPT consistently outperforms previous Chinese medical LLMs. The project and data will be released at https://github.com/ydk122024/PediatricsGPT.
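The "mixture of universal-specific experts" idea in the secondary SFT phase can be illustrated with a minimal sketch: a gate blends the outputs of a general-medicine expert and a pediatric expert depending on the task. All names, gate values, and the toy "adapter" structure below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class Expert:
    """Toy 'adapter' expert: an affine transform of the hidden vector."""
    def __init__(self, scale, shift):
        self.scale, self.shift = scale, shift

    def __call__(self, h):
        return [self.scale * x + self.shift for x in h]

class UniversalSpecificMoE:
    """Blend a universal (general-medicine) expert with a pediatric expert
    via a task-conditioned gate, so the two competencies do not conflict."""
    def __init__(self):
        self.universal = Expert(scale=1.0, shift=0.0)
        self.pediatric = Expert(scale=1.5, shift=0.1)
        # Hypothetical gate logits per task type: [universal, pediatric].
        self.gate_logits = {"general": [2.0, 0.0], "pediatric": [0.0, 2.0]}

    def forward(self, h, task):
        w_u, w_p = softmax(self.gate_logits[task])
        out_u, out_p = self.universal(h), self.pediatric(h)
        # Weighted combination of expert outputs.
        return [w_u * u + w_p * p for u, p in zip(out_u, out_p)]

moe = UniversalSpecificMoE()
hidden = [0.2, -0.1, 0.5]
print(moe.forward(hidden, "pediatric"))
```

In production systems this routing is typically applied to low-rank adapter branches inside each transformer layer rather than to the final hidden state, but the gating principle is the same.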

Cite

Text

Yang et al. "PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications." Neural Information Processing Systems, 2024. doi:10.52202/079017-4398

Markdown

[Yang et al. "PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/yang2024neurips-pediatricsgpt/) doi:10.52202/079017-4398

BibTeX

@inproceedings{yang2024neurips-pediatricsgpt,
  title     = {{PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications}},
  author    = {Yang, Dingkang and Wei, Jinjie and Xiao, Dongling and Wang, Shunli and Wu, Tong and Li, Gang and Li, Mingcheng and Wang, Shuaibing and Chen, Jiawei and Jiang, Yue and Xu, Qingyao and Li, Ke and Zhai, Peng and Zhang, Lihua},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4398},
  url       = {https://mlanthology.org/neurips/2024/yang2024neurips-pediatricsgpt/}
}