BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models
Abstract
Large Language Models (LLMs) like ChatGPT and GPT-4 are versatile and capable of addressing open-domain question-answering (QA) tasks effectively. However, general LLMs, which are trained on open-domain data, may lack the domain-specific knowledge essential for tasks in vertical domains such as law and medicine. To address this issue, previous approaches either conduct continuous pre-training with domain-specific data or employ retrieval augmentation to support general LLMs in handling QA tasks. Unfortunately, these strategies are either cost-intensive or unreliable in practical applications. To address these limitations, we present a novel framework named BLADE, which enhances Black-box LArge language models with small Domain-spEcific models. BLADE consists of a black-box LLM and a small domain-specific LM. The small LM preserves domain-specific knowledge and offers specialized insights, while the general LLM contributes robust language comprehension and reasoning capabilities. Specifically, our method involves three steps: 1) pre-training the small LM with domain-specific data, 2) fine-tuning this model using knowledge instruction data, and 3) joint Bayesian optimization of the general LLM and the small LM. In our experiments, we verify the effectiveness of BLADE on diverse LLMs and datasets across different domains. These results show the potential of BLADE as an effective and cost-efficient solution for adapting general LLMs to vertical domains.
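The interaction pattern described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two model functions are hypothetical stubs standing in for the fine-tuned small LM and a black-box LLM API, and the prompt template is an assumption about how the small LM's knowledge output is passed to the LLM.

```python
# Hedged sketch of the BLADE pattern: a small domain-specific LM produces
# a knowledge passage, which is injected into the prompt sent to a
# black-box LLM. All function bodies below are illustrative placeholders.

def small_lm_generate_knowledge(question: str) -> str:
    # Placeholder for the pre-trained and instruction-tuned small domain LM.
    return f"[domain knowledge relevant to: {question}]"

def black_box_llm(prompt: str) -> str:
    # Placeholder for an API call to a black-box LLM (e.g. GPT-4).
    return f"[answer derived from a prompt of {len(prompt)} characters]"

def blade_answer(question: str) -> str:
    # Step 1: query the small LM for domain-specific knowledge.
    knowledge = small_lm_generate_knowledge(question)
    # Step 2: compose a knowledge-augmented prompt for the general LLM.
    prompt = (
        "Use the following domain knowledge to answer the question.\n"
        f"Knowledge: {knowledge}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    # Step 3: the black-box LLM reasons over the injected knowledge.
    return black_box_llm(prompt)

print(blade_answer("What is the statute of limitations for contract claims?"))
```

The joint Bayesian optimization step from the paper, which aligns the two models, is omitted here; this sketch only shows the inference-time division of labor between the two components.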
Cite
Text
Li et al. "BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34620
Markdown
[Li et al. "BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/li2025aaai-blade/) doi:10.1609/AAAI.V39I23.34620
BibTeX
@inproceedings{li2025aaai-blade,
  title = {{BLADE: Enhancing Black-Box Large Language Models with Small Domain-Specific Models}},
  author = {Li, Haitao and Ai, Qingyao and Chen, Jia and Dong, Qian and Wu, Zhijing and Liu, Yiqun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year = {2025},
  pages = {24422--24430},
  doi = {10.1609/AAAI.V39I23.34620},
  url = {https://mlanthology.org/aaai/2025/li2025aaai-blade/}
}