Fine-Tuning Language Models with Collaborative and Semantic Experts
Abstract
Recent advancements in large language models (LLMs) have broadened their application scope but revealed challenges in balancing capabilities across general knowledge, coding, and mathematics. To address this, we introduce a Collaborative and Semantic Experts (CoE) approach for supervised fine-tuning (SFT), which employs a two-phase training strategy. Initially, expert training fine-tunes the feed-forward network on specialized datasets, developing distinct experts in targeted domains. Subsequently, expert leveraging synthesizes these trained experts into a structured model with semantic guidance to activate specific experts, enhancing performance and interpretability. Evaluations on comprehensive benchmarks across MMLU, HumanEval, GSM8K, MT-Bench, and AlpacaEval confirm CoE's efficacy, demonstrating improved performance and expert collaboration in diverse tasks, significantly outperforming traditional SFT methods.
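The abstract outlines a two-phase recipe: first fine-tune separate feed-forward experts on domain-specific data, then combine them under semantic guidance that activates the right expert per input. The sketch below is only an illustration of that idea, not the paper's implementation; the module name `SemanticExpertFFN`, the pooled-prompt routing signal, and all shapes are assumptions for exposition.

```python
# Illustrative sketch (not the authors' code): a transformer FFN position holding
# several domain experts, with a semantic router choosing which expert to activate.
import torch
import torch.nn as nn


class SemanticExpertFFN(nn.Module):
    """Hypothetical layer: one FFN expert per domain (e.g. general, code, math),
    selected by a semantic routing signal rather than a token-level learned gate."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # Router maps a pooled semantic representation of the prompt to expert logits.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, hidden: torch.Tensor, semantic_repr: torch.Tensor) -> torch.Tensor:
        # Phase 1 ("expert training") would fine-tune each expert on its own
        # specialized dataset; phase 2 ("expert leveraging") would train the router
        # so the appropriate expert is activated for each input.
        expert_idx = self.router(semantic_repr).argmax(dim=-1)  # (batch,)
        out = torch.stack(
            [self.experts[int(i)](hidden[b]) for b, i in enumerate(expert_idx)]
        )
        return out


# Toy usage with made-up shapes.
layer = SemanticExpertFFN(d_model=64, d_ff=256)
hidden = torch.randn(2, 10, 64)            # (batch, seq, d_model)
semantic_repr = hidden.mean(dim=1)         # crude pooled prompt representation
print(layer(hidden, semantic_repr).shape)  # torch.Size([2, 10, 64])
```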
Cite
Text
Yang et al. "Fine-Tuning Language Models with Collaborative and Semantic Experts." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I24.34753
Markdown
[Yang et al. "Fine-Tuning Language Models with Collaborative and Semantic Experts." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/yang2025aaai-fine/) doi:10.1609/AAAI.V39I24.34753
BibTeX
@inproceedings{yang2025aaai-fine,
title = {{Fine-Tuning Language Models with Collaborative and Semantic Experts}},
author = {Yang, Jiaxi and Hui, Binyuan and Yang, Min and Yang, Jian and Zhang, Lei and Qu, Qiang and Lin, Junyang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
  pages = {25624--25632},
doi = {10.1609/AAAI.V39I24.34753},
url = {https://mlanthology.org/aaai/2025/yang2025aaai-fine/}
}