SEAL: Safety-Enhanced Aligned LLM Fine-Tuning via Bilevel Data Selection
Abstract
Fine-tuning on task-specific data to boost downstream performance is a crucial step for leveraging Large Language Models (LLMs). However, although fine-tuning enhances model performance for specialized applications, previous studies have demonstrated that fine-tuning on a few adversarial samples, or even on benign data, can greatly compromise the model's pre-equipped alignment and safety capabilities. In this work, we propose SEAL, a novel framework to enhance safety in LLM fine-tuning. SEAL learns a data ranker based on bilevel optimization to up-rank safe, high-quality fine-tuning data and down-rank unsafe or low-quality data. Models trained with SEAL demonstrate superior quality over multiple baselines, with win rate increases of 8.5% and 9.7% over random selection on Llama-3-8b-Instruct and Merlinite-7b, respectively. Our code is available on GitHub at https://github.com/hanshen95/SEAL.
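For intuition, bilevel data selection of this kind can be sketched as the following nested problem (a generic formulation inferred from the abstract, not necessarily the paper's exact notation; the ranker weights $w$, gating function $\sigma$, per-example fine-tuning loss $\ell_i$, and safety/alignment loss $f_{\mathrm{safe}}$ are illustrative placeholders):

\[
\min_{w} \; f_{\mathrm{safe}}\big(\theta^{*}(w)\big)
\quad \text{s.t.} \quad
\theta^{*}(w) \in \arg\min_{\theta} \; \sum_{i=1}^{N} \sigma(w_i)\, \ell_i(\theta)
\]

Here the lower level fine-tunes the model parameters $\theta$ on training examples weighted by the ranker, while the upper level adjusts the ranker weights $w$ so that the resulting fine-tuned model minimizes a safety objective; examples receiving low weights $\sigma(w_i)$ are effectively down-ranked and contribute little to fine-tuning.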
Cite
Text
Shen et al. "SEAL: Safety-Enhanced Aligned LLM Fine-Tuning via Bilevel Data Selection." International Conference on Learning Representations, 2025.

Markdown
[Shen et al. "SEAL: Safety-Enhanced Aligned LLM Fine-Tuning via Bilevel Data Selection." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/shen2025iclr-seal/)

BibTeX
@inproceedings{shen2025iclr-seal,
title = {{SEAL: Safety-Enhanced Aligned LLM Fine-Tuning via Bilevel Data Selection}},
author = {Shen, Han and Chen, Pin-Yu and Das, Payel and Chen, Tianyi},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/shen2025iclr-seal/}
}