Constrain Alignment with Sparse Autoencoders
Abstract
The alignment of large language models (LLMs) with human preferences remains a key challenge. While post-training techniques like Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have achieved notable success, they often suffer from computational inefficiency and training instability. In this paper, we propose Feature-level constrained Preference Optimization (FPO), a novel method designed to simplify the alignment process while ensuring stability. FPO leverages pre-trained Sparse Autoencoders (SAEs) and introduces feature-level constraints, allowing for efficient, sparsity-enforced alignment. Our approach gains efficiency by using the sparse features activated by a well-trained sparse autoencoder, and preserves the quality of sequential KL divergence by using a feature-level offline reference. Experimental results on benchmark datasets demonstrate that FPO achieves an absolute improvement of over 5% in win rate at much lower computational cost than state-of-the-art baselines, making it a promising solution for efficient and controllable LLM alignment.
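To make the core idea concrete, the sketch below illustrates a feature-level constraint of the kind the abstract describes: a KL-style divergence computed on SAE feature activations against an offline reference, added to a DPO-style loss. This is a minimal illustration, not the authors' implementation; names such as `sae_encode`, `top_k`, and `lambda_feat` are assumptions for this example.

```python
import torch
import torch.nn.functional as F

# Minimal sketch (not the paper's code): a feature-level KL-style constraint
# on sparse autoencoder (SAE) activations, used alongside a preference loss.

def sae_encode(hidden, W_enc, b_enc, top_k=64):
    """Encode hidden states into sparse SAE features, keeping only the
    top-k activations per token (a common SAE sparsity scheme)."""
    acts = F.relu(hidden @ W_enc + b_enc)            # (batch, seq, n_features)
    vals, idx = acts.topk(top_k, dim=-1)
    return torch.zeros_like(acts).scatter_(-1, idx, vals)

def feature_level_constraint(policy_hidden, ref_feats, W_enc, b_enc, eps=1e-8):
    """KL-style divergence between the policy's SAE feature distribution and
    a precomputed (offline) reference feature distribution."""
    pol_feats = sae_encode(policy_hidden, W_enc, b_enc)
    # Normalize per-token feature activations into distributions.
    p = pol_feats / (pol_feats.sum(-1, keepdim=True) + eps)
    q = ref_feats / (ref_feats.sum(-1, keepdim=True) + eps)
    return (p * ((p + eps).log() - (q + eps).log())).sum(-1).mean()

def fpo_style_loss(dpo_loss, policy_hidden, ref_feats, W_enc, b_enc,
                   lambda_feat=0.1):
    """Illustrative combined objective: preference loss plus the
    feature-level constraint, weighted by a hypothetical lambda_feat."""
    return dpo_loss + lambda_feat * feature_level_constraint(
        policy_hidden, ref_feats, W_enc, b_enc)
```

Because the reference features can be encoded once and stored offline, the constraint only requires a single SAE encoding of the policy's hidden states per step, which is where the claimed efficiency over running a full reference model would come from.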
Cite
Text
Yin et al. "Constrain Alignment with Sparse Autoencoders." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Yin et al. "Constrain Alignment with Sparse Autoencoders." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/yin2025icml-constrain/)
BibTeX
@inproceedings{yin2025icml-constrain,
  title     = {{Constrain Alignment with Sparse Autoencoders}},
  author    = {Yin, Qingyu and Leong, Chak Tou and Zhang, Hongbo and Zhu, Minjun and Yan, Hanqi and Zhang, Qiang and He, Yulan and Li, Wenjie and Wang, Jun and Zhang, Yue and Yang, Linyi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {72349--72363},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/yin2025icml-constrain/}
}