Learning Safety Constraints for Large Language Models

Abstract

Large language models (LLMs) have emerged as powerful tools but pose significant safety risks through harmful outputs and vulnerability to adversarial attacks. We propose SaP (short for Safety Polytope), a geometric approach to LLM safety that learns and enforces multiple safety constraints directly in the model's representation space. We develop a framework that identifies safe and unsafe regions via the polytope's facets, enabling both detection and correction of unsafe outputs through geometric steering. Unlike existing approaches that modify model weights, SaP operates post-hoc in the representation space, preserving model capabilities while enforcing safety constraints. Experiments across multiple LLMs demonstrate that our method effectively detects unethical inputs and reduces adversarial attack success rates while maintaining performance on standard tasks, highlighting the importance of an explicit geometric model for safety. Analysis of the learned polytope facets reveals the emergence of specialization in detecting different semantic notions of safety, providing interpretable insights into how safety is captured in LLMs' representation space.
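
The abstract describes two operations on a learned polytope in representation space: checking which facets a hidden state violates (detection) and moving the state back inside the polytope (geometric steering). The sketch below is a minimal illustration of that idea only, not the paper's implementation; the `SafetyPolytope` class, the facet parameters `W` and `b`, and the cyclic-projection `steer` routine are all hypothetical placeholders chosen for clarity.

```python
import numpy as np

# Minimal sketch (not the authors' code): a polytope over hidden states h,
# defined by hypothetical facets (W, b), where "safe" means W @ h <= b
# holds component-wise.
class SafetyPolytope:
    def __init__(self, W: np.ndarray, b: np.ndarray):
        self.W = W  # shape (num_facets, hidden_dim), assumed learned offline
        self.b = b  # shape (num_facets,)

    def violations(self, h: np.ndarray) -> np.ndarray:
        # Positive entries mark facets whose half-space constraint h violates.
        return self.W @ h - self.b

    def is_safe(self, h: np.ndarray, tol: float = 1e-8) -> bool:
        return bool(np.all(self.violations(h) <= tol))

    def steer(self, h: np.ndarray, max_passes: int = 100) -> np.ndarray:
        # Geometric steering by cyclic projection onto violated facets
        # (a generic POCS-style heuristic, used here only for illustration).
        h = h.astype(float).copy()
        for _ in range(max_passes):
            if self.is_safe(h):
                break
            for i in range(len(self.b)):
                w = self.W[i]
                excess = w @ h - self.b[i]
                if excess > 0:
                    h -= (excess / (w @ w)) * w  # project onto facet i
        return h

# Usage with random placeholder facets and a random "hidden state".
rng = np.random.default_rng(0)
poly = SafetyPolytope(W=rng.normal(size=(8, 64)), b=rng.normal(size=8))
h = rng.normal(size=64)
print("safe before steering:", poly.is_safe(h))
print("safe after steering: ", poly.is_safe(poly.steer(h)))
```

Because steering only moves the representation (here by projecting onto violated half-spaces) rather than editing weights, the base model is left untouched, which is the post-hoc property the abstract emphasizes.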

Cite

Text

Chen et al. "Learning Safety Constraints for Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Chen et al. "Learning Safety Constraints for Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/chen2025icml-learning-a/)

BibTeX

@inproceedings{chen2025icml-learning-a,
  title     = {{Learning Safety Constraints for Large Language Models}},
  author    = {Chen, Xin and As, Yarden and Krause, Andreas},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {7664--7685},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/chen2025icml-learning-a/}
}