PROSAC: Provably Safe Certification for Machine Learning Models Under Adversarial Attacks

Cite

Text

Feng et al. "PROSAC: Provably Safe Certification for Machine Learning Models Under Adversarial Attacks." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I3.32300

Markdown

[Feng et al. "PROSAC: Provably Safe Certification for Machine Learning Models Under Adversarial Attacks." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/feng2025aaai-prosac/) doi:10.1609/AAAI.V39I3.32300

BibTeX

@inproceedings{feng2025aaai-prosac,
  title     = {{PROSAC: Provably Safe Certification for Machine Learning Models Under Adversarial Attacks}},
  author    = {Feng, Chen and Liu, Ziquan and Zhi, Zhuo and Bogunovic, Ilija and Gerner-Beuerle, Carsten and Rodrigues, Miguel},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {2933--2941},
  doi       = {10.1609/AAAI.V39I3.32300},
  url       = {https://mlanthology.org/aaai/2025/feng2025aaai-prosac/}
}