Obadinma, Stephen

2 publications

NeurIPS 2025. On the Robustness of Verbal Confidence of LLMs in Adversarial Attacks. Stephen Obadinma, Xiaodan Zhu.
TMLR 2024. Calibration Attacks: A Comprehensive Study of Adversarial Attacks on Model Confidence. Stephen Obadinma, Xiaodan Zhu, Hongyu Guo.