Huang, Hanxun

11 publications

TMLR 2026. Semantic-Aware Adversarial Fine-Tuning for CLIP. Jiacheng Zhang, Jinhao Li, Hanxun Huang, Sarah Monazam Erfani, Benjamin I. P. Rubinstein, Feng Liu.
NeurIPS 2025. BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models. Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun.
ICLR 2025. Detecting Backdoor Samples in Contrastive Language Image Pretraining. Hanxun Huang, Sarah Monazam Erfani, Yige Li, Xingjun Ma, James Bailey.
CVPR 2025. Towards Million-Scale Adversarial Robustness Evaluation with Stronger Individual Attacks. Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma.
ICML 2025. X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP. Hanxun Huang, Sarah Monazam Erfani, Yige Li, Xingjun Ma, James Bailey.
MLJ 2024. Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness. Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang.
ICLR 2024. LDReg: Local Dimensionality Regularized Self-Supervised Learning. Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey.
ICLR 2023. Distilling Cognitive Backdoor Patterns Within an Image. Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey.
NeurIPS 2021. Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. Hanxun Huang, Yisen Wang, Sarah Erfani, Quanquan Gu, James Bailey, Xingjun Ma.
ICLR 2021. Unlearnable Examples: Making Personal Data Unexploitable. Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, Yisen Wang.
ICML 2020. Normalized Loss Functions for Deep Learning with Noisy Labels. Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey.