Ma, Xingjun

49 publications

AAAI 2025. AIM: Additional Image Guided Generation of Transferable Adversarial Attacks. Teng Li, Xingjun Ma, Yu-Gang Jiang.
CVPR 2025. AnyAttack: Towards Large-Scale Self-Supervised Adversarial Attacks on Vision-Language Models. Jiaming Zhang, Junhong Ye, Xingjun Ma, Yige Li, Yunfan Yang, Yunhao Chen, Jitao Sang, Dit-Yan Yeung.
NeurIPS 2025. BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models. Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun.
ICLR 2025. BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks. Yunhan Zhao, Xiang Zheng, Lin Luo, Yige Li, Xingjun Ma, Yu-Gang Jiang.
AAAI 2025. CALM: Curiosity-Driven Auditing for Large Language Models. Xiang Zheng, Longxiang Wang, Yi Liu, Xingjun Ma, Chao Shen, Cong Wang.
ICLR 2025. Detecting Backdoor Samples in Contrastive Language Image Pretraining. Hanxun Huang, Sarah Monazam Erfani, Yige Li, Xingjun Ma, James Bailey.
ICCV 2025. Free-Form Motion Control: Controlling the 6D Poses of Camera and Objects in Video Generation. Xincheng Shuai, Henghui Ding, Zhenyuan Qin, Hao Luo, Xingjun Ma, Dacheng Tao.
AAAI 2025. HoneypotNet: Backdoor Attacks Against Model Extraction. Yixu Wang, Tianle Gu, Yan Teng, Yingchun Wang, Xingjun Ma.
ICCV 2025. IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves. Ruofan Wang, Juncheng Li, Yixu Wang, Bo Wang, Xiaosen Wang, Yan Teng, Yingchun Wang, Xingjun Ma, Yu-Gang Jiang.
NeurIPS 2025. JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models. Jiaxin Song, Yixu Wang, Jie Li, Xuan Tong, Rui Yu, Yan Teng, Xingjun Ma, Yingchun Wang.
NeurIPS 2025. OmniSVG: A Unified Scalable Vector Graphics Generation Model. Yiying Yang, Wei Cheng, Sijin Chen, Xianfang Zeng, Fukun Yin, Jiaxu Zhang, Liao Wang, Gang Yu, Xingjun Ma, Yu-Gang Jiang.
NeurIPS 2025. SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models. Ye Sun, Hao Zhang, Henghui Ding, Tiehua Zhang, Xingjun Ma, Yu-Gang Jiang.
NeurIPS 2025. SafeVid: Toward Safety Aligned Video Large Multimodal Models. Yixu Wang, Jiaxin Song, Yifeng Gao, Xin Wang, Yang Yao, Yan Teng, Xingjun Ma, Yingchun Wang, Yu-Gang Jiang.
ICCV 2025. StolenLoRA: Exploring LoRA Extraction Attacks via Synthetic Data. Yixu Wang, Yan Teng, Yingchun Wang, Xingjun Ma.
CVPR 2025. TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models. Xin Wang, Kai Chen, Jiaming Zhang, Jingjing Chen, Xingjun Ma.
CVPR 2025. Towards Million-Scale Adversarial Robustness Evaluation with Stronger Individual Attacks. Yong Xie, Weijie Zheng, Hanxun Huang, Guangnan Ye, Xingjun Ma.
ICML 2025. X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP. Hanxun Huang, Sarah Monazam Erfani, Yige Li, Xingjun Ma, James Bailey.
ECCV 2024. Adversarial Prompt Tuning for Vision-Language Models. Jiaming Zhang, Xingjun Ma, Xin Wang, Lingyu Qiu, Jiaqi Wang, Yu-Gang Jiang, Jitao Sang.
IJCAI 2024. Constrained Intrinsic Motivation for Reinforcement Learning. Xiang Zheng, Xingjun Ma, Chao Shen, Cong Wang.
MLJ 2024. Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness. Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang.
ICLR 2024. LDReg: Local Dimensionality Regularized Self-Supervised Learning. Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey.
NeurIPS 2024. UnSeg: One Universal Unlearnable Example Generator Is Enough Against All Image Segmentation. Ye Sun, Hao Zhang, Tiehua Zhang, Xingjun Ma, Yu-Gang Jiang.
ICLR 2023. Distilling Cognitive Backdoor Patterns Within an Image. Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey.
ICML 2023. Reconstructive Neuron Pruning for Backdoor Defense. Yige Li, Xixiang Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Bo Li, Yu-Gang Jiang.
ICLR 2023. Transferable Unlearnable Examples. Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang.
CVPR 2023. Unlearnable Clusters: Towards Label-Agnostic Unlearnable Examples. Jiaming Zhang, Xingjun Ma, Qi Yi, Jitao Sang, Yu-Gang Jiang, Yaowei Wang, Changsheng Xu.
NeurIPS 2022. CalFAT: Calibrated Federated Adversarial Training with Label Skewness. Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu.
ICLR 2022. Few-Shot Backdoor Attacks on Visual Object Tracking. Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia.
NeurIPS 2021. α-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression. Jiabo He, Sarah Erfani, Xingjun Ma, James Bailey, Ying Chi, Xian-Sheng Hua.
ICMLW 2021. Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions. Nodens Koren, Xingjun Ma, Qiuhong Ke, Yisen Wang, James Bailey.
NeurIPS 2021. Anti-Backdoor Learning: Training Clean Models on Poisoned Data. Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma.
NeurIPS 2021. Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. Hanxun Huang, Yisen Wang, Sarah Erfani, Quanquan Gu, James Bailey, Xingjun Ma.
NeurIPS 2021. Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning. Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low.
ICLR 2021. Improving Adversarial Robustness via Channel-Wise Activation Suppressing. Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang.
ICLR 2021. Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma.
IJCAI 2021. Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting. Ang Li, Qiuhong Ke, Xingjun Ma, Haiqin Weng, Zhiyuan Zong, Feng Xue, Rui Zhang.
ICCV 2021. Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better. Bojia Zi, Shihao Zhao, Xingjun Ma, Yu-Gang Jiang.
ICLR 2021. Unlearnable Examples: Making Personal Data Unexploitable. Hanxun Huang, Xingjun Ma, Sarah Monazam Erfani, James Bailey, Yisen Wang.
ICLR 2020. Improving Adversarial Robustness Requires Revisiting Misclassified Examples. Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu.
ICML 2020. Normalized Loss Functions for Deep Learning with Noisy Labels. Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey.
ECCV 2020. Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks. Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu.
ECCV 2020. Short-Term and Long-Term Context Aggregation Network for Video Inpainting. Ang Li, Shanshan Zhao, Xingjun Ma, Mingming Gong, Jianzhong Qi, Rui Zhang, Dacheng Tao, Ramamohanarao Kotagiri.
ICLR 2020. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma.
IJCAI 2019. Generative Image Inpainting with Submanifold Alignment. Ang Li, Jianzhong Qi, Rui Zhang, Xingjun Ma, Kotagiri Ramamohanarao.
ICML 2019. On the Convergence and Robustness of Adversarial Training. Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu.
ICLR 2018. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality. Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey.
ICML 2018. Dimensionality-Driven Learning with Noisy Labels. Xingjun Ma, Yisen Wang, Michael E. Houle, Shuo Zhou, Sarah Erfani, Shutao Xia, Sudanthi Wijewickrema, James Bailey.
IJCAI 2017. Adversarial Generation of Real-Time Feedback with Neural Networks for Simulation-Based Training. Xingjun Ma, Sudanthi N. R. Wijewickrema, Shuo Zhou, Yun Zhou, Zakaria Mhammedi, Stephen J. O'Leary, James Bailey.
AAAI 2017. Unbiased Multivariate Correlation Analysis. Yisen Wang, Simone Romano, Vinh Nguyen, James Bailey, Xingjun Ma, Shu-Tao Xia.