Biggio, Battista

21 publications

ICLR 2025 $\sigma$-Zero: Gradient-Based Optimization of $\ell_0$-Norm Adversarial Examples. Antonio Emanuele Cinà, Francesco Villani, Maura Pintor, Lea Schönherr, Battista Biggio, Marcello Pelillo
AAAI 2025 AttackBench: Evaluating Gradient-Based Attacks for Adversarial Examples. Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Ismail Ben Ayed, Fabio Roli
NeurIPS 2025 TransferBench: Benchmarking Ensemble-Based Black-Box Transfer Attacks. Fabio Brau, Maura Pintor, Antonio Emanuele Cinà, Raffaele Mura, Luca Scionis, Luca Oneto, Fabio Roli, Battista Biggio
ICMLW 2024 BUILD: Buffer-Free Incremental Learning with OOD Detection for the Wild. Srishti Gupta, Daniele Angioni, Lea Schönherr, Ambra Demontis, Battista Biggio
AAAI 2024 When Your AI Becomes a Target: AI Security Incidents and Best Practices. Kathrin Grosse, Lukas Bieringer, Tarek R. Besold, Battista Biggio, Alexandre Alahi
ICCVW 2023 Adversarial Attacks Against Uncertainty Quantification. Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
WACV 2023 Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors. Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai
ICMLW 2022 ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness Against Adversarial Patches. Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
NeurIPS 2022 Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
IJCAI 2022 Tessellation-Filtering ReLU Neural Networks. Bernhard Alois Moser, Michal Lewandowski, Somayeh Kargaran, Werner Zellinger, Battista Biggio, Christoph Koutschan
ICMLW 2021 Adversarial EXEmples: Functionality-Preserving Optimization of Adversarial Windows Malware. Luca Demetrio, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli
NeurIPS 2021 Fast Minimum-Norm Adversarial Attacks Through Adaptive Norm Constraints. Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
ICMLW 2021 Fast Minimum-Norm Adversarial Attacks Through Adaptive Norm Constraints. Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
ICMLW 2021 Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
ECML-PKDD 2020 Poisoning Attacks on Algorithmic Fairness. David Solans, Battista Biggio, Carlos Castillo
ICCVW 2017 Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid. Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
ICML 2015 Is Feature Selection Secure Against Training Data Poisoning? Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
ECML-PKDD 2013 Evasion Attacks Against Machine Learning at Test Time. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, Fabio Roli
ICML 2012 Poisoning Attacks Against Support Vector Machines. Battista Biggio, Blaine Nelson, Pavel Laskov
ACML 2011 Microbagging Estimators: An Ensemble Approach to Distance-Weighted Classifiers. Blaine Nelson, Battista Biggio, Pavel Laskov
ACML 2011 Support Vector Machines Under Adversarial Label Noise. Battista Biggio, Blaine Nelson, Pavel Laskov