Pinto, Francesco

23 publications

NeurIPS 2025 · AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration · Andy Zhou, Kevin Wu, Francesco Pinto, Zhaorun Chen, Yi Zeng, Yu Yang, Shuang Yang, Sanmi Koyejo, James Zou, Bo Li
ICLR 2025 · Copyright-Protected Language Generation via Adaptive Model Fusion · Javier Abad, Konstantin Donhauser, Francesco Pinto, Fanny Yang
ICML 2025 · Focus on This, Not That! Steering LLMs with Adaptive Feature Specification · Tom A. Lamb, Adam Davies, Alasdair Paren, Philip Torr, Francesco Pinto
ICLRW 2025 · Focus on This, Not That! Steering LLMs with Adaptive Feature Specification · Tom A. Lamb, Adam Davies, Alasdair Paren, Philip Torr, Francesco Pinto
ICLR 2025 · MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models · Chejian Xu, Jiawei Zhang, Zhaorun Chen, Chulin Xie, Mintong Kang, Yujin Potter, Zhun Wang, Zhuowen Yuan, Alexander Xiong, Zidi Xiong, Chenhui Zhang, Lingzhi Yuan, Yi Zeng, Peiyang Xu, Chengquan Guo, Andy Zhou, Jeffrey Ziwei Tan, Xuandong Zhao, Francesco Pinto, Zhen Xiang, Yu Gai, Zinan Lin, Dan Hendrycks, Bo Li, Dawn Song
ICLR 2025 · SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations · Zhaorun Chen, Francesco Pinto, Minzhou Pan, Bo Li
ICLRW 2025 · SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations · Zhaorun Chen, Francesco Pinto, Minzhou Pan, Shuang Yang, Bo Li
ICLR 2025 · Towards Certification of Uncertainty Calibration Under Adversarial Attacks · Cornelius Emde, Francesco Pinto, Thomas Lukasiewicz, Philip Torr, Adel Bibi
NeurIPS 2025 · VMDT: Decoding the Trustworthiness of Video Foundation Models · Yujin Potter, Zhun Wang, Nicholas Crispino, Kyle Montgomery, Alexander Xiong, Ethan Y. Chang, Francesco Pinto, Yuqi Chen, Rahul Gupta, Morteza Ziyadi, Christos Christodoulopoulos, Bo Li, Chenguang Wang, Dawn Song
NeurIPSW 2024 · A Cautionary Tale on the Evaluation of Differentially Private In-Context Learning · Anjun Hu, Jiyang Guan, Philip Torr, Francesco Pinto
ICML 2024 · Extracting Training Data from Document-Based VQA Models · Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari
ICMLW 2024 · Extracting Training Data from Document-Based VQA Models · Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari
NeurIPS 2024 · Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models · Arshia Hemmat, Adam Davies, Tom A. Lamb, Jianhao Yuan, Philip Torr, Ashkan Khakzar, Francesco Pinto
ICML 2024 · Not Just Pretty Pictures: Toward Interventional Data Augmentation Using Text-to-Image Generators · Jianhao Yuan, Francesco Pinto, Adam Davies, Philip Torr
ICMLW 2024 · Not Just Pretty Pictures: Toward Interventional Data Augmentation Using Text-to-Image Generators · Jianhao Yuan, Francesco Pinto, Adam Davies, Philip Torr
ICMLW 2024 · Strong Copyright Protection for Language Models via Adaptive Model Fusion · Javier Abad, Konstantin Donhauser, Francesco Pinto, Fanny Yang
ICMLW 2023 · Certified Calibration: Bounding Worst-Case Calibration Under Adversarial Attacks · Cornelius Emde, Francesco Pinto, Thomas Lukasiewicz, Philip Torr, Adel Bibi
ICLRW 2023 · How to Make Semi-Private Learning Effective · Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal
AAAI 2023 · Sample-Dependent Adaptive Temperature Scaling for Improved Calibration · Tom Joy, Francesco Pinto, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania
ECCV 2022 · An Impartial Take to the CNN vs Transformer Robustness Contest · Francesco Pinto, Philip H. S. Torr, Puneet K. Dokania
NeurIPS 2022 · Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness · Francesco Pinto, Harry Yang, Ser Nam Lim, Philip Torr, Puneet Dokania
NeurIPSW 2021 · Are Vision Transformers Always More Robust than Convolutional Neural Networks? · Francesco Pinto, Philip Torr, Puneet K. Dokania
NeurIPSW 2021 · Mix-MaxEnt: Improving Accuracy and Uncertainty Estimates of Deterministic Neural Networks · Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip Torr, Puneet K. Dokania