Fredrikson, Matt

24 publications

ICLR 2025. AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents. Maksym Andriushchenko, Alexandra Souly, Mateusz Dziemian, Derek Duenas, Maxwell Lin, Justin Wang, Dan Hendrycks, Andy Zou, J. Zico Kolter, Matt Fredrikson, Yarin Gal, Xander Davies.
ICLR 2025. Aligned LLMs Are Not Aligned Browser Agents. Priyanshu Kumar, Elaine Lau, Saranya Vijayakumar, Tu Trinh, Elaine T. Chang, Vaughn Robinson, Shuyan Zhou, Matt Fredrikson, Sean M. Hendryx, Summer Yue, Zifan Wang.
NeurIPS 2025. Safety Pretraining: Toward the Next Generation of Safe AI. Pratyush Maini, Sachin Goyal, Dylan Sam, Alexander Robey, Yash Savani, Yiding Jiang, Andy Zou, Matt Fredrikson, Zachary Chase Lipton, J. Zico Kolter.
NeurIPS 2025. Security Challenges in AI Agent Deployment: Insights from a Large Scale Public Competition. Andy Zou, Maxwell Lin, Eliot Krzysztof Jones, Micha V. Nowak, Mateusz Dziemian, Nick Winter, Valent Nathanael, Ayla Croft, Xander Davies, Jai Patel, Robert Kirk, Yarin Gal, Dan Hendrycks, J. Zico Kolter, Matt Fredrikson.
ICLR 2024. A Recipe for Improved Certifiable Robustness. Kai Hu, Klas Leino, Zifan Wang, Matt Fredrikson.
NeurIPS 2024. Efficient LLM Jailbreak via Adaptive Dense-to-Sparse Constrained Optimization. Kai Hu, Weichen Yu, Yining Li, Tianjun Yao, Xiang Li, Wenhe Liu, Lijun Yu, Zhiqiang Shen, Kai Chen, Matt Fredrikson.
NeurIPS 2024. Improving Alignment and Robustness with Circuit Breakers. Andy Zou, Long Phan, Justin Wang, Derek Duenas, Maxwell Lin, Maksym Andriushchenko, Rowan Wang, J. Zico Kolter, Matt Fredrikson, Dan Hendrycks.
NeurIPSW 2024. Infecting LLM Agents via Generalizable Adversarial Attack. Weichen Yu, Kai Hu, Tianyu Pang, Chao Du, Min Lin, Matt Fredrikson.
NeurIPS 2023. Grounding Neural Inference with Satisfiability Modulo Theories. Zifan Wang, Saranya Vijayakumar, Kaiji Lu, Vijay Ganesh, Somesh Jha, Matt Fredrikson.
ICLR 2023. On the Perils of Cascading Robust Classifiers. Ravi Mangal, Zifan Wang, Chi Zhang, Klas Leino, Corina Pasareanu, Matt Fredrikson.
NeurIPS 2023. Unlocking Deterministic Robustness Certification on ImageNet. Kai Hu, Andy Zou, Zifan Wang, Klas Leino, Matt Fredrikson.
ICLR 2022. Consistent Counterfactuals for Deep Models. Emily Black, Zifan Wang, Matt Fredrikson.
TMLR 2022. Degradation Attacks on Certifiably Robust Neural Networks. Klas Leino, Chi Zhang, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Pasareanu.
ICML 2022. Robust Models Are More Interpretable Because Attributions Look Normal. Zifan Wang, Matt Fredrikson, Anupam Datta.
ICLR 2022. Selective Ensembles for Consistent Predictions. Emily Black, Klas Leino, Matt Fredrikson.
ICLR 2021. Fast Geometric Projections for Local Robustness Certification. Aymeric Fromherz, Klas Leino, Matt Fredrikson, Bryan Parno, Corina Pasareanu.
ICML 2021. Globally-Robust Neural Networks. Klas Leino, Zifan Wang, Matt Fredrikson.
NeurIPS 2021. Relaxing Local Robustness. Klas Leino, Matt Fredrikson.
IJCAI 2020. Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness. Samuel Yeom, Matt Fredrikson.
CVPRW 2020. Interpreting Interpretations: Organizing Attribution Methods by Criteria. Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson.
AISTATS 2020. Learning Fair Representations for Kernel Models. Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar.
NeurIPS 2020. Smoothed Geometry for Robust Attribution. Zifan Wang, Haofan Wang, Shakul Ramkumar, Piotr Mardziel, Matt Fredrikson, Anupam Datta.
ICLR 2019. Feature-Wise Bias Amplification. Klas Leino, Emily Black, Matt Fredrikson, Shayak Sen, Anupam Datta.
NeurIPS 2018. Hunting for Discriminatory Proxies in Linear Regression Models. Samuel Yeom, Anupam Datta, Matt Fredrikson.