Andriushchenko, Maksym

34 publications

ICLR 2025 AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents Maksym Andriushchenko, Alexandra Souly, Mateusz Dziemian, Derek Duenas, Maxwell Lin, Justin Wang, Dan Hendrycks, Andy Zou, J Zico Kolter, Matt Fredrikson, Yarin Gal, Xander Davies
UAI 2025 Critical Influence of Overparameterization on Sharpness-Aware Minimization Sungbin Shin, Dongyeop Lee, Maksym Andriushchenko, Namhoon Lee
ICLR 2025 Does Refusal Training in LLMs Generalize to the Past Tense? Maksym Andriushchenko, Nicolas Flammarion
ICLR 2025 Is In-Context Learning Sufficient for Instruction Following in LLMs? Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
ICLR 2025 Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
NeurIPS 2025 OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents Thomas Kuntz, Agatha Duzan, Hao Zhao, Francesco Croce, J Zico Kolter, Nicolas Flammarion, Maksym Andriushchenko
NeurIPSW 2024 Does Refusal Training in LLMs Generalize to the Past Tense? Maksym Andriushchenko, Nicolas Flammarion
NeurIPSW 2024 Exploring Memorization and Copyright Violation in Frontier LLMs: A Study of the New York Times v. OpenAI 2023 Lawsuit Joshua Freeman, Chloe Rippe, Edoardo Debenedetti, Maksym Andriushchenko
NeurIPS 2024 Improving Alignment and Robustness with Circuit Breakers Andy Zou, Long Phan, Justin Wang, Derek Duenas, Maxwell Lin, Maksym Andriushchenko, Rowan Wang, Zico Kolter, Matt Fredrikson, Dan Hendrycks
NeurIPSW 2024 Is In-Context Learning Sufficient for Instruction Following in LLMs? Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
NeurIPS 2024 JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong
ICMLW 2024 JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong
ICMLW 2024 Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
ICLR 2024 Layer-Wise Linear Mode Connectivity Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, Martin Jaggi
ICML 2024 Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
ICLRW 2024 Scaling Compute Is Not All You Need for Adversarial Robustness Edoardo Debenedetti, Zishen Wan, Maksym Andriushchenko, Vikash Sehwag, Kshitij Bhardwaj, Bhavya Kailkhura
NeurIPS 2024 Why Do We Need Weight Decay in Modern Deep Learning? Francesco D'Angelo, Maksym Andriushchenko, Aditya Varre, Nicolas Flammarion
ICML 2023 A Modern Look at the Relationship Between Sharpness and Generalization Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, Nicolas Flammarion
ICML 2023 SGD with Large Step Sizes Learns Sparse Features Maksym Andriushchenko, Aditya Vardhan Varre, Loucas Pillaud-Vivien, Nicolas Flammarion
NeurIPS 2023 Sharpness-Aware Minimization Leads to Low-Rank Features Maksym Andriushchenko, Dara Bahri, Hossein Mobahi, Nicolas Flammarion
NeurIPS 2023 Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings Klim Kireev, Maksym Andriushchenko, Carmela Troncoso, Nicolas Flammarion
NeurIPSW 2023 Why Do We Need Weight Decay for Overparameterized Deep Networks? Francesco D'Angelo, Aditya Varre, Maksym Andriushchenko, Nicolas Flammarion
CVPRW 2022 ARIA: Adversarially Robust Image Attribution for Content Provenance Maksym Andriushchenko, Xiaoyang Rebecca Li, Geoffrey Oxholm, Thomas Gittings, Tu Bui, Nicolas Flammarion, John P. Collomosse
UAI 2022 On the Effectiveness of Adversarial Training Against Common Corruptions Klim Kireev, Maksym Andriushchenko, Nicolas Flammarion
AAAI 2022 Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein
ICML 2022 Towards Understanding Sharpness-Aware Minimization Maksym Andriushchenko, Nicolas Flammarion
ICLR 2021 On the Stability of Fine-Tuning BERT: Misconceptions, Explanations, and Strong Baselines Marius Mosbach, Maksym Andriushchenko, Dietrich Klakow
ECCV 2020 Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein
NeurIPS 2020 Understanding and Improving Fast Adversarial Training Maksym Andriushchenko, Nicolas Flammarion
AISTATS 2019 Provable Robustness of ReLU Networks via Maximization of Linear Regions Francesco Croce, Maksym Andriushchenko, Matthias Hein
NeurIPS 2019 Provably Robust Boosted Decision Stumps and Trees Against Adversarial Attacks Maksym Andriushchenko, Matthias Hein
CVPRW 2019 Why ReLU Networks Yield High-Confidence Predictions Far Away from the Training Data and How to Mitigate the Problem Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
NeurIPS 2017 Formal Guarantees on the Robustness of a Classifier Against Adversarial Manipulation Matthias Hein, Maksym Andriushchenko