Oprea, Alina

11 publications

ICML 2025. Adversarial Inception Backdoor Attacks Against Reinforcement Learning. Ethan Rathbun, Alina Oprea, Christopher Amato.
ICLR 2024. Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning. Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman.
ICLR 2024. One-Shot Empirical Privacy Estimation for Federated Learning. Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith Suriyakumar.
NeurIPS 2024. SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents. Ethan Rathbun, Christopher Amato, Alina Oprea.
NeurIPSW 2023. One-Shot Empirical Privacy Estimation for Federated Learning. Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith Suriyakumar.
ICMLW 2023. TMI! Finetuned Models Spill Secrets from Pretraining. John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman.
NeurIPS 2023. Unleashing the Power of Randomization in Auditing Differentially Private ML. Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, Sewoong Oh.
ICMLW 2023. Unleashing the Power of Randomization in Auditing Differentially Private ML. Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, Sewoong Oh.
NeurIPSW 2023. User Inference Attacks on Large Language Models. Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu.
NeurIPS 2020. Auditing Differentially Private Machine Learning: How Private Is Private SGD? Matthew Jagielski, Jonathan Ullman, Alina Oprea.
ICML 2019. Differentially Private Fair Learning. Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman.