Rathbun, Ethan

2 publications

ICML 2025. Adversarial Inception Backdoor Attacks Against Reinforcement Learning. Ethan Rathbun, Alina Oprea, Christopher Amato.
NeurIPS 2024. SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents. Ethan Rathbun, Christopher Amato, Alina Oprea.