Zhang, Amy

77 publications

ICLR 2025 An Optimal Discriminator Weighted Imitation Perspective for Reinforcement Learning Haoran Xu, Shuozhe Li, Harshit Sikchi, Scott Niekum, Amy Zhang
ICLRW 2025 Augmented Conditioning Is Enough for Effective Training Image Generation Jiahui Chen, Amy Zhang, Adriana Romero-Soriano
ICLR 2025 EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation Carl Qi, Dan Haramati, Tal Daniel, Aviv Tamar, Amy Zhang
NeurIPS 2025 ExPO: Unlocking Hard Reasoning with Self-Explanation-Guided Reinforcement Learning Ruiyang Zhou, Shuozhe Li, Amy Zhang, Liu Leqi
NeurIPS 2025 Information-Theoretic Reward Decomposition for Generalizable RLHF Liyuan Mao, Haoran Xu, Amy Zhang, Weinan Zhang, Chenjia Bai
ICLR 2025 Learning a Fast Mixing Exogenous Block MDP Using a Single Trajectory Alexander Levine, Peter Stone, Amy Zhang
ICLR 2025 MaestroMotif: Skill Design from Artificial Intelligence Feedback Martin Klissarov, Mikael Henaff, Roberta Raileanu, Shagun Sodhani, Pascal Vincent, Amy Zhang, Pierre-Luc Bacon, Doina Precup, Marlos C. Machado, Pierluca D'Oro
ICLR 2025 Null Counterfactual Factor Interactions for Goal-Conditioned Reinforcement Learning Caleb Chuck, Fan Feng, Carl Qi, Chang Shi, Siddhant Agarwal, Amy Zhang, Scott Niekum
ICML 2025 Proto Successor Measure: Representing the Behavior Space of an RL Agent Siddhant Agarwal, Harshit Sikchi, Peter Stone, Amy Zhang
ICLRW 2025 RL Zero: Zero-Shot Language to Behaviors Without Any Supervision Harshit Sikchi, Siddhant Agarwal, Pranaya Jajoo, Samyak Parajuli, Caleb Chuck, Max Rudolph, Peter Stone, Amy Zhang, Scott Niekum
NeurIPS 2025 RLZero: Direct Policy Inference from Language Without In-Domain Supervision Harshit Sikchi, Siddhant Agarwal, Pranaya Jajoo, Samyak Parajuli, Caleb Chuck, Max Rudolph, Peter Stone, Amy Zhang, Scott Niekum
ICLR 2025 Towards General-Purpose Model-Free Reinforcement Learning Scott Fujimoto, Pierluca D'Oro, Amy Zhang, Yuandong Tian, Michael Rabbat
NeurIPS 2025 Uni-RL: Unifying Online and Offline RL via Implicit Value Regularization Haoran Xu, Liyuan Mao, Hui Jin, Weinan Zhang, Xianyuan Zhan, Amy Zhang
CoRL 2024 A Dual Approach to Imitation Learning from Observations with Offline Datasets Harshit Sikchi, Caleb Chuck, Amy Zhang, Scott Niekum
NeurIPS 2024 AMAGO-2: Breaking the Multi-Task Barrier in Meta-Reinforcement Learning with Transformers Jake Grigsby, Justin Sasek, Samyak Parajuli, Daniel Adebi, Amy Zhang, Yuke Zhu
L4DC 2024 An Investigation of Time Reversal Symmetry in Reinforcement Learning Brett Barkley, Amy Zhang, David Fridovich-Keil
NeurIPS 2024 Diffusion-DICE: In-Sample Diffusion Guidance for Offline Reinforcement Learning Liyuan Mao, Haoran Xu, Xianyuan Zhan, Weinan Zhang, Amy Zhang
ICLR 2024 Dual RL: Unification and New Methods for Reinforcement and Imitation Learning Harshit Sikchi, Qinqing Zheng, Amy Zhang, Scott Niekum
NeurIPS 2024 Efficient Reinforcement Learning by Discovering Neural Pathways Samin Yeasar Arnob, Riyasat Ohib, Sergey Plis, Amy Zhang, Alessandro Sordoni, Doina Precup
ICLR 2024 Language Control Diffusion: Efficiently Scaling Through Space, Time, and Tasks Edwin Zhang, Yujie Lu, Shinda Huang, William Yang Wang, Amy Zhang
ICLR 2024 Motif: Intrinsic Motivation from Artificial Intelligence Feedback Martin Klissarov, Pierluca D'Oro, Shagun Sodhani, Roberta Raileanu, Pierre-Luc Bacon, Pascal Vincent, Amy Zhang, Mikael Henaff
ICLR 2024 Score Models for Offline Goal-Conditioned Reinforcement Learning Harshit Sikchi, Rohan Chitnis, Ahmed Touati, Alborz Geramifard, Amy Zhang, Scott Niekum
NeurIPS 2024 SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions Zizhao Wang, Jiaheng Hu, Caleb Chuck, Stephen Chen, Roberto Martín-Martín, Amy Zhang, Scott Niekum, Peter Stone
JAIR 2024 Structure in Deep Reinforcement Learning: A Survey and Open Problems Aditya Mohan, Amy Zhang, Marius Lindauer
ICLR 2024 Towards Robust Offline Reinforcement Learning Under Diverse Data Corruption Rui Yang, Han Zhong, Jiawei Xu, Amy Zhang, Chongjie Zhang, Lei Han, Tong Zhang
ICLR 2024 When Should We Prefer Decision Transformers for Offline Reinforcement Learning? Prajjwal Bhargava, Rohan Chitnis, Alborz Geramifard, Shagun Sodhani, Amy Zhang
ICML 2024 Zero-Shot Reinforcement Learning via Function Encoders Tyler Ingebrand, Amy Zhang, Ufuk Topcu
JAIR 2023 A Survey of Zero-Shot Generalisation in Deep Reinforcement Learning Robert Kirk, Amy Zhang, Edward Grefenstette, Tim Rocktäschel
NeurIPS 2023 Accelerating Exploration with Unlabeled Prior Data Qiyang Li, Jason Zhang, Dibya Ghosh, Amy Zhang, Sergey Levine
ICLR 2023 BC-IRL: Learning Generalizable Reward Functions from Demonstrations Andrew Szot, Amy Zhang, Dhruv Batra, Zsolt Kira, Franziska Meier
ICMLW 2023 Conditional Bisimulation for Generalization in Reinforcement Learning Anuj Mahajan, Amy Zhang
NeurIPSW 2023 CuriousWalk: Enhancing Multi-Hop Reasoning in Graphs with Random Network Distillation Varun Kausika, Saurabh Jha, Adya Jha, Amy Zhang, Michael Sury
NeurIPS 2023 f-Policy Gradients: A General Framework for Goal-Conditioned RL Using f-Divergences Siddhant Agarwal, Ishan Durugkar, Peter Stone, Amy Zhang
ICLR 2023 Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement Michael Chang, Alyssa Li Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
ICLRW 2023 Imitation from Arbitrary Experience: A Dual Unification of Reinforcement and Imitation Learning Methods Harshit Sikchi, Amy Zhang, Scott Niekum
TMLR 2023 Improving Generalization with Approximate Factored Value Functions Shagun Sodhani, Sergey Levine, Amy Zhang
ICML 2023 LIV: Language-Image Representations and Rewards for Robotic Control Yecheng Jason Ma, Vikash Kumar, Amy Zhang, Osbert Bastani, Dinesh Jayaraman
ICLRW 2023 LIV: Language-Image Representations and Rewards for Robotic Control Yecheng Jason Ma, Vikash Kumar, Amy Zhang, Osbert Bastani, Dinesh Jayaraman
ICLR 2023 Latent State Marginalization as a Low-Cost Approach for Improving Exploration Dinghuai Zhang, Aaron Courville, Yoshua Bengio, Qinqing Zheng, Amy Zhang, Ricky T. Q. Chen
TMLR 2023 Learning Representations for Pixel-Based Control: What Matters and Why? Manan Tomar, Utkarsh Aashu Mishra, Amy Zhang, Matthew E. Taylor
NeurIPSW 2023 Motif: Intrinsic Motivation from Artificial Intelligence Feedback Martin Klissarov, Pierluca D'Oro, Shagun Sodhani, Roberta Raileanu, Pierre-Luc Bacon, Pascal Vincent, Amy Zhang, Mikael Henaff
ICML 2023 Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning Tongzhou Wang, Antonio Torralba, Phillip Isola, Amy Zhang
NeurIPS 2023 Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability Hanlin Zhu, Amy Zhang
NeurIPSW 2023 Score-Models for Offline Goal-Conditioned Reinforcement Learning Harshit Sikchi, Rohan Chitnis, Ahmed Touati, Alborz Geramifard, Amy Zhang, Scott Niekum
NeurIPSW 2023 Target Rate Optimization: Avoiding Iterative Error Exploitation Braham Snyder, Amy Zhang, Yuke Zhu
ICLR 2023 VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang
ICML 2022 Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine
L4DC 2022 Block Contextual MDPs for Continual Learning Shagun Sodhani, Franziska Meier, Joelle Pineau, Amy Zhang
ICML 2022 Denoised MDPs: Learning World Models Better than the World Itself Tongzhou Wang, Simon Du, Antonio Torralba, Phillip Isola, Amy Zhang, Yuandong Tian
NeurIPSW 2022 Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement Michael Chang, Alyssa Li Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
ICLRW 2022 Improving Generalization with Approximate Factored Value Functions Shagun Sodhani, Sergey Levine, Amy Zhang
NeurIPSW 2022 LAD: Language Augmented Diffusion for Reinforcement Learning Edwin Zhang, Yujie Lu, William Yang Wang, Amy Zhang
ICML 2022 Online Decision Transformer Qinqing Zheng, Amy Zhang, Aditya Grover
AAAI 2022 Predicting the Influence of Fake and Real News Spreaders (Student Abstract) Amy Zhang, Aaron Brookhouse, Daniel Hammer, Francesca Spezzano, Liljana Babinkostova
ICML 2022 Robust Policy Learning over Multiple Uncertainty Sets Annie Xie, Shagun Sodhani, Chelsea Finn, Joelle Pineau, Amy Zhang
NeurIPSW 2022 VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang
NeurIPSW 2021 Block Contextual MDPs for Continual Learning Shagun Sodhani, Franziska Meier, Joelle Pineau, Amy Zhang
AAAI 2021 Improving Sample Efficiency in Model-Free Reinforcement Learning from Images Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, Rob Fergus
ICLR 2021 Learning Invariant Representations for Reinforcement Learning Without Reconstruction Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine
ICLR 2021 Learning Robust State Abstractions for Hidden-Parameter Block MDPs Amy Zhang, Shagun Sodhani, Khimya Khetarpal, Joelle Pineau
ICML 2021 Multi-Task Reinforcement Learning with Context-Based Representations Shagun Sodhani, Amy Zhang, Joelle Pineau
ICML 2021 Out-of-Distribution Generalization via Risk Extrapolation (REx) David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, Aaron Courville
NeurIPS 2021 Why Generalization in RL Is Difficult: Epistemic POMDPs and Implicit Partial Observability Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine
ICML 2020 Invariant Causal Prediction for Block MDPs Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, Doina Precup
L4DC 2020 Plan2Vec: Unsupervised Representation Learning by Latent Plans Ge Yang, Amy Zhang, Ari Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra
UAI 2020 Stable Policy Optimization via Off-Policy Divergence Regularization Ahmed Touati, Amy Zhang, Joelle Pineau, Pascal Vincent
ICML 2018 Composable Planning with Attributes Amy Zhang, Sainbayar Sukhbaatar, Adam Lerer, Arthur Szlam, Rob Fergus
UAI 2012 Guess Who Rated This Movie: Identifying Users Through Subspace Clustering Amy Zhang, Nadia Fawaz, Stratis Ioannidis, Andrea Montanari