Bagnell, Drew

22 publications

NeurIPS 2025 · To Distill or Decide? Understanding the Algorithmic Trade-Off in Partially Observable RL · Yuda Song, Dhruv Rohatgi, Aarti Singh, Drew Bagnell
ICML 2024 · Hybrid Inverse Reinforcement Learning · Juntao Ren, Gokul Swamy, Steven Wu, Drew Bagnell, Sanjiban Choudhury
ICML 2024 · Hybrid Reinforcement Learning from Offline Observation Alone · Yuda Song, Drew Bagnell, Aarti Singh
ICMLW 2024 · The Importance of Online Data: Understanding Preference Fine-Tuning via Coverage · Yuda Song, Gokul Swamy, Aarti Singh, Drew Bagnell, Wen Sun
ICMLW 2023 · Complementing a Policy with a Different Observation Space · Gokul Swamy, Sanjiban Choudhury, Drew Bagnell, Steven Wu
ICLR 2023 · Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient · Yuda Song, Yifei Zhou, Ayush Sekhari, Drew Bagnell, Akshay Krishnamurthy, Wen Sun
ICML 2023 · Inverse Reinforcement Learning Without Reinforcement Learning · Gokul Swamy, David Wu, Sanjiban Choudhury, Drew Bagnell, Steven Wu
ICML 2023 · The Virtues of Laziness in Model-Based RL: A Unified Objective and Algorithms · Anirudh Vemula, Yuda Song, Aarti Singh, Drew Bagnell, Sanjiban Choudhury
ICML 2022 · Causal Imitation Learning Under Temporally Correlated Noise · Gokul Swamy, Sanjiban Choudhury, Drew Bagnell, Steven Wu
NeurIPSW 2022 · Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient · Yuda Song, Yifei Zhou, Ayush Sekhari, Drew Bagnell, Akshay Krishnamurthy, Wen Sun
NeurIPSW 2021 · What Would the Expert $do(\cdot)$?: Causal Imitation Learning · Gokul Swamy, Sanjiban Choudhury, Drew Bagnell, Steven Wu
ICML 2019 · Provably Efficient Imitation Learning from Observation Alone · Wen Sun, Anirudh Vemula, Byron Boots, Drew Bagnell
AISTATS 2014 · Near Optimal Bayesian Active Learning for Decision Making · Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, Drew Bagnell, Siddhartha S. Srinivasa
ICML 2013 · Learning Policies for Contextual Submodular Prediction · Stéphane Ross, Jiaji Zhou, Yisong Yue, Debadeepta Dey, Drew Bagnell
ICML 2012 · Agnostic System Identification for Model-Based Reinforcement Learning · Stéphane Ross, Drew Bagnell
NeurIPS 2012 · Efficient High Dimensional Maximum Entropy Modeling via Symmetric Partition Functions · Paul Vernaza, Drew Bagnell
AISTATS 2012 · SpeedBoost: Anytime Prediction with Uniform Near-Optimality · Alex Grubb, Drew Bagnell
AISTATS 2011 · A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning · Stéphane Ross, Geoffrey Gordon, Drew Bagnell
ICML 2011 · Computational Rationalization: The Inverse Equilibrium Problem · Kevin Waugh, Brian D. Ziebart, Drew Bagnell
ICML 2011 · Generalized Boosting Algorithms for Convex Optimization · Alexander Grubb, Drew Bagnell
AISTATS 2010 · Efficient Reductions for Imitation Learning · Stéphane Ross, Drew Bagnell
NeurIPS 2005 · On Local Rewards and Scaling Distributed Reinforcement Learning · Drew Bagnell, Andrew Y. Ng