Abel, David

28 publications

ICLR 2025 A Black Swan Hypothesis: The Role of Human Irrationality in AI Safety Hyunin Lee, Chanwoo Park, David Abel, Ming Jin
NeurIPS 2025 Enhancing Tactile-Based Reinforcement Learning for Robotic Control Elle Miller, Trevor McInroe, David Abel, Oisin Mac Aodha, Sethu Vijayakumar
ICML 2025 General Agents Need World Models Jonathan Richens, Tom Everitt, David Abel
JMLR 2025 Optimizing Return Distributions with Distributional Dynamic Programming Bernardo Ávila Pires, Mark Rowland, Diana Borsa, Zhaohan Daniel Guo, Khimya Khetarpal, André Barreto, David Abel, Rémi Munos, Will Dabney
NeurIPS 2025 Plasticity as the Mirror of Empowerment David Abel, Michael Bowling, André Barreto, Will Dabney, Shi Dong, Steven Stenberg Hansen, Anna Harutyunyan, Khimya Khetarpal, Clare Lyle, Razvan Pascanu, Georgios Piliouras, Doina Precup, Jonathan Richens, Mark Rowland, Tom Schaul, Satinder Singh
NeurIPS 2025 Skill-Driven Neurosymbolic State Abstractions Alper Ahmetoglu, Steven James, Cameron Allen, Sam Lobel, David Abel, George Konidaris
ICLR 2025 Studying the Interplay Between the Actor and Critic Representations in Reinforcement Learning Samuel Garcin, Trevor McInroe, Pablo Samuel Castro, Christopher G. Lucas, David Abel, Prakash Panangaden, Stefano V. Albrecht
ICML 2024 Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input Andi Peng, Yuying Sun, Tianmin Shu, David Abel
NeurIPS 2023 A Definition of Continual Reinforcement Learning David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado P. van Hasselt, Satinder P. Singh
ICML 2023 Settling the Reward Hypothesis Michael Bowling, John D Martin, David Abel, Will Dabney
CoLLAs 2022 Meta-Gradients in Non-Stationary Environments Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, Satinder Singh
ICLRW 2022 Meta-Gradients in Non-Stationary Environments Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, Satinder Singh
IJCAI 2022 On the Expressivity of Markov Reward (Extended Abstract) David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh
AAAI 2021 Lipschitz Lifelong Reinforcement Learning Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman
NeurIPS 2021 On the Expressivity of Markov Reward David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder P. Singh
ICML 2021 Revisiting Peng's Q(λ) for Modern Reinforcement Learning Tadashi Kozuno, Yunhao Tang, Mark Rowland, Rémi Munos, Steven Kapturowski, Will Dabney, Michal Valko, David Abel
AAAI 2020 People Do Not Just Plan, They Plan to Plan Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths
AISTATS 2020 Value Preserving State-Action Abstractions David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael Littman
ICML 2020 What Can I Do Here? A Theory of Affordances in Reinforcement Learning Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, David Abel, Doina Precup
AAAI 2019 A Theory of State Abstraction for Reinforcement Learning David Abel
ICML 2019 Discovering Options for Exploration by Minimizing Cover Time Yuu Jinnai, Jee Won Park, David Abel, George Konidaris
ICML 2019 Finding Options That Minimize Planning Time Yuu Jinnai, David Abel, David Hershkowitz, Michael Littman, George Konidaris
AAAI 2019 State Abstraction as Compression in Apprenticeship Learning David Abel, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, Lawson L. S. Wong
IJCAI 2019 The Expected-Length Model of Options David Abel, John Winder, Marie desJardins, Michael L. Littman
AAAI 2018 Bandit-Based Solar Panel Control David Abel, Edward C. Williams, Stephen Brawner, Emily Reif, Michael L. Littman
ICML 2018 Policy and Value Transfer in Lifelong Reinforcement Learning David Abel, Yuu Jinnai, Sophie Yue Guo, George Konidaris, Michael Littman
ICML 2018 State Abstractions for Lifelong Reinforcement Learning David Abel, Dilip Arumugam, Lucas Lehnert, Michael Littman
ICML 2016 Near Optimal Behavior via Approximate State Abstraction David Abel, David Hershkowitz, Michael Littman