Konidaris, George

53 publications

AAAI 2025. Discovering Options That Minimize Average Planning Time. Alexander Ivanov, Akhil Bagaria, George Konidaris
ICLR 2025. Geometry of Neural Reinforcement Learning in Continuous State and Action Spaces. Saket Tiwari, Omer Gottesman, George Konidaris
ICML 2025. Knowledge Retention in Continual Model-Based Reinforcement Learning. Haotian Fu, Yixiang Sun, Michael Littman, George Konidaris
NeurIPS 2025. Learning Parameterized Skills from Demonstrations. Vedant Gupta, Haotian Fu, Calvin Luo, Yiding Jiang, George Konidaris
NeurIPS 2025. Skill-Driven Neurosymbolic State Abstractions. Alper Ahmetoglu, Steven James, Cameron Allen, Sam Lobel, David Abel, George Konidaris
NeurIPSW 2024. Knowledge Retention in Continual Model-Based Reinforcement Learning. Haotian Fu, Yixiang Sun, Michael Littman, George Konidaris
ICML 2024. Language-Guided Skill Learning with Temporal Variational Inference. Haotian Fu, Pratyusha Sharma, Elias Stengel-Eskin, George Konidaris, Nicolas Le Roux, Marc-Alexandre Côté, Xingdi Yuan
ICLRW 2024. Language-Guided Skill Learning with Temporal Variational Inference. Haotian Fu, Pratyusha Sharma, Elias Stengel-Eskin, George Konidaris, Nicolas Le Roux, Marc-Alexandre Côté, Xingdi Yuan
NeurIPS 2024. Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy. Cameron Allen, Aaron Kirtland, Ruo Yu Tao, Sam Lobel, Daniel Scott, Nicholas Petrocelli, Omer Gottesman, Ronald Parr, Michael L. Littman, George Konidaris
ICMLW 2024. Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy. Cameron Allen, Aaron T. Kirtland, Ruo Yu Tao, Sam Lobel, Daniel Scott, Nicholas Petrocelli, Omer Gottesman, Ronald Parr, Michael Littman, George Konidaris
ICML 2024. Model-Based Reinforcement Learning for Parameterized Action Spaces. Renhao Zhang, Haotian Fu, Yilin Miao, George Konidaris
AISTATS 2023. Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces. Omer Gottesman, Kavosh Asadi, Cameron S. Allen, Samuel Lobel, George Konidaris, Michael Littman
NeurIPS 2023. Effectively Learning Initiation Sets in Hierarchical Reinforcement Learning. Akhil Bagaria, Ben Abbatematteo, Omer Gottesman, Matt Corsaro, Sreehari Rammohan, George Konidaris
NeurIPSW 2023. Exploiting Contextual Structure to Generate Useful Auxiliary Tasks. Benedict Quartey, Ankit Shah, George Konidaris
ICML 2023. Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning. Sam Lobel, Akhil Bagaria, George Konidaris
ICMLW 2023. Guided Policy Search for Parameterized Skills Using Adverbs. Benjamin Adin Spiegel, George Konidaris
NeurIPSW 2023. Hierarchical Empowerment: Toward Tractable Empowerment-Based Skill Learning. Andrew Levy, Sreehari Rammohan, Alessandro Allievi, Scott Niekum, George Konidaris
NeurIPSW 2023. Learning Abstract World Models for Value-Preserving Planning with Options. Rafael Rodriguez-Sanchez, George Konidaris
ICML 2023. Meta-Learning Parameterized Skills. Haotian Fu, Shangqun Yu, Saket Tiwari, Michael Littman, George Konidaris
ICLR 2023. Performance Bounds for Model and Policy Transfer in Hidden-Parameter MDPs. Haotian Fu, Jiayu Yao, Omer Gottesman, Finale Doshi-Velez, George Konidaris
AAAI 2023. Q-Functionals for Value-Based Continuous Control. Samuel Lobel, Sreehari Rammohan, Bowen He, Shangqun Yu, George Konidaris
ICML 2023. RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents. Rafael Rodriguez-Sanchez, Benjamin Adin Spiegel, Jennifer Wang, Roma Patel, Stefanie Tellex, George Konidaris
CoRL 2023. Synthesizing Navigation Abstractions for Planning with Portable Manipulation Skills. Eric Rosen, Steven James, Sergio Orozco, Vedant Gupta, Max Merlin, Stefanie Tellex, George Konidaris
ICLR 2022. Autonomous Learning of Object-Centric Abstractions for High-Level Planning. Steven James, Benjamin Rosman, George Konidaris
NeurIPS 2022. Effects of Data Geometry in Early Deep Learning. Saket Tiwari, George Konidaris
NeurIPS 2022. Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in Hex. Charles Lovering, Jessica Forde, George Konidaris, Ellie Pavlick, Michael L. Littman
NeurIPS 2022. Model-Based Lifelong Reinforcement Learning with Bayesian Exploration. Haotian Fu, Shangqun Yu, Michael L. Littman, George Konidaris
AAAI 2022. Optimistic Initialization for Exploration in Continuous Control. Sam Lobel, Omer Gottesman, Cameron Allen, Akhil Bagaria, George Konidaris
JMLR 2021. A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms. Oliver Kroemer, Scott Niekum, George Konidaris
NeurIPSW 2021. Bayesian Exploration for Lifelong Reinforcement Learning. Haotian Fu, Shangqun Yu, Michael Littman, George Konidaris
IJCAI 2021. Efficient Black-Box Planning Using Macro-Actions with Focused Effects. Cameron Allen, Michael Katz, Tim Klinger, George Konidaris, Matthew Riemer, Gerald Tesauro
NeurIPS 2021. Learning Markov State Abstractions for Deep Reinforcement Learning. Cameron Allen, Neev Parikh, Omer Gottesman, George Konidaris
IJCAI 2021. Robustly Learning Composable Options in Deep Reinforcement Learning. Akhil Bagaria, Jason K. Senthil, Matthew Slivinski, George Konidaris
ICML 2021. Skill Discovery for Exploration and Planning Using Deep Skill Graphs. Akhil Bagaria, Jason K. Senthil, George Konidaris
ICLR 2020. Exploration in Reinforcement Learning with Deep Covering Options. Yuu Jinnai, Jee Won Park, Marlos C. Machado, George Konidaris
ICML 2020. Learning Portable Representations for High-Level Planning. Steven James, Benjamin Rosman, George Konidaris
ICMLW 2020. On the Relationship Between Structure in Natural Language and Models of Sequential Decision Processes. Roma Patel, Rafael Rodriguez-Sanchez, George Konidaris
ICLR 2020. Option Discovery Using Deep Skill Chaining. Akhil Bagaria, George Konidaris
ICMLW 2020. Skill Discovery for Exploration and Planning Using Deep Skill Graphs. Akhil Bagaria, Jason Crowley, Jing Wei Nicholas Lim, George Konidaris
AAAI 2020. Task Scoping for Efficient Planning in Open Worlds (Student Abstract). Nishanth Kumar, Michael Fishman, Natasha Danas, Stefanie Tellex, Michael Littman, George Konidaris
ICML 2019. Discovering Options for Exploration by Minimizing Cover Time. Yuu Jinnai, Jee Won Park, David Abel, George Konidaris
ICML 2019. Finding Options That Minimize Planning Time. Yuu Jinnai, David Abel, David Hershkowitz, Michael Littman, George Konidaris
ICLR 2019. Learning Multi-Level Hierarchies with Hindsight. Andrew Levy, George Konidaris, Robert Platt, Kate Saenko
CoRL 2019. Learning to Generalize Kinematic Models to Novel Objects. Ben Abbatematteo, Stefanie Tellex, George Konidaris
ICML 2018. Policy and Value Transfer in Lifelong Reinforcement Learning. David Abel, Yuu Jinnai, Sophie Yue Guo, George Konidaris, Michael Littman
NeurIPS 2017. Active Exploration for Learning Symbolic Representations. Garrett Andersen, George Konidaris
NeurIPS 2017. Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes. Taylor W. Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez
NeurIPS 2015. Policy Evaluation Using the Ω-Return. Philip S. Thomas, Scott Niekum, Georgios Theocharous, George Konidaris
ICML 2014. Active Learning of Parameterized Skills. Bruno Da Silva, George Konidaris, Andrew Barto
JMLR 2012. Transfer in Reinforcement Learning via Shared Features. George Konidaris, Ilya Scheidwasser, Andrew Barto
NeurIPS 2011. TD_γ: Re-Evaluating Complex Backups in Temporal Difference Learning. George Konidaris, Scott Niekum, Philip S. Thomas
NeurIPS 2010. Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories. George Konidaris, Scott Kuindersma, Roderic Grupen, Andrew G. Barto
NeurIPS 2009. Skill Discovery in Continuous Reinforcement Learning Domains Using Skill Chaining. George Konidaris, Andrew G. Barto