Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning

Abstract

Reinforcement Learning (RL) in complex environments presents many challenges: agents must learn concise representations of both environments and behaviors in order to reason efficiently and generalize their experience to new, unseen situations. Moreover, RL approaches can be sample-inefficient and difficult to scale, especially in long-horizon, sparse-reward settings. To address these issues, the goal of my doctoral research is to develop methods that automatically construct semantically meaningful state and temporal abstractions for efficient transfer and generalization. In my work, I develop hierarchical approaches for learning transferable, generalizable knowledge in the form of symbolically represented options, as well as for integrating search techniques with RL to solve new problems by efficiently composing the learned options. Empirical results show that the resulting approaches effectively learn and transfer knowledge, achieving superior sample efficiency compared to state-of-the-art methods while also enhancing interpretability.

Cite

Text

Rashmeet Kaur Nayyar. "Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2024, pp. 23403-23404. doi:10.1609/AAAI.V38I21.30402

Markdown

[Rashmeet Kaur Nayyar. "Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2024, pp. 23403-23404.](https://mlanthology.org/aaai/2024/nayyar2024aaai-learning/) doi:10.1609/AAAI.V38I21.30402

BibTeX

@inproceedings{nayyar2024aaai-learning,
  title     = {{Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning}},
  author    = {Nayyar, Rashmeet Kaur},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23403--23404},
  doi       = {10.1609/AAAI.V38I21.30402},
  url       = {https://mlanthology.org/aaai/2024/nayyar2024aaai-learning/}
}