Skill-Driven Neurosymbolic State Abstractions

Abstract

We consider how to construct state abstractions compatible with a given set of abstract actions, to obtain a well-formed abstract Markov decision process (MDP). We show that the Bellman equation suggests that abstract states should represent distributions over states in the ground MDP; we characterize the conditions under which the resulting process is Markov and approximately model-preserving, derive algorithms for constructing and planning with the abstract MDP, and apply them to a visual maze task. We generalize these results to the factored-actions case, characterizing the conditions that result in factored abstract states, and apply the resulting algorithm to Montezuma's Revenge. These results provide a powerful and principled framework for constructing neurosymbolic abstract Markov decision processes.
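
To make the core idea concrete, the toy Python sketch below illustrates what it means for an abstract state to be a distribution over ground states: pushing the distribution through a skill's transition kernel yields the next abstract state, and the abstract reward is the expected ground reward under that distribution. This is an illustrative sketch under simplifying assumptions (a tabular ground MDP, skills summarized by a single transition matrix and reward vector, and the hypothetical names skills and abstract_step), not the algorithm derived in the paper.

import numpy as np

n_states = 4  # toy ground MDP with 4 states

# Hypothetical skills: each is a (row-stochastic transition matrix, per-state reward).
skills = {
    "right": (np.array([[0.0, 1.0, 0.0, 0.0],
                        [0.0, 0.0, 1.0, 0.0],
                        [0.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 0.0, 1.0]]),
              np.array([0.0, 0.0, 0.0, 1.0])),
    "stay":  (np.eye(n_states),
              np.zeros(n_states)),
}

def abstract_step(mu, skill):
    """Apply a skill to an abstract state mu, a distribution over ground states."""
    P, r = skills[skill]
    reward = float(mu @ r)   # expected ground reward under mu
    mu_next = mu @ P         # pushforward of mu through the skill's kernel
    return mu_next, reward

# Start from a point distribution on ground state 0 and execute two skills.
mu = np.zeros(n_states)
mu[0] = 1.0
for skill in ["right", "right"]:
    mu, reward = abstract_step(mu, skill)
    print(skill, "->", np.round(mu, 2), "reward", reward)

Because each abstract state here is itself a distribution, planning over these objects (e.g., by enumerating the distributions reachable under skill sequences) treats them as the states of a new, abstract MDP; the paper characterizes when such a construction is Markov and approximately model-preserving.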

Cite

Text

Ahmetoglu et al. "Skill-Driven Neurosymbolic State Abstractions." Advances in Neural Information Processing Systems, 2025.

Markdown

[Ahmetoglu et al. "Skill-Driven Neurosymbolic State Abstractions." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/ahmetoglu2025neurips-skilldriven/)

BibTeX

@inproceedings{ahmetoglu2025neurips-skilldriven,
  title     = {{Skill-Driven Neurosymbolic State Abstractions}},
  author    = {Ahmetoglu, Alper and James, Steven and Allen, Cameron and Lobel, Sam and Abel, David and Konidaris, George},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/ahmetoglu2025neurips-skilldriven/}
}