Grounding State Representations in Sensory Experience for Reasoning and Planning by Mobile Robots
Abstract
We are addressing the problem of learning probabilistic models of the interaction between a mobile robot and its environment and using these models for task planning. This requires modifying the state-of-the-art reinforcement learning algorithms to deal with hidden state and high-dimensional observation spaces of continuous variables. Our approach is to identify hidden states by means of the trajectories leading into and out of them, and to perform clustering in this embedding trajectory space in order to compile a partially observable Markov decision process (POMDP) model, which can be used for approximate decision-theoretic planning. The ultimate objective of our work is to develop algorithms that learn POMDP models with discrete hidden states.
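As a rough illustration of the idea in the abstract — identifying hidden states by clustering the trajectories leading into them — the sketch below embeds fixed-length windows of a continuous sensor trace and clusters them with plain k-means, so that each cluster plays the role of one discrete POMDP state. The window length, cluster count, deterministic initialization, and synthetic data are all assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def embed_trajectories(obs, window):
    """Stack each length-`window` history of observations into one vector.

    obs: (T, d) array of continuous sensor readings.
    Returns a (T - window + 1, window * d) trajectory-embedding matrix.
    """
    T, d = obs.shape
    return np.stack([obs[t:t + window].ravel() for t in range(T - window + 1)])

def kmeans(X, k, iters=50):
    """Plain k-means on the embedded trajectories (illustrative, not the
    paper's algorithm); each resulting cluster stands in for one state."""
    # Deterministic init: centers spread evenly over the data index range.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic 1-D sensor trace that switches between two regimes.
rng = np.random.default_rng(1)
obs = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(5.0, 0.1, 50)])[:, None]
states = kmeans(embed_trajectories(obs, window=4), k=2)
# Windows drawn entirely from one regime end up sharing a cluster label.
```

From the recovered labels one could then estimate transition and observation frequencies per cluster to compile the POMDP model the abstract describes.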
Cite
Text
Nikovski. "Grounding State Representations in Sensory Experience for Reasoning and Planning by Mobile Robots." AAAI Conference on Artificial Intelligence, 2000.

Markdown

[Nikovski. "Grounding State Representations in Sensory Experience for Reasoning and Planning by Mobile Robots." AAAI Conference on Artificial Intelligence, 2000.](https://mlanthology.org/aaai/2000/nikovski2000aaai-grounding/)

BibTeX
@inproceedings{nikovski2000aaai-grounding,
title = {{Grounding State Representations in Sensory Experience for Reasoning and Planning by Mobile Robots}},
author = {Nikovski, Daniel},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2000},
pages = {1108},
url = {https://mlanthology.org/aaai/2000/nikovski2000aaai-grounding/}
}