Learning Procedural Planning Knowledge in Complex Environments

Abstract

by Douglas John Pearson. Chair: John E. Laird.

In complex, dynamic environments, an agent's knowledge of the environment (its domain knowledge) will rarely be complete and correct. Existing approaches to learning and correcting domain knowledge have focused on either learning procedural knowledge to directly guide execution (e.g., reinforcement learners) or learning declarative planning knowledge (e.g., theory revision systems). Systems that only learn execution knowledge are generally applicable only to small domains, where it is possible to learn an execution policy that covers the entire state space, making planning unnecessary. Conversely, existing approaches to learning declarative planning knowledge are applicable to large domains, but they are limited to simple agents whose actions produce immediate, deterministic effects in fully sensed, noise-free environments with no exogenous events. This ...
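The scaling limit the abstract describes for execution-only learners can be made concrete with tabular reinforcement learning: the learned policy is literally one table entry per (state, action) pair, so it covers the whole state space only when that space is tiny. The sketch below is a minimal, hypothetical illustration (a five-state corridor with Q-learning), not code from the paper; all names and parameters are invented for the example.

```python
import random

# Hypothetical toy domain: a 1-D corridor with states 0..4 and the
# goal at state 4. Actions: 0 = move left, 1 = move right.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    """Deterministic transition; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Tabular Q-learning: the table has one entry per (state, action) pair,
# so the resulting greedy policy covers the *entire* state space --
# feasible here, but not in a large domain, which is the limitation
# the abstract points out.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for _ in range(500):
    s, done = 0, False
    while not done:
        if random.random() < epsilon:
            a = random.randrange(N_ACTIONS)            # explore
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])  # exploit
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy: a complete state -> action mapping, so acting never
# requires planning or search.
policy = [max(range(N_ACTIONS), key=lambda x: Q[s][x]) for s in range(N_STATES)]
print(policy)
```

After training, the policy selects "right" in every non-goal state; the point is that this exhaustive mapping only exists because the table is five rows long.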

Cite

Text

Douglas J. Pearson. "Learning Procedural Planning Knowledge in Complex Environments." AAAI Conference on Artificial Intelligence, 1996.

Markdown

[Douglas J. Pearson. "Learning Procedural Planning Knowledge in Complex Environments." AAAI Conference on Artificial Intelligence, 1996.](https://mlanthology.org/aaai/1996/pearson1996aaai-learning/)

BibTeX

@inproceedings{pearson1996aaai-learning,
  title     = {{Learning Procedural Planning Knowledge in Complex Environments}},
  author    = {Pearson, Douglas J.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {1996},
  pages     = {1401},
  url       = {https://mlanthology.org/aaai/1996/pearson1996aaai-learning/}
}