Learning Functions in K-DNF from Reinforcement

Abstract

An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are quite inefficient. One strategy is to find restricted classes of action strategies that can be learned more efficiently. This paper pursues that strategy by developing algorithms that can efficiently learn action maps that are expressible in k-DNF. Both connectionist and classical statistics-based algorithms are presented, then compared empirically on three test problems. Modifications and extensions that will allow the algorithms to work in more complex domains are also discussed.
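To make the restricted hypothesis class concrete: a k-DNF formula is a disjunction of terms, each term a conjunction of at most k literals over boolean input features. The following is a minimal illustrative sketch (not code from the paper) of evaluating such a formula as an action map; the term encoding as `(index, polarity)` pairs is an assumption made here for illustration.

```python
def eval_k_dnf(terms, x, k):
    """Return True iff the k-DNF formula given by `terms` is satisfied by
    the boolean input vector `x`. Each term is a tuple of (index, polarity)
    literals; polarity True means the bit must be 1, False means it must be 0.
    """
    for term in terms:
        if len(term) > k:
            raise ValueError("term exceeds k literals")
        # A term fires when every one of its literals matches the input.
        if all(bool(x[i]) == pol for i, pol in term):
            return True
    return False  # no term (conjunction) was satisfied

# Example: the 2-DNF formula (x0 AND NOT x2) OR (x1 AND x3)
formula = [
    ((0, True), (2, False)),
    ((1, True), (3, True)),
]
print(eval_k_dnf(formula, [1, 0, 0, 0], k=2))  # True: first term fires
print(eval_k_dnf(formula, [0, 1, 1, 0], k=2))  # False: neither term fires
```

The restriction matters because, for fixed k over n features, the number of possible terms is polynomial in n, which is what makes efficient learning of this class plausible compared with arbitrary boolean action maps.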

Cite

Text

Kaelbling. "Learning Functions in K-DNF from Reinforcement." International Conference on Machine Learning, 1990. doi:10.1016/B978-1-55860-141-3.50023-7

Markdown

[Kaelbling. "Learning Functions in K-DNF from Reinforcement." International Conference on Machine Learning, 1990.](https://mlanthology.org/icml/1990/kaelbling1990icml-learning/) doi:10.1016/B978-1-55860-141-3.50023-7

BibTeX

@inproceedings{kaelbling1990icml-learning,
  title     = {{Learning Functions in K-DNF from Reinforcement}},
  author    = {Kaelbling, Leslie Pack},
  booktitle = {International Conference on Machine Learning},
  year      = {1990},
  pages     = {162--169},
  doi       = {10.1016/B978-1-55860-141-3.50023-7},
  url       = {https://mlanthology.org/icml/1990/kaelbling1990icml-learning/}
}