Memory-Based Reinforcement Learning: Efficient Computation with Prioritized Sweeping

Abstract

We present a new algorithm, Prioritized Sweeping, for efficient prediction and control of stochastic Markov systems. Incremental learning methods such as Temporal Differencing and Q-learning have fast real-time performance. Classical methods are slower, but more accurate, because they make full use of the observations. Prioritized Sweeping aims for the best of both worlds. It uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare Prioritized Sweeping with other reinforcement learning schemes for a number of different stochastic optimal control problems. It successfully solves large state-space real-time problems with which other methods have difficulty.
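To make the core idea concrete, below is a minimal Python sketch of tabular prioritized sweeping in the common simplified form with a deterministic learned model (the paper itself handles fully stochastic models with expectations over transition probabilities). All names here (PrioritizedSweeping, observe, plan, n_planning, theta) are illustrative, not from the paper: after each real transition, the agent updates its model, pushes the affected state-action pair onto a priority queue keyed by the magnitude of its Bellman error, and then performs a budget of planning backups, re-queuing predecessors whose values may have shifted.

import heapq
from collections import defaultdict

class PrioritizedSweeping:
    """Simplified tabular prioritized sweeping (deterministic-model variant).

    This is an illustrative sketch, not the exact algorithm from
    Moore & Atkeson (1992), which maintains a stochastic model and
    backs up full expectations over transition probabilities.
    """

    def __init__(self, n_actions, gamma=0.95, theta=1e-4, n_planning=5):
        self.Q = defaultdict(float)           # Q[(s, a)], zero-initialized
        self.model = {}                       # model[(s, a)] = (r, s')
        self.predecessors = defaultdict(set)  # s' -> {(s, a) observed to lead to s'}
        self.pqueue = []                      # max-heap via negated priority
        self.n_actions = n_actions
        self.gamma, self.theta, self.n_planning = gamma, theta, n_planning

    def _priority(self, s, a, r, s2):
        # Priority = magnitude of the one-step Bellman error for (s, a).
        best_next = max(self.Q[(s2, b)] for b in range(self.n_actions))
        return abs(r + self.gamma * best_next - self.Q[(s, a)])

    def observe(self, s, a, r, s2):
        # Record the real experience in the model, then plan.
        self.model[(s, a)] = (r, s2)
        self.predecessors[s2].add((s, a))
        p = self._priority(s, a, r, s2)
        if p > self.theta:
            heapq.heappush(self.pqueue, (-p, s, a))
        self.plan()

    def plan(self):
        # Spend a fixed budget of backups on the highest-priority pairs.
        for _ in range(self.n_planning):
            if not self.pqueue:
                break
            _, s, a = heapq.heappop(self.pqueue)
            r, s2 = self.model[(s, a)]
            best_next = max(self.Q[(s2, b)] for b in range(self.n_actions))
            self.Q[(s, a)] = r + self.gamma * best_next  # full backup
            # Any predecessor of s may now have a stale value; re-prioritize it.
            for (ps, pa) in self.predecessors[s]:
                pr, _ = self.model[(ps, pa)]
                p = self._priority(ps, pa, pr, s)
                if p > self.theta:
                    heapq.heappush(self.pqueue, (-p, ps, pa))

The priority queue is what distinguishes this from uniform Dyna-style planning: computation is concentrated on the states whose value estimates are most in need of revision, which is how the method achieves the paper's claimed efficiency on large state spaces.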

Cite

Text

Moore and Atkeson. "Memory-Based Reinforcement Learning: Efficient Computation with Prioritized Sweeping." Neural Information Processing Systems, 1992.

Markdown

[Moore and Atkeson. "Memory-Based Reinforcement Learning: Efficient Computation with Prioritized Sweeping." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/moore1992neurips-memorybased/)

BibTeX

@inproceedings{moore1992neurips-memorybased,
  title     = {{Memory-Based Reinforcement Learning: Efficient Computation with Prioritized Sweeping}},
  author    = {Moore, Andrew W. and Atkeson, Christopher G.},
  booktitle = {Neural Information Processing Systems},
  year      = {1992},
  pages     = {263--270},
  url       = {https://mlanthology.org/neurips/1992/moore1992neurips-memorybased/}
}