Kalman Filter Control Embedded into the Reinforcement Learning Framework

Abstract

There is a growing interest in using Kalman filter models in brain modeling. The question arises whether Kalman filter models can be used on-line not only for estimation but also for control. The usual method of computing optimal control for Kalman filter models relies on off-line backward recursion, which is not satisfactory for this purpose. Here, it is shown that a slight modification of the linear-quadratic-Gaussian Kalman filter model allows the optimal control to be estimated on-line by means of reinforcement learning, overcoming this difficulty. Moreover, the emerging learning rule for value estimation exhibits a Hebbian form, weighted by the error of the value estimation.
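As an illustration of the kind of combination the abstract describes (not the authors' exact algorithm), the sketch below runs a standard one-dimensional Kalman filter on a linear-Gaussian system and learns a value estimate on the filtered state with TD(0). The resulting weight update, `w += alpha * delta * phi(x_hat)`, is Hebbian in form: feature activity multiplied by the scalar value-estimation error. The system parameters, features, and the linear policy are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D linear-Gaussian system:
#   x' = a*x + b*u + process noise,  y = x + observation noise.
a, b, q, r_obs = 0.9, 1.0, 0.01, 0.1

def kalman_step(x_hat, p, u, y):
    """One predict/update cycle of the scalar Kalman filter."""
    x_pred = a * x_hat + b * u          # predicted state mean
    p_pred = a * p * a + q              # predicted variance
    k = p_pred / (p_pred + r_obs)       # Kalman gain
    x_new = x_pred + k * (y - x_pred)   # correct with the innovation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# TD(0) value estimation on the filtered state, features phi(x) = [x, x^2].
w = np.zeros(2)
alpha, gamma = 0.05, 0.95
phi = lambda x: np.array([x, x * x])

x, x_hat, p = 1.0, 0.0, 1.0
for t in range(2000):
    u = -0.5 * x_hat                                   # simple linear policy (assumed)
    x = a * x + b * u + rng.normal(0.0, np.sqrt(q))    # true (hidden) state
    y = x + rng.normal(0.0, np.sqrt(r_obs))            # noisy observation
    x_hat_new, p = kalman_step(x_hat, p, u, y)
    reward = -(x_hat_new ** 2)                         # quadratic cost as negative reward
    # Scalar value-estimation (TD) error ...
    delta = reward + gamma * w @ phi(x_hat_new) - w @ phi(x_hat)
    # ... and a Hebbian update: presynaptic features times the error.
    w += alpha * delta * phi(x_hat)
    x_hat = x_hat_new
```

The point of the sketch is the shape of the final update rule: the same state-estimate features that drive the filter also drive value learning, gated by a single scalar error, matching the error-weighted Hebbian form the abstract refers to.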

Cite

Text

Szita and Lörincz. "Kalman Filter Control Embedded into the Reinforcement Learning Framework." Neural Computation, 2004. doi:10.1162/089976604772744884

Markdown

[Szita and Lörincz. "Kalman Filter Control Embedded into the Reinforcement Learning Framework." Neural Computation, 2004.](https://mlanthology.org/neco/2004/szita2004neco-kalman/) doi:10.1162/089976604772744884

BibTeX

@article{szita2004neco-kalman,
  title     = {{Kalman Filter Control Embedded into the Reinforcement Learning Framework}},
  author    = {Szita, Istvan and Lörincz, András},
  journal   = {Neural Computation},
  year      = {2004},
  pages     = {491--499},
  doi       = {10.1162/089976604772744884},
  volume    = {16},
  url       = {https://mlanthology.org/neco/2004/szita2004neco-kalman/}
}