State Similarity Based Approach for Improving Performance in RL

Abstract

This paper employs state similarity to improve reinforcement learning performance. This is achieved by first identifying states with similar sub-policies. Then, a tree is constructed to locate common action sequences of states, as derived from possible optimal policies. Such sequences are used to define a similarity function between states, which makes it possible to reflect updates on the action-value function of a state onto all similar states. As a result, the experience acquired during learning can be applied in a broader context. The effectiveness of the method is demonstrated empirically.
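The core mechanism the abstract describes, propagating a Q-value update from a state to its similar states, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the similarity scores are assumed given (the paper derives them from a tree of common action sequences), and the names `q_update`, `similar`, and the scaling by the similarity score are hypothetical choices for this sketch.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor

def q_update(Q, similar, s, a, r, s_next, actions):
    """One Q-learning step on (s, a), reflected onto states similar to s.

    `similar` maps a state to a list of (state, similarity) pairs;
    the temporal-difference update is reused for each similar state,
    scaled by its similarity score (an assumption of this sketch).
    """
    target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
    delta = target - Q[(s, a)]
    Q[(s, a)] += ALPHA * delta
    # Reflect the same update onto similar states, weighted by similarity.
    for s2, sim in similar.get(s, []):
        Q[(s2, a)] += ALPHA * sim * delta
    return delta

Q = defaultdict(float)
# Suppose states 0 and 1 were identified as similar with score 0.8.
similar = {0: [(1, 0.8)]}
q_update(Q, similar, s=0, a="left", r=1.0, s_next=2, actions=["left", "right"])
```

After this single update, state 1 has received a scaled share of state 0's experience without ever being visited, which is how the acquired experience is applied in a broader context.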

Cite

Text

Girgin et al. "State Similarity Based Approach for Improving Performance in RL." International Joint Conference on Artificial Intelligence, 2007.

Markdown

[Girgin et al. "State Similarity Based Approach for Improving Performance in RL." International Joint Conference on Artificial Intelligence, 2007.](https://mlanthology.org/ijcai/2007/girgin2007ijcai-state/)

BibTeX

@inproceedings{girgin2007ijcai-state,
  title     = {{State Similarity Based Approach for Improving Performance in RL}},
  author    = {Girgin, Sertan and Polat, Faruk and Alhajj, Reda},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2007},
  pages     = {817--822},
  url       = {https://mlanthology.org/ijcai/2007/girgin2007ijcai-state/}
}