RL-CD: Dealing with Non-Stationarity in Reinforcement Learning

Abstract

This student abstract describes ongoing investigations into an approach for dealing with non-stationarity in reinforcement learning (RL) problems. We briefly propose and describe a method for managing multiple partial models of the environment, and comment on previous results showing that the proposed mechanism achieves better convergence times than standard RL algorithms. Current efforts include the development of a more robust approach, capable of dealing with noisy environments, as well as investigations into the possibility of using partial models to alleviate learning problems in systems with an explosive number of states.
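The core idea — keep several partial models of the environment, track how well each one predicts recent transitions, and spawn a new model when none of them predicts well — can be sketched as follows. This is a simplified illustration, not the paper's exact RL-CD algorithm: the class names, the quality-trace update rule, and all constants (`NUM_STATES`, `rho`, `threshold`, the initial quality of a new model) are assumptions made for exposition.

```python
NUM_STATES = 5  # small discrete environment, single action, for illustration


class PartialModel:
    """A partial model of one environment context: transition counts
    plus a 'quality' trace of recent prediction accuracy."""

    def __init__(self, quality=0.5):  # new models start with neutral quality
        self.counts = {}              # (s, a) -> {s_next: count}
        self.quality = quality

    def predict_prob(self, s, a, s_next):
        dist = self.counts.get((s, a))
        if not dist:
            return 1.0 / NUM_STATES   # uniform prior for unseen pairs
        return dist.get(s_next, 0) / sum(dist.values())

    def update(self, s, a, s_next):
        dist = self.counts.setdefault((s, a), {})
        dist[s_next] = dist.get(s_next, 0) + 1


class ContextManager:
    """Maintains several partial models; activates the best-predicting one
    and spawns a new model when even the best predicts poorly."""

    def __init__(self, rho=0.9, threshold=0.2):
        self.rho = rho                # decay rate of the quality trace
        self.threshold = threshold    # below this, assume a context change
        self.models = [PartialModel()]
        self.active = self.models[0]

    def step(self, s, a, s_next):
        # Update every model's quality from how well it predicted this transition.
        for m in self.models:
            p = m.predict_prob(s, a, s_next)
            m.quality = self.rho * m.quality + (1.0 - self.rho) * p
        best = max(self.models, key=lambda m: m.quality)
        if best.quality < self.threshold:
            # No existing model explains recent experience: new context.
            best = PartialModel()
            self.models.append(best)
        self.active = best
        best.update(s, a, s_next)     # only the active model keeps learning
```

Run against an environment whose dynamics change (e.g., states cycling forward for a while, then backward), and the manager ends up with one partial model per context, switching `active` when the dynamics switch instead of relearning a single model from scratch.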

Cite

Text

da Silva et al. "RL-CD: Dealing with Non-Stationarity in Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2006.

Markdown

[da Silva et al. "RL-CD: Dealing with Non-Stationarity in Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2006.](https://mlanthology.org/aaai/2006/dasilva2006aaai-rl/)

BibTeX

@inproceedings{dasilva2006aaai-rl,
  title     = {{RL-CD: Dealing with Non-Stationarity in Reinforcement Learning}},
  author    = {da Silva, Bruno Castro and Basso, Eduardo W. and Bazzan, Ana L. C. and Engel, Paulo Martins},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2006},
  pages     = {1863--1864},
  url       = {https://mlanthology.org/aaai/2006/dasilva2006aaai-rl/}
}