Deep Ordinal Reinforcement Learning

Abstract

Reinforcement learning usually makes use of numerical rewards, which have nice properties but also come with drawbacks and difficulties. Using rewards on an ordinal scale (ordinal rewards) is an alternative to numerical rewards that has received increasing attention in recent years. In this paper, a general approach to adapting reinforcement learning problems to the use of ordinal rewards is presented and motivated. We show how to convert common reinforcement learning algorithms to ordinal variants, using Q-learning as an example, and introduce Ordinal Deep Q-Networks, which adapt deep reinforcement learning to ordinal rewards. Additionally, we run evaluations on problems provided by the OpenAI Gym framework, showing that our ordinal variants achieve performance comparable to their numerical counterparts on a number of problems. We also provide first evidence that the ordinal variant can produce better results for problems with less engineered, simpler-to-design reward signals.
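
The abstract only hints at how ordinal rewards might replace numerical ones in a Q-learning-style agent. The sketch below is a minimal, hypothetical illustration of the general idea in a tabular setting: instead of a scalar Q-value, each state-action pair keeps a count distribution over K ordinal reward ranks, and actions are compared through a pairwise superiority measure over those distributions. The class and method names, the Laplace prior, the bandit-style update (which omits the temporal-difference backup a full ordinal Q-learning algorithm would use), and the particular superiority measure are assumptions chosen for illustration, not necessarily the paper's exact formulation.

```python
import random
from collections import defaultdict

import numpy as np

# Illustrative sketch only: a tabular agent that tracks, per (state, action),
# a count distribution over K ordinal reward ranks instead of a scalar Q-value.
# The pairwise "superiority" comparison and all names are assumptions made for
# this example, not the paper's exact method.

K = 3  # number of ordinal reward ranks (0 = worst, K-1 = best); assumed here


class OrdinalAgent:
    def __init__(self, n_actions, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        # counts[state][action][k] = how often ordinal rank k was observed;
        # initialized to ones as a simple Laplace prior.
        self.counts = defaultdict(lambda: np.ones((n_actions, K)))

    def _superiority(self, p, q):
        # P(rank under p > rank under q) + 0.5 * P(tie): one simple way to
        # compare two ordinal distributions without assuming numeric rewards.
        win = sum(p[i] * q[j] for i in range(K) for j in range(i))
        tie = sum(p[i] * q[i] for i in range(K))
        return win + 0.5 * tie

    def act(self, state):
        # Epsilon-greedy exploration over the superiority-based ranking.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        dists = self.counts[state] / self.counts[state].sum(axis=1, keepdims=True)
        # Score each action by its average superiority over the alternatives.
        scores = [
            np.mean([self._superiority(dists[a], dists[b])
                     for b in range(self.n_actions) if b != a])
            for a in range(self.n_actions)
        ]
        return int(np.argmax(scores))

    def update(self, state, action, ordinal_rank):
        # Instead of a Bellman backup on scalar rewards, record the observed rank.
        self.counts[state][action][ordinal_rank] += 1
```

In this simplified view, the learning signal never needs to be a number: the environment only has to report which ordinal rank an outcome falls into, which matches the abstract's point about simpler-to-design reward signals.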

Cite

Text

Zap et al. "Deep Ordinal Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2019. doi:10.1007/978-3-030-46133-1_1

Markdown

[Zap et al. "Deep Ordinal Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2019.](https://mlanthology.org/ecmlpkdd/2019/zap2019ecmlpkdd-deep/) doi:10.1007/978-3-030-46133-1_1

BibTeX

@inproceedings{zap2019ecmlpkdd-deep,
  title     = {{Deep Ordinal Reinforcement Learning}},
  author    = {Zap, Alexander and Joppen, Tobias and Fürnkranz, Johannes},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2019},
  pages     = {3--18},
  doi       = {10.1007/978-3-030-46133-1_1},
  url       = {https://mlanthology.org/ecmlpkdd/2019/zap2019ecmlpkdd-deep/}
}