Empirical Evaluation Methods for Multiobjective Reinforcement Learning Algorithms
Abstract
While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms.
Cite
Text
Vamplew et al. "Empirical Evaluation Methods for Multiobjective Reinforcement Learning Algorithms." Machine Learning, 2011. doi:10.1007/s10994-010-5232-5
Markdown
[Vamplew et al. "Empirical Evaluation Methods for Multiobjective Reinforcement Learning Algorithms." Machine Learning, 2011.](https://mlanthology.org/mlj/2011/vamplew2011mlj-empirical/) doi:10.1007/s10994-010-5232-5
BibTeX
@article{vamplew2011mlj-empirical,
title = {{Empirical Evaluation Methods for Multiobjective Reinforcement Learning Algorithms}},
author = {Vamplew, Peter and Dazeley, Richard and Berry, Adam and Issabekov, Rustam and Dekker, Evan},
journal = {Machine Learning},
year = {2011},
pages = {51--80},
doi = {10.1007/s10994-010-5232-5},
volume = {84},
url = {https://mlanthology.org/mlj/2011/vamplew2011mlj-empirical/}
}