The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm
Abstract
In this paper, we consider multi-objective reinforcement learning, which arises in many real-world problems with multiple optimization goals. We approach the problem with a max-min framework that focuses on fairness among the multiple goals, and we develop both supporting theory and a practical model-free algorithm under this framework. The theory constitutes an advance in multi-objective reinforcement learning, and the proposed algorithm demonstrates a notable performance improvement over existing baseline methods.
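To make the max-min criterion concrete, here is a minimal sketch (not the paper's algorithm) of the idea: among candidate policies, prefer the one whose worst per-objective return is largest, which favors fairness across objectives. The policy names and values below are purely illustrative.

```python
# Hypothetical illustration of the max-min criterion in
# multi-objective RL: score each candidate policy by its worst
# (minimum) per-objective return, then pick the best such policy.

def max_min_policy(policy_values):
    """policy_values: dict mapping policy name -> list of per-objective returns."""
    # max over policies of the min over objectives
    return max(policy_values, key=lambda p: min(policy_values[p]))

values = {
    "pi_A": [3.0, 0.5],  # strong on objective 1, weak on objective 2
    "pi_B": [2.0, 1.8],  # balanced across both objectives
}
print(max_min_policy(values))  # -> pi_B: its worst objective (1.8) beats pi_A's (0.5)
```

Under this criterion the balanced policy wins even though the other policy has a higher best-case return, which is exactly the fairness-oriented behavior the max-min formulation targets.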
Cite
Text
Park et al. "The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm." International Conference on Machine Learning, 2024.
Markdown
[Park et al. "The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/park2024icml-maxmin/)
BibTeX
@inproceedings{park2024icml-maxmin,
  title     = {{The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm}},
  author    = {Park, Giseung and Byeon, Woohyeon and Kim, Seongmin and Havakuk, Elad and Leshem, Amir and Sung, Youngchul},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {39616--39642},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/park2024icml-maxmin/}
}