Reward Distance Comparisons Under Transition Sparsity
Abstract
Reward comparisons are vital for evaluating differences in agent behaviors induced by a set of reward functions. Most conventional techniques utilize the input reward functions to learn optimized policies, which are then used to compare agent behaviors. However, learning these policies can be computationally expensive and can also raise safety concerns. Direct reward comparison techniques obviate policy learning but suffer from transition sparsity, where only a small subset of transitions is sampled due to data collection challenges and feasibility constraints. Existing state-of-the-art direct reward comparison methods are ill-suited for these sparse conditions since they require high transition coverage, where the majority of transitions from a given coverage distribution are sampled. When this requirement is not satisfied, a distribution mismatch between sampled and expected transitions can occur, leading to significant errors. This paper introduces the Sparsity Resilient Reward Distance (SRRD) pseudometric, designed to eliminate the need for high transition coverage by accommodating diverse sample distributions, which are common under transition sparsity. We provide theoretical justification for SRRD's robustness and conduct experiments to demonstrate its practical efficacy across multiple domains.
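To make the direct-comparison setting concrete, the minimal sketch below scores two reward functions with a Pearson-correlation-based distance computed over a batch of sampled (s, a, s') transitions, then contrasts a densely sampled batch with a sparsely sampled one. This is an illustrative example in the spirit of existing direct comparison pseudometrics, not the SRRD definition from the paper; the tabular reward functions, the sampling scheme, and the names (`pearson_distance`, `sampled_distance`) are assumptions introduced here.

```python
# Illustrative sketch only (assumed setup, not the paper's SRRD): a direct
# reward comparison that scores two reward functions by a Pearson-based
# distance over a batch of sampled transitions.
import numpy as np

def pearson_distance(r_a: np.ndarray, r_b: np.ndarray) -> float:
    """Distance in [0, 1] derived from the Pearson correlation of two
    reward vectors evaluated on the same batch of transitions."""
    rho = np.clip(np.corrcoef(r_a, r_b)[0, 1], -1.0, 1.0)
    return float(np.sqrt((1.0 - rho) / 2.0))

rng = np.random.default_rng(0)
n_states, n_actions = 50, 4

# Two tabular reward functions over (s, a, s') triples; reward_b is a scaled,
# lightly perturbed copy of reward_a, so a faithful comparison should report
# a small distance between them.
reward_a = rng.normal(size=(n_states, n_actions, n_states))
reward_b = 2.0 * reward_a + 0.1 * rng.normal(size=reward_a.shape)

def sampled_distance(n_samples: int) -> float:
    """Estimate the distance from a random subset of transitions, mimicking
    the limited coverage available under transition sparsity."""
    s = rng.integers(n_states, size=n_samples)
    a = rng.integers(n_actions, size=n_samples)
    s_next = rng.integers(n_states, size=n_samples)
    return pearson_distance(reward_a[s, a, s_next], reward_b[s, a, s_next])

# The sparse estimate uses far fewer transitions; this is the regime where
# the coverage assumptions of direct comparison methods start to matter.
print("dense coverage  :", sampled_distance(20_000))
print("sparse coverage :", sampled_distance(30))
```

Under this kind of sparse sampling, the transitions that are actually scored can differ substantially from the coverage distribution a pseudometric assumes, which is the sampled-versus-expected mismatch the abstract describes.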
Cite
Text
Nyanhongo et al. "Reward Distance Comparisons Under Transition Sparsity." Transactions on Machine Learning Research, 2025.
Markdown
[Nyanhongo et al. "Reward Distance Comparisons Under Transition Sparsity." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/nyanhongo2025tmlr-reward/)
BibTeX
@article{nyanhongo2025tmlr-reward,
title = {{Reward Distance Comparisons Under Transition Sparsity}},
author = {Nyanhongo, Clement and Henrique, Bruno Miranda and Santos, Eugene},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/nyanhongo2025tmlr-reward/}
}