Switching the Loss Reduces the Cost in Batch Reinforcement Learning

Abstract

We propose training fitted Q-iteration with log-loss (FQI-LOG) for batch reinforcement learning (RL). We show that the number of samples needed to learn a near-optimal policy with FQI-LOG scales with the accumulated cost of the optimal policy, which is zero in problems where acting optimally achieves the goal and incurs no cost. In doing so, we provide a general framework for proving small-cost bounds, i.e., bounds that scale with the optimal achievable cost, in batch RL. Moreover, we empirically verify that FQI-LOG uses fewer samples than FQI trained with squared loss on problems where the optimal policy reliably achieves the goal.
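To make the loss switch concrete, below is a minimal illustrative sketch of fitted Q-iteration with log-loss versus squared loss. It is not the authors' implementation: the synthetic batch, the one-hot feature map, the sigmoid-linear Q-function scaled to [0, V_max], and all hyperparameters (`n_iters`, `n_grad_steps`, `lr`) are assumptions made here for illustration, in a cost-minimizing MDP with per-step costs in [0, 1].

```python
# Minimal sketch of FQI with log-loss (FQI-LOG) vs. squared loss.
# Assumptions (not from the paper's code): costs in [0, 1], a fixed batch of
# transitions, a sigmoid-linear Q-function over one-hot (state, action)
# features, and full-batch gradient descent at each fitting step.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9
v_max = 1.0 / (1.0 - gamma)          # upper bound on discounted cost-to-go

# Synthetic batch of transitions (s, a, cost, s') from a behaviour policy.
batch_size = 2000
S = rng.integers(n_states, size=batch_size)
A = rng.integers(n_actions, size=batch_size)
C = rng.uniform(0.0, 1.0, size=batch_size)          # per-step costs in [0, 1]
S_next = rng.integers(n_states, size=batch_size)

def features(s, a):
    """One-hot feature vector for a (state, action) pair."""
    x = np.zeros(n_states * n_actions)
    x[s * n_actions + a] = 1.0
    return x

X = np.stack([features(s, a) for s, a in zip(S, A)])

def q_values(theta):
    """Predicted cost-to-go in [0, v_max] for every (state, action) pair."""
    logits = theta.reshape(n_states, n_actions)
    return v_max / (1.0 + np.exp(-logits))

def fqi(loss="log", n_iters=20, n_grad_steps=200, lr=0.5):
    theta = np.zeros(n_states * n_actions)
    for _ in range(n_iters):
        q = q_values(theta)
        # Regression targets: cost plus discounted best (minimum) next value,
        # normalized to [0, 1] so both losses see the same targets.
        y = np.clip(C + gamma * q[S_next].min(axis=1), 0.0, v_max) / v_max
        for _ in range(n_grad_steps):
            p = 1.0 / (1.0 + np.exp(-(X @ theta)))   # normalized prediction
            if loss == "log":
                # Binary cross-entropy (log-loss) gradient w.r.t. the logits.
                grad = X.T @ (p - y) / batch_size
            else:
                # Squared-loss gradient picks up the sigmoid derivative p(1-p).
                grad = X.T @ ((p - y) * p * (1.0 - p)) / batch_size
            theta -= lr * grad
    return q_values(theta)

print("FQI-LOG Q-values:\n", fqi("log"))
print("FQI (squared loss) Q-values:\n", fqi("sq"))
```

The only difference between the two variants is the loss used at the fitting step; the targets, features, and iteration structure are identical, which is what lets the sample-complexity comparison in the abstract isolate the effect of switching the loss.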

Cite

Text

Ayoub et al. "Switching the Loss Reduces the Cost in Batch Reinforcement Learning." International Conference on Machine Learning, 2024.

Markdown

[Ayoub et al. "Switching the Loss Reduces the Cost in Batch Reinforcement Learning." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/ayoub2024icml-switching/)

BibTeX

@inproceedings{ayoub2024icml-switching,
  title     = {{Switching the Loss Reduces the Cost in Batch Reinforcement Learning}},
  author    = {Ayoub, Alex and Wang, Kaiwen and Liu, Vincent and Robertson, Samuel and McInerney, James and Liang, Dawen and Kallus, Nathan and Szepesvari, Csaba},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {2135--2158},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/ayoub2024icml-switching/}
}