Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making

Abstract

The Markov assumption (MA) is fundamental to the empirical validity of reinforcement learning. In this paper, we propose a novel Forward-Backward Learning procedure to test MA in sequential decision making. The proposed test does not assume any parametric form for the joint distribution of the observed data and plays an important role in identifying the optimal policy in high-order Markov decision processes (MDPs) and partially observable MDPs. Theoretically, we establish the validity of our test. Empirically, we apply our test to both synthetic datasets and a real data example from mobile health studies to illustrate its usefulness.
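
For context, the null hypothesis underlying such a test is the standard Markov property of the observed state-action process. The formulation below, in generic notation (states S_t, actions A_t), is a sketch of that hypothesis and may differ in detail from the paper's exact statement:

% Markov assumption as a conditional-independence null hypothesis:
% given the current state-action pair, the next state is independent
% of all earlier history.
\[
  H_0:\;
  \mathbb{P}\bigl(S_{t+1} \in B \,\big|\, S_t, A_t, S_{t-1}, A_{t-1}, \ldots, S_0, A_0\bigr)
  =
  \mathbb{P}\bigl(S_{t+1} \in B \,\big|\, S_t, A_t\bigr),
  \quad \text{for all } t \ge 0 \text{ and measurable sets } B.
\]

Under the paper's framing, rejecting this null suggests that a first-order MDP model is inadequate, pointing instead toward high-order MDPs or partially observable MDPs.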

Cite

Text

Shi et al. "Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making." International Conference on Machine Learning, 2020.

Markdown

[Shi et al. "Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/shi2020icml-markov/)

BibTeX

@inproceedings{shi2020icml-markov,
  title     = {{Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making}},
  author    = {Shi, Chengchun and Wan, Runzhe and Song, Rui and Lu, Wenbin and Leng, Ling},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {8807--8817},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/shi2020icml-markov/}
}