Strategy Evaluation in Extensive Games with Importance Sampling

Abstract

Typically, agent evaluation is done through Monte Carlo estimation. However, stochastic agent decisions and stochastic outcomes can make this approach inefficient, requiring many samples for an accurate estimate. We present a new technique that can be used to simultaneously evaluate many strategies while playing a single strategy in the context of an extensive game. This technique is based on importance sampling, but utilizes two new mechanisms for significantly reducing variance in the estimates. We demonstrate its effectiveness in the domain of poker, where stochasticity makes traditional evaluation problematic.
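To make the core idea concrete, here is a minimal sketch (not the paper's method, which adds two variance-reduction mechanisms) of plain importance sampling for off-policy strategy evaluation: episodes are played with one behavior strategy, and a different target strategy is evaluated by reweighting each observed payoff by the ratio of the strategies' probabilities for the action actually taken. The toy one-decision game and all names here are illustrative assumptions, not from the paper.

```python
import random

def play_episode(strategy):
    """Toy one-decision game (an illustrative stand-in for a poker hand).
    The agent picks an action under `strategy`; the payoff of "call" is
    stochastic, mimicking a chance node. Returns (action, payoff)."""
    actions = ["fold", "call"]
    action = random.choices(actions, weights=[strategy[a] for a in actions])[0]
    payoff = {"fold": 0.0, "call": random.choice([-1.0, 2.0])}[action]
    return action, payoff

def importance_sampling_estimate(behavior, target, num_episodes=100_000):
    """Estimate the target strategy's expected payoff from episodes played
    with the behavior strategy. Each sampled payoff is weighted by
    target_prob / behavior_prob for the action taken; the probabilities of
    chance outcomes appear in both numerator and denominator and cancel."""
    total = 0.0
    for _ in range(num_episodes):
        action, payoff = play_episode(behavior)
        weight = target[action] / behavior[action]
        total += weight * payoff
    return total / num_episodes
```

For example, with `behavior = {"fold": 0.5, "call": 0.5}` and `target = {"fold": 0.2, "call": 0.8}`, the estimate converges to the target's true expected payoff (0.8 × 0.5 = 0.4) without ever playing the target strategy. The weights also show why variance is the central problem the paper attacks: rarely sampled actions with large ratios inflate the estimator's variance.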

Cite

Text

Bowling et al. "Strategy Evaluation in Extensive Games with Importance Sampling." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390166

Markdown

[Bowling et al. "Strategy Evaluation in Extensive Games with Importance Sampling." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/bowling2008icml-strategy/) doi:10.1145/1390156.1390166

BibTeX

@inproceedings{bowling2008icml-strategy,
  title     = {{Strategy Evaluation in Extensive Games with Importance Sampling}},
  author    = {Bowling, Michael H. and Johanson, Michael and Burch, Neil and Szafron, Duane},
  booktitle = {International Conference on Machine Learning},
  year      = {2008},
  pages     = {72--79},
  doi       = {10.1145/1390156.1390166},
  url       = {https://mlanthology.org/icml/2008/bowling2008icml-strategy/}
}