Toward Optimal Stratification for Stratified Monte-Carlo Integration

Abstract

We consider the problem of adaptive stratified sampling for Monte Carlo integration of a function, given a finite number of function evaluations perturbed by noise. Here we address the problem of simultaneously adapting both the number of samples drawn in each stratum and the stratification itself. We show a trade-off in the size of the partition. On the one hand, it is important to refine the partition in areas where the observation noise or the function is heterogeneous, in order to reduce this variability. On the other hand, an overly refined stratification makes it harder to assign the samples according to a near-optimal (oracle) allocation strategy. In this paper we provide an algorithm, *Monte-Carlo Upper-Lower Confidence Bound*, that selects online, among a large class of partitions, the partition providing a near-optimal trade-off, and allocates the samples almost optimally on that partition.
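For context, the baseline the paper improves on is plain stratified Monte Carlo integration over a *fixed* partition. Below is a minimal sketch of that baseline for a function on [0, 1], using equal-width strata and uniform sample allocation; it is an illustrative assumption, not the paper's adaptive MC-ULCB algorithm, which instead chooses the partition and allocation online.

```python
import random

def stratified_mc(f, n_strata, n_samples):
    """Estimate the integral of f over [0, 1] by stratified sampling:
    split [0, 1] into n_strata equal-width strata and draw the same
    number of uniform samples in each (uniform allocation, not the
    near-optimal oracle allocation discussed in the paper)."""
    per_stratum = n_samples // n_strata
    width = 1.0 / n_strata
    estimate = 0.0
    for k in range(n_strata):
        lo = k * width
        # Empirical mean of f over stratum k, weighted by the stratum width.
        mean_k = sum(f(lo + width * random.random())
                     for _ in range(per_stratum)) / per_stratum
        estimate += width * mean_k
    return estimate

random.seed(0)
# Example: the integral of x^2 over [0, 1] is 1/3.
approx = stratified_mc(lambda x: x * x, n_strata=10, n_samples=1000)
```

Within each stratum the function varies less than over the whole domain, so the variance of this estimator is lower than that of crude Monte Carlo with the same budget; the paper's contribution is choosing the stratification and per-stratum allocation adaptively rather than fixing them in advance.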

Cite

Text

Carpentier and Munos. "Toward Optimal Stratification for Stratified Monte-Carlo Integration." International Conference on Machine Learning, 2013.

Markdown

[Carpentier and Munos. "Toward Optimal Stratification for Stratified Monte-Carlo Integration." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/carpentier2013icml-optimal/)

BibTeX

@inproceedings{carpentier2013icml-optimal,
  title     = {{Toward Optimal Stratification for Stratified Monte-Carlo Integration}},
  author    = {Carpentier, Alexandra and Munos, Rémi},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {28--36},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/carpentier2013icml-optimal/}
}