Markov Chain Importance Sampling for Minibatches

Abstract

This study investigates importance sampling in the setting of minibatch stochastic gradient descent, making two contributions. First, theoretically, we develop a neat tilting formula that can be regarded as a general device for asymptotically optimal importance sampling. Second, practically, guided by this formula, we present an effective importance sampling algorithm that accounts for the effects of minibatches and leverages the Markovian property of the gradients between iterations. Experiments on artificial data confirm that our algorithm consistently delivers superior variance reduction. Furthermore, experiments on real-world data demonstrate that our method, when paired with relatively simple models such as multilayer perceptrons and convolutional neural networks, achieves better training loss and testing error.
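As a rough illustration of the general idea (not the paper's specific tilting formula), importance sampling for minibatch SGD draws training examples non-uniformly and reweights their gradients so the minibatch estimator remains unbiased for the uniform-average objective. A minimal sketch, assuming a toy least-squares problem and the common heuristic of sampling proportionally to per-example gradient norms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize (1/n) * sum_i (x_i @ w - y_i)^2.
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def per_example_grads(w, idx):
    """Gradient of each selected example's squared loss w.r.t. w."""
    residual = X[idx] @ w - y[idx]            # shape (batch,)
    return 2.0 * residual[:, None] * X[idx]   # shape (batch, d)

w = np.zeros(d)
batch, lr = 32, 0.01
for step in range(500):
    # Proposal distribution: sample proportionally to the current
    # per-example gradient norms (an importance-sampling heuristic).
    norms = np.abs(X @ w - y) * np.linalg.norm(X, axis=1) + 1e-12
    p = norms / norms.sum()
    idx = rng.choice(n, size=batch, p=p)
    # Reweight each sampled gradient by 1 / (n * p_i); taking the
    # expectation over idx recovers (1/n) * sum_i g_i, so the
    # estimator is unbiased for the full-batch gradient.
    weights = 1.0 / (n * p[idx])
    g = (weights[:, None] * per_example_grads(w, idx)).mean(axis=0)
    w -= lr * g

loss = np.mean((X @ w - y) ** 2)
print(f"final mean squared loss: {loss:.4f}")
```

With this particular proposal, each reweighted gradient has roughly equal magnitude, which is exactly the variance-reduction effect importance sampling aims for; the paper's method additionally exploits the Markovian dependence of gradients across iterations rather than recomputing the proposal from scratch.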

Cite

Text

Fuh et al. "Markov Chain Importance Sampling for Minibatches." Machine Learning, 2024. doi:10.1007/s10994-023-06489-5

Markdown

[Fuh et al. "Markov Chain Importance Sampling for Minibatches." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/fuh2024mlj-markov/) doi:10.1007/s10994-023-06489-5

BibTeX

@article{fuh2024mlj-markov,
  title     = {{Markov Chain Importance Sampling for Minibatches}},
  author    = {Fuh, Cheng-Der and Wang, Chuan-Ju and Pai, Chen-Hung},
  journal   = {Machine Learning},
  year      = {2024},
  pages     = {789--814},
  doi       = {10.1007/s10994-023-06489-5},
  volume    = {113},
  url       = {https://mlanthology.org/mlj/2024/fuh2024mlj-markov/}
}