KLD-Sampling: Adaptive Particle Filters

Abstract

In recent years, particle filters have been applied with great success to a variety of state estimation problems. We present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets on the fly. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter; the name derives from measuring this error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and a large number of samples if the state uncertainty is high. Both the implementation and the computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.
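The core of the method is a bound on the number of samples needed so that, with probability 1 − δ, the KL distance between the sample-based estimate and the true posterior stays below ε, where the bound grows with the number k of occupied bins in a discretization of the state space. The sketch below is a minimal illustration of that idea, not the paper's implementation: the names kld_sample_size, kld_sampling, draw_particle, and bin_of, the grid resolution, and the default ε and δ values are ours. The sample-size formula is the Wilson-Hilferty chi-square approximation the paper uses.

```python
import numpy as np
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Samples needed so that, with probability 1 - delta, the KL distance
    between the sample-based estimate and the true posterior is at most
    epsilon, given k occupied bins (Wilson-Hilferty approximation of the
    chi-square quantile)."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)        # upper (1 - delta) quantile of N(0, 1)
    d = 2.0 / (9.0 * (k - 1))
    return int(np.ceil((k - 1) / (2.0 * epsilon)
                       * (1.0 - d + np.sqrt(d) * z) ** 3))

def kld_sampling(draw_particle, bin_of, epsilon=0.05, delta=0.01,
                 n_min=10, n_max=100_000):
    """Draw particles until the KLD bound is met. bin_of maps a particle
    to a hashable grid cell of the state space; the required sample size
    is re-evaluated whenever a new cell becomes occupied."""
    particles, bins = [], set()
    n_required = n_min
    while len(particles) < max(n_required, n_min) and len(particles) < n_max:
        x = draw_particle()
        particles.append(x)
        b = bin_of(x)
        if b not in bins:            # a new bin raises the required size
            bins.add(b)
            n_required = kld_sample_size(len(bins), epsilon, delta)
    return particles

# Illustrative use: a 1-D proposal with a 0.1-wide grid. In a full filter,
# draw_particle would sample from the weighted prior set and apply the
# motion model before the bin lookup.
rng = np.random.default_rng(0)
particles = kld_sampling(
    draw_particle=lambda: rng.normal(0.0, 1.0),
    bin_of=lambda x: int(np.floor(x / 0.1)),
    epsilon=0.05, delta=0.01)
```

This matches the behavior claimed in the abstract: a focused density occupies few bins, so the loop stops early with a small sample set, while a spread-out density keeps occupying new bins and drives the required sample size up.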

Cite

Text

Fox. "KLD-Sampling: Adaptive Particle Filters." Neural Information Processing Systems, 2001.

Markdown

[Fox. "KLD-Sampling: Adaptive Particle Filters." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/fox2001neurips-kldsampling/)

BibTeX

@inproceedings{fox2001neurips-kldsampling,
  title     = {{KLD-Sampling: Adaptive Particle Filters}},
  author    = {Fox, Dieter},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {713--720},
  url       = {https://mlanthology.org/neurips/2001/fox2001neurips-kldsampling/}
}