Universal Sequential Learning and Decision from Individual Data Sequences

Abstract

Sequential learning and decision algorithms are investigated, with various application areas, under a family of additive loss functions for individual data sequences. Simple universal sequential schemes are known, under certain conditions, to approach optimality uniformly as fast as n⁻¹ log n, where n is the sample size. For the case of finite-alphabet observations, the class of schemes that can be implemented by finite-state machines (FSMs) is studied. It is shown that Markovian machines with sufficiently long memory exist that are asymptotically nearly as good as any given FSM (deterministic or randomized) for the purpose of sequential decision. For the continuous-valued observation case, a useful class of parametric schemes is discussed with special attention to the recursive least squares (RLS) algorithm.
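For readers unfamiliar with the recursive least squares algorithm mentioned above, here is a minimal textbook-style sketch of sequential (online) RLS with a forgetting factor. The parameter names (`lam`, `delta`) and the specific update form are standard but are not taken from the paper itself.

```python
import numpy as np

def rls(xs, ys, lam=1.0, delta=1.0):
    """Recursive least squares: sequentially fit w so that x_t @ w ≈ y_t.

    lam:   forgetting factor (1.0 gives ordinary growing-window RLS)
    delta: scaling of the initial inverse-correlation matrix P = I / delta
    A textbook sketch, not the specific scheme analyzed in the paper.
    """
    d = xs.shape[1]
    w = np.zeros(d)          # current parameter estimate
    P = np.eye(d) / delta    # inverse of the (regularized) correlation matrix
    for x, y in zip(xs, ys):
        Px = P @ x
        k = Px / (lam + x @ Px)            # gain vector
        e = y - w @ x                      # a priori prediction error
        w = w + k * e                      # sequential parameter update
        P = (P - np.outer(k, Px)) / lam    # rank-one inverse-matrix update
    return w
```

Each observation is processed once, in order, which is the sequential setting the abstract refers to: the estimate after t samples depends only on the data seen so far.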

Cite

Text

Merhav and Feder. "Universal Sequential Learning and Decision from Individual Data Sequences." Annual Conference on Computational Learning Theory, 1992. doi:10.1145/130385.130430

Markdown

[Merhav and Feder. "Universal Sequential Learning and Decision from Individual Data Sequences." Annual Conference on Computational Learning Theory, 1992.](https://mlanthology.org/colt/1992/merhav1992colt-universal/) doi:10.1145/130385.130430

BibTeX

@inproceedings{merhav1992colt-universal,
  title     = {{Universal Sequential Learning and Decision from Individual Data Sequences}},
  author    = {Merhav, Neri and Feder, Meir},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1992},
  pages     = {413-427},
  doi       = {10.1145/130385.130430},
  url       = {https://mlanthology.org/colt/1992/merhav1992colt-universal/}
}