Building Predictive Models from Fractal Representations of Symbolic Sequences
Abstract
We propose a novel approach for building finite memory predictive models similar in spirit to variable memory length Markov models (VLMMs). The models are constructed by first transforming the n-block structure of the training sequence into a spatial structure of points in a unit hypercube, such that the longer the common suffix shared by any two n-blocks, the closer their point representations lie. Such a transformation embodies a Markov assumption: n-blocks with long common suffixes are likely to produce similar continuations. Finding a set of prediction contexts is formulated as a resource allocation problem solved by vector quantizing the spatial n-block representation. We compare our model with both the classical and variable memory length Markov models on three data sets with different memory and stochastic components. Our models achieve superior performance, yet their construction is fully automatic, something that is shown to be problematic in the case of VLMMs.
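To make the pipeline concrete, here is a minimal Python sketch of the construction the abstract describes, assuming a chaos-game-style iterated function system encoding (each symbol mapped to a corner of the unit hypercube, points driven by x ← k·x + (1−k)·corner with contraction rate k = 0.5) and plain k-means as the vector quantizer. The function names, block length n, context count m, and Laplace smoothing are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from itertools import product
from sklearn.cluster import KMeans

def corner_map(alphabet):
    """Assign each symbol a distinct corner of [0,1]^d, d = ceil(log2 |A|)."""
    d = max(1, int(np.ceil(np.log2(len(alphabet)))))
    corners = list(product([0.0, 1.0], repeat=d))
    return {a: np.asarray(corners[i]) for i, a in enumerate(alphabet)}, d

def encode_block(block, corners, d, k=0.5):
    """Iterated-map encoding x <- k*x + (1-k)*corner(symbol).
    Blocks sharing a long common suffix end up close together,
    which is the Markov bias the abstract refers to."""
    x = np.full(d, 0.5)                      # start at the hypercube centre
    for s in block:
        x = k * x + (1 - k) * corners[s]
    return x

def fit_contexts(seq, n=4, m=4, k=0.5):
    """Quantize the n-block point cloud into m prediction contexts and
    collect next-symbol counts per context (Laplace-smoothed)."""
    alphabet = sorted(set(seq))
    corners, d = corner_map(alphabet)
    pts = np.array([encode_block(seq[i:i + n], corners, d, k)
                    for i in range(len(seq) - n)])
    nxt = [seq[i + n] for i in range(len(seq) - n)]
    vq = KMeans(n_clusters=m, n_init=10).fit(pts)
    counts = np.ones((m, len(alphabet)))     # add-one smoothing (assumption)
    idx = {a: j for j, a in enumerate(alphabet)}
    for label, s in zip(vq.labels_, nxt):
        counts[label, idx[s]] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return vq, probs, corners, d, alphabet

def predict(history, vq, probs, corners, d, alphabet, n=4, k=0.5):
    """Next-symbol distribution given the last n symbols of `history`."""
    x = encode_block(history[-n:], corners, d, k)
    ctx = vq.predict(x.reshape(1, -1))[0]
    return dict(zip(alphabet, probs[ctx]))

# Toy usage on a synthetic binary sequence (illustrative only):
seq = "abaabbabaabbabab" * 20
vq, probs, corners, d, alphabet = fit_contexts(seq, n=4, m=4)
print(predict(seq, vq, probs, corners, d, alphabet, n=4))
```

Each codebook centroid plays the role of a prediction context: a new n-block is routed to its nearest centroid, and the next symbol is predicted from the counts accumulated there, so the resource allocation happens automatically through the quantizer rather than through the context-pruning heuristics VLMMs rely on.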
Cite
Text
Tiňo and Dorffner. "Building Predictive Models from Fractal Representations of Symbolic Sequences." Neural Information Processing Systems, 1999.
Markdown
[Tiňo and Dorffner. "Building Predictive Models from Fractal Representations of Symbolic Sequences." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/tino1999neurips-building/)
BibTeX
@inproceedings{tino1999neurips-building,
  title     = {{Building Predictive Models from Fractal Representations of Symbolic Sequences}},
  author    = {Tiňo, Peter and Dorffner, Georg},
  booktitle = {Neural Information Processing Systems},
  year      = {1999},
  pages     = {645--651},
  url       = {https://mlanthology.org/neurips/1999/tino1999neurips-building/}
}