Improving Predictive State Representations via Gradient Descent

Abstract

Predictive state representations (PSRs) model dynamical systems using appropriately chosen predictions about future observations as a representation of the current state. In contrast to the hidden states posited by HMMs or RNNs, PSR states are directly observable in the training data; this gives rise to a moment-matching spectral algorithm for learning PSRs that is computationally efficient and statistically consistent when the model complexity matches that of the true system generating the data. In practice, however, model mismatch is inevitable, and while spectral learning remains appealingly fast and simple, it may fail to find optimal models. To address this problem, we investigate the use of gradient methods for improving spectrally-learned PSRs. We show that only a small amount of additional gradient optimization can lead to significant performance gains, and moreover that initializing gradient methods with the spectral learning solution yields better models in significantly less time than starting from scratch.
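The core idea — warm-starting local gradient optimization from a moment-matching estimate rather than from scratch — can be illustrated on a much simpler model than a PSR. The sketch below is not the paper's algorithm: it uses i.i.d. Bernoulli observations as a stand-in for a dynamical system, with the sample mean playing the role of the spectral (moment-matching) solution and gradient descent on average log-loss playing the role of the refinement step.

```python
# Toy illustration (NOT the paper's PSR algorithm): warm-starting
# gradient descent from a moment-matching estimate.
import math
import random

random.seed(0)
# Synthetic data: Bernoulli observations with true parameter 0.7.
data = [1 if random.random() < 0.7 else 0 for _ in range(1000)]

def grad(p, data):
    # Gradient of the average negative log-likelihood
    # -mean(x*log p + (1-x)*log(1-p)) with respect to p.
    return -sum((x / p) - ((1 - x) / (1 - p)) for x in data) / len(data)

def gradient_descent(p0, data, lr=0.1, steps=50):
    p = p0
    for _ in range(steps):
        p -= lr * grad(p, data)
        p = min(max(p, 1e-6), 1 - 1e-6)  # keep p inside (0, 1)
    return p

# Moment-matching analogue of spectral learning: match the first moment.
p_moment = sum(data) / len(data)

# Warm start from the moment estimate vs. cold start from 0.5.
p_warm = gradient_descent(p_moment, data)
p_cold = gradient_descent(0.5, data)
```

In this toy case the moment estimate is already a stationary point of the log-loss, so the warm start converges immediately while the cold start spends many iterations traveling toward the same optimum; the paper's setting is the interesting one where model mismatch makes the spectral solution a good but improvable starting point.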

Cite

Text

Jiang et al. "Improving Predictive State Representations via Gradient Descent." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/aaai.v30i1.10270

Markdown

[Jiang et al. "Improving Predictive State Representations via Gradient Descent." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/jiang2016aaai-improving/) doi:10.1609/aaai.v30i1.10270

BibTeX

@inproceedings{jiang2016aaai-improving,
  title     = {{Improving Predictive State Representations via Gradient Descent}},
  author    = {Jiang, Nan and Kulesza, Alex and Singh, Satinder},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {1709--1715},
  doi       = {10.1609/aaai.v30i1.10270},
  url       = {https://mlanthology.org/aaai/2016/jiang2016aaai-improving/}
}