Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems

Abstract

Many natural systems, such as neurons firing in the brain or basketball teams traversing a court, give rise to time series data with complex, nonlinear dynamics. We can gain insight into these systems by decomposing the data into segments that are each explained by simpler dynamic units. Building on switching linear dynamical systems (SLDS), we develop a model class and Bayesian inference algorithms that not only discover these dynamical units but also, by learning how transition probabilities depend on observations or continuous latent states, explain their switching behavior. Our key innovation is to design these recurrent SLDS models to enable recent Pólya-gamma auxiliary variable techniques and thus make approximate Bayesian learning and inference in these models easy, fast, and scalable.
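To make the model class concrete, below is a minimal generative sketch of a recurrent SLDS, assuming a stick-breaking logistic link from the previous continuous state to the next discrete state (the form the paper pairs with Pólya-gamma augmentation). All dimensions, parameter draws, and function names here are illustrative placeholders, not the paper's reference implementation.

```python
# Minimal rSLDS generative sketch (illustrative parameters, NumPy only).
import numpy as np

def stick_breaking_probs(logits):
    """Map K-1 real logits to a K-dimensional probability vector via stick breaking."""
    v = 1.0 / (1.0 + np.exp(-logits))                      # sigmoid of each logit
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)))
    probs = np.concatenate((v, [1.0])) * remaining
    return probs / probs.sum()                             # guard against round-off

def sample_rslds(T, K=3, D=2, N=10, seed=0):
    rng = np.random.default_rng(seed)
    # Per-state linear dynamics: x_t = A_k x_{t-1} + b_k + noise (illustrative draws).
    A = [0.95 * np.eye(D) + 0.05 * rng.standard_normal((D, D)) for _ in range(K)]
    b = [0.1 * rng.standard_normal(D) for _ in range(K)]
    # Recurrent transition weights: logits depend on x_{t-1}, plus a bias per previous z.
    R = rng.standard_normal((K - 1, D))
    r = rng.standard_normal((K, K - 1))
    # Linear-Gaussian emissions: y_t = C x_t + noise.
    C = rng.standard_normal((N, D))

    z = np.zeros(T, dtype=int)
    x = np.zeros((T, D))
    y = np.zeros((T, N))
    x[0] = rng.standard_normal(D)
    y[0] = C @ x[0] + 0.1 * rng.standard_normal(N)
    for t in range(1, T):
        # Discrete switch: transition probabilities depend on the continuous state.
        probs = stick_breaking_probs(R @ x[t - 1] + r[z[t - 1]])
        z[t] = rng.choice(K, p=probs)
        # Continuous step under the chosen linear dynamical unit.
        x[t] = A[z[t]] @ x[t - 1] + b[z[t]] + 0.1 * rng.standard_normal(D)
        y[t] = C @ x[t] + 0.1 * rng.standard_normal(N)
    return z, x, y

z, x, y = sample_rslds(T=200)
```

The stick-breaking link is what makes each discrete transition a sequence of Bernoulli-like choices, which is the structure the Pólya-gamma auxiliary variables exploit during inference.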

Cite

Text

Linderman et al. "Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown

[Linderman et al. "Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/linderman2017aistats-bayesian/)

BibTeX

@inproceedings{linderman2017aistats-bayesian,
  title     = {{Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems}},
  author    = {Linderman, Scott W. and Johnson, Matthew J. and Miller, Andrew C. and Adams, Ryan P. and Blei, David M. and Paninski, Liam},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2017},
  pages     = {914--922},
  url       = {https://mlanthology.org/aistats/2017/linderman2017aistats-bayesian/}
}