Linear Dynamical Neural Population Models Through Nonlinear Embeddings
Abstract
A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches have focused on linear generative models, where inference is computationally tractable. Here, we propose fLDS, a general class of nonlinear generative models that permits the firing rate of each neuron to vary as an arbitrary smooth function of a latent, linear dynamical state. This extra flexibility allows the model to capture a richer set of neural variability than a purely linear model, but retains an easily visualizable low-dimensional latent space. To fit this class of non-conjugate models we propose a variational inference scheme, along with a novel approximate posterior capable of capturing rich temporal correlations across time. We show that our techniques permit inference in a wide class of generative models. We also show in application to two neural datasets that, compared to state-of-the-art neural population models, fLDS captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability.
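As an illustrative aid, the sketch below shows the generative structure the abstract describes: a latent state evolving under linear dynamics, a smooth nonlinear embedding from the latent state to per-neuron firing rates, and Poisson spike counts. The dimensions, noise scales, and the random-weight network standing in for the learned embedding are all assumptions made for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_latent, n_neurons = 100, 2, 50   # time bins, latent dim, population size (assumed)

# Latent linear dynamical state: z_t = A z_{t-1} + eps_t
A = 0.95 * np.eye(d_latent)           # stable linear dynamics (assumed)
Q = 0.1 * np.eye(d_latent)            # innovation covariance (assumed)
z = np.zeros((T, d_latent))
for t in range(1, T):
    z[t] = A @ z[t - 1] + rng.multivariate_normal(np.zeros(d_latent), Q)

# Nonlinear embedding: each neuron's rate is a smooth function of z_t.
# A tiny random-weight MLP stands in here for the network fLDS would learn.
W1 = rng.normal(size=(d_latent, 32))
W2 = rng.normal(size=(32, n_neurons))
rates = np.exp(np.tanh(z @ W1) @ W2 * 0.1)   # smooth, positive firing rates

# Observed spike counts: Poisson given the per-bin rates.
spikes = rng.poisson(rates)           # shape (T, n_neurons)
```

Fitting the model, as opposed to sampling from it as above, is the non-conjugate inference problem the paper addresses with its variational scheme.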
Cite
Text
Gao et al. "Linear Dynamical Neural Population Models Through Nonlinear Embeddings." Neural Information Processing Systems, 2016.
Markdown
[Gao et al. "Linear Dynamical Neural Population Models Through Nonlinear Embeddings." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/gao2016neurips-linear/)
BibTeX
@inproceedings{gao2016neurips-linear,
  title     = {{Linear Dynamical Neural Population Models Through Nonlinear Embeddings}},
  author    = {Gao, Yuanjun and Archer, Evan W. and Paninski, Liam and Cunningham, John P.},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {163--171},
  url       = {https://mlanthology.org/neurips/2016/gao2016neurips-linear/}
}