Adversarial Training of Neural Encoding Models on Population Spike Trains

Abstract

Neural population responses to sensory stimuli can exhibit both nonlinear stimulus-dependence and richly structured shared variability. Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data. To account for the discrete nature of neural spike trains, we use the REBAR method to estimate unbiased gradients for adversarial optimization of neural encoding models. We illustrate our approach on population recordings from primary visual cortex. We show that adding latent noise sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity.
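
The sketch below illustrates the kind of setup the abstract describes: a convolutional encoding model with shared latent noise acting as the generator of spike trains, trained against a discriminator that scores (stimulus, population response) pairs. All names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' implementation, and a relaxed-Bernoulli surrogate stands in for the full REBAR estimator (REBAR additionally combines a REINFORCE term with a concrete-relaxation control variate to obtain unbiased gradients through the discrete spikes).

# Minimal PyTorch sketch of adversarial training of a neural encoding model.
# Architecture details and the relaxed-Bernoulli surrogate are assumptions made
# for illustration; the paper itself uses REBAR for unbiased discrete gradients.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoding model: stimulus frame + latent noise -> (relaxed) spike responses."""
    def __init__(self, n_neurons, noise_dim=16):
        super().__init__()
        self.noise_dim = noise_dim
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.readout = nn.Linear(16 + noise_dim, n_neurons)

    def forward(self, stim, temperature=0.5):
        feats = self.conv(stim)
        # Shared latent noise induces correlated variability across neurons.
        z = torch.randn(stim.size(0), self.noise_dim, device=stim.device)
        logits = self.readout(torch.cat([feats, z], dim=1))
        # Differentiable surrogate for discrete spikes (biased; REBAR corrects this).
        dist = torch.distributions.RelaxedBernoulli(torch.tensor(temperature), logits=logits)
        return dist.rsample()

class Discriminator(nn.Module):
    """Scores (stimulus, population response) pairs as real or generated."""
    def __init__(self, n_neurons):
        super().__init__()
        self.stim_net = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(8 + n_neurons, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, stim, spikes):
        return self.head(torch.cat([self.stim_net(stim), spikes], dim=1))

def train_step(gen, disc, opt_g, opt_d, stim, real_spikes):
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(stim.size(0), 1, device=stim.device)
    zeros = torch.zeros(stim.size(0), 1, device=stim.device)
    # Discriminator update: real recorded responses vs. model samples.
    fake_spikes = gen(stim).detach()
    d_loss = bce(disc(stim, real_spikes), ones) + bce(disc(stim, fake_spikes), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: make model samples indistinguishable from data.
    g_loss = bce(disc(stim, gen(stim)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

Because the discriminator sees the stimulus together with the response, the generator is pushed to match both the stimulus-dependent mean response and the stimulus-conditioned shared variability, which is the point of the adversarial objective described above.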

Cite

Text

Ramesh et al. "Adversarial Training of Neural Encoding Models on Population Spike Trains." NeurIPS 2019 Workshops: Neuro_AI, 2019.

Markdown

[Ramesh et al. "Adversarial Training of Neural Encoding Models on Population Spike Trains." NeurIPS 2019 Workshops: Neuro_AI, 2019.](https://mlanthology.org/neuripsw/2019/ramesh2019neuripsw-adversarial/)

BibTeX

@inproceedings{ramesh2019neuripsw-adversarial,
  title     = {{Adversarial Training of Neural Encoding Models on Population Spike Trains}},
  author    = {Ramesh, Poornima and Atayi, Mohamad and Macke, Jakob H},
  booktitle = {NeurIPS 2019 Workshops: Neuro_AI},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/ramesh2019neuripsw-adversarial/}
}