Learning Disentangled Behavior Embeddings

Abstract

To understand the relationship between behavior and neural activity, experiments in neuroscience often involve an animal performing a repeated behavior, such as a motor task. Recent progress in computer vision and deep learning has shown great potential for the automated analysis of behavior by leveraging large, high-quality video datasets. In this paper, we design Disentangled Behavior Embedding (DBE) to learn robust behavioral embeddings from unlabeled, multi-view, high-resolution behavioral videos across different animals and multiple sessions. We further combine DBE with a stochastic temporal model to propose Variational Disentangled Behavior Embedding (VDBE), an end-to-end approach that learns meaningful discrete behavior representations and generates interpretable behavioral videos. Our models learn consistent behavior representations by explicitly disentangling the dynamic behavioral factors (pose) from time-invariant, non-behavioral nuisance factors (context) in a deep autoencoder, and exploit the temporal structure of pose dynamics. Compared to competing approaches, DBE and VDBE achieve superior performance on downstream tasks such as fine-grained behavioral motif generation and behavior decoding.
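
The central mechanism described above, splitting a frame autoencoder's latent code into a dynamic per-frame pose factor and a time-invariant per-video context factor, can be sketched in a few lines. The code below is a minimal illustration, not the authors' DBE implementation: the network sizes, the 64x64 grayscale input, the mean-pooling used to enforce a time-invariant context code, and all names (DisentangledAE, to_pose, to_context) are assumptions for illustration, and the paper's multi-view inputs, stochastic temporal model, and training objectives are omitted.

import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    """Toy frame autoencoder with a pose/context latent split (illustrative only)."""

    def __init__(self, pose_dim=16, context_dim=64):
        super().__init__()
        # Shared convolutional trunk over single 64x64 grayscale frames (assumed size).
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.Flatten(),
        )
        feat = 128 * 8 * 8
        self.to_pose = nn.Linear(feat, pose_dim)        # dynamic factor, one per frame
        self.to_context = nn.Linear(feat, context_dim)  # nuisance factor, one per video
        self.decoder = nn.Sequential(
            nn.Linear(pose_dim + context_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, video):
        # video: (batch, time, 1, 64, 64)
        B, T = video.shape[:2]
        h = self.trunk(video.flatten(0, 1))              # (B*T, feat)
        pose = self.to_pose(h).view(B, T, -1)            # varies frame to frame
        # Mean-pool over time so the context code cannot carry frame-level dynamics.
        context = self.to_context(h).view(B, T, -1).mean(dim=1, keepdim=True)
        z = torch.cat([pose, context.expand(-1, T, -1)], dim=-1)
        recon = self.decoder(z.flatten(0, 1)).view_as(video)
        return recon, pose, context.squeeze(1)

# Toy usage: 2 clips of 8 frames each, trained with a plain reconstruction loss.
video = torch.rand(2, 8, 1, 64, 64)
model = DisentangledAE()
recon, pose, context = model(video)
loss = nn.functional.mse_loss(recon, video)

Pooling the context code over frames is one simple way to encourage time-invariance; pairing it with a reconstruction loss over frames of the same video is a common training signal for this style of disentanglement.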

Cite

Text

Shi et al. "Learning Disentangled Behavior Embeddings." Neural Information Processing Systems, 2021.

Markdown

[Shi et al. "Learning Disentangled Behavior Embeddings." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/shi2021neurips-learning/)

BibTeX

@inproceedings{shi2021neurips-learning,
  title     = {{Learning Disentangled Behavior Embeddings}},
  author    = {Shi, Changhao and Schwartz, Sivan and Levy, Shahar and Achvat, Shay and Abboud, Maisan and Ghanayim, Amir and Schiller, Jackie and Mishne, Gal},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/shi2021neurips-learning/}
}