Contrastive Representations Make Planning Easy

Abstract

Probabilistic inference over time series data is challenging when observations are high-dimensional. In this paper, we show how inference questions relating to prediction and planning can have compact, closed-form solutions in terms of learned representations. The key idea is to apply a variant of contrastive learning to time series data. Prior work already shows that the representations learned by contrastive learning encode a probability ratio. By first extending this analysis to show that the marginal distribution over representations is Gaussian, we can then prove that the conditional distribution of future representations is also Gaussian. Taken together, these results show that a variant of temporal contrastive learning yields representations distributed according to a Gaussian Markov chain, a graphical model where inference (e.g., filtering, smoothing) has closed-form solutions. For example, in one special case, the problem of trajectory inference simply corresponds to linear interpolation between the initial and final state representations. We provide brief empirical results validating our theory.
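
To make the linear-interpolation special case concrete, below is a minimal sketch of trajectory inference in representation space. It assumes the representations `psi_init` and `psi_goal` have already been produced by a temporally contrastive encoder; the function name, the uniform spacing of waypoints, and the omission of Gaussian noise around the interpolant are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np

def interpolate_waypoints(psi_init, psi_goal, num_waypoints):
    """Infer intermediate waypoint representations by linearly
    interpolating between the initial and goal representations.

    psi_init, psi_goal: 1-D arrays holding the representations of the
    initial and final states (assumed to come from a temporally
    contrastive encoder, which is not shown here).
    """
    # Interior interpolation coefficients, excluding the two endpoints.
    alphas = np.linspace(0.0, 1.0, num_waypoints + 2)[1:-1]
    return np.stack([(1 - a) * psi_init + a * psi_goal for a in alphas])

# Usage: plan 3 waypoints between two 4-dimensional representations.
psi_0 = np.zeros(4)
psi_g = np.array([1.0, 2.0, 3.0, 4.0])
print(interpolate_waypoints(psi_0, psi_g, num_waypoints=3))
```

Under the paper's Gaussian Markov chain result, such interpolants can be read as the means of the closed-form posterior over intermediate representations; a full treatment would also track the posterior covariance.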

Cite

Text

Eysenbach et al. "Contrastive Representations Make Planning Easy." NeurIPS 2023 Workshops: GenPlan, 2023.

Markdown

[Eysenbach et al. "Contrastive Representations Make Planning Easy." NeurIPS 2023 Workshops: GenPlan, 2023.](https://mlanthology.org/neuripsw/2023/eysenbach2023neuripsw-contrastive/)

BibTeX

@inproceedings{eysenbach2023neuripsw-contrastive,
  title     = {{Contrastive Representations Make Planning Easy}},
  author    = {Eysenbach, Benjamin and Myers, Vivek and Levine, Sergey and Salakhutdinov, Ruslan},
  booktitle = {NeurIPS 2023 Workshops: GenPlan},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/eysenbach2023neuripsw-contrastive/}
}