Quantized Representations Prevent Dimensional Collapse in Self-Predictive RL
Abstract
Learning representations for reinforcement learning (RL) has shown much promise for continuous control. We propose an efficient representation learning method using only a self-supervised latent-state consistency loss. Our approach employs an encoder and a dynamics model to map observations to latent states and predict future latent states, respectively. We achieve high performance and prevent dimensional collapse by quantizing the latent representation such that the rank of the representation is empirically preserved. Our method, named iQRL: implicitly Quantized Reinforcement Learning, is straightforward, compatible with any model-free RL algorithm, and demonstrates excellent performance by outperforming other recently proposed representation learning methods in continuous control benchmarks from DeepMind Control Suite.
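The core idea described above, an encoder that maps observations to latent states, a dynamics model that predicts future latents, a self-supervised consistency loss between the two, and quantization of the latent to preserve rank, can be sketched in a toy numerical form. This is an illustrative sketch only, not the authors' implementation: the linear "encoder" and "dynamics" weights, the dimensions, and the finite-scalar-style `quantize` function are all hypothetical stand-ins, and real iQRL uses trained neural networks with a straight-through gradient for quantization.

```python
import numpy as np

def quantize(z, levels=8):
    # Snap each latent dimension to one of `levels` evenly spaced values in [-1, 1]
    # (finite-scalar-style quantization; illustrative, not the paper's exact scheme).
    z = np.tanh(z)                      # bound the latent to (-1, 1)
    step = 2.0 / (levels - 1)
    return np.round((z + 1.0) / step) * step - 1.0

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 4))   # toy linear "encoder": 16-dim obs -> 4-dim latent
W_dyn = rng.normal(size=(5, 4))    # toy "dynamics": (latent, action) -> next latent

def encode(obs):
    # Observations are mapped to a quantized latent state.
    return quantize(obs @ W_enc)

def predict(z, action):
    # The dynamics model predicts the next quantized latent state.
    return quantize(np.concatenate([z, [action]]) @ W_dyn)

# Self-supervised latent-state consistency loss: compare the predicted next
# latent against the encoding of the actually observed next observation.
obs, next_obs, action = rng.normal(size=16), rng.normal(size=16), 0.5
z, z_next = encode(obs), encode(next_obs)
loss = np.mean((predict(z, action) - z_next) ** 2)
```

Because every latent dimension is forced onto a fixed discrete grid, the representation cannot shrink onto a low-dimensional subspace of the latent space, which is the dimensional-collapse failure mode the quantization is meant to prevent.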
Cite
Text
Scannell et al. "Quantized Representations Prevent Dimensional Collapse in Self-Predictive RL." ICML 2024 Workshops: ARLET, 2024.
Markdown
[Scannell et al. "Quantized Representations Prevent Dimensional Collapse in Self-Predictive RL." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/scannell2024icmlw-quantized/)
BibTeX
@inproceedings{scannell2024icmlw-quantized,
title = {{Quantized Representations Prevent Dimensional Collapse in Self-Predictive RL}},
author = {Scannell, Aidan and Kujanpää, Kalle and Zhao, Yi and Nakhaeinezhadfard, Mohammadreza and Solin, Arno and Pajarinen, Joni},
booktitle = {ICML 2024 Workshops: ARLET},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/scannell2024icmlw-quantized/}
}