Implicit Two-Tower Policies

Abstract

We present a new class of structured reinforcement learning policy architectures, Implicit Two-Tower (ITT) policies, where actions are chosen based on the attention scores between their learnable latent representations and those of the input states. By explicitly disentangling action processing from state processing in the policy stack, we achieve two main goals: substantial computational gains and better performance. Our architectures are compatible with both discrete and continuous action spaces. Through tests on 15 environments from OpenAI Gym and the DeepMind Control Suite, we show that ITT architectures are particularly well suited for blackbox/evolutionary optimization, and that the corresponding policy training algorithms outperform their vanilla unstructured implicit counterparts as well as commonly used explicit policies. We complement our analysis by showing how techniques such as hashing and lazy tower updates, which rely critically on the two-tower structure of ITTs, can be applied to obtain additional computational improvements.
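
Below is a minimal PyTorch sketch of the two-tower idea for a discrete action space. It is an illustration under stated assumptions, not the authors' implementation: the tower sizes, latent dimension, and greedy argmax selection are all illustrative choices. It also hints at why lazy tower updates are possible: the action tower is state-independent, so its embeddings can be computed once per parameter update and reused across every environment step, which fits the gradient-free, evolutionary training regime the paper targets.

import torch
import torch.nn as nn

class ImplicitTwoTowerPolicy(nn.Module):
    """Sketch of an ITT-style policy for a discrete action space.

    A state tower embeds observations; an action tower embeds one
    learnable latent code per action. The policy scores each action by
    the dot product (attention score) of the two embeddings.
    """

    def __init__(self, state_dim, num_actions, latent_dim=32, embed_dim=64):
        super().__init__()
        # State tower: maps an observation to the shared embedding space.
        self.state_tower = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, embed_dim)
        )
        # Learnable latent representation for each discrete action.
        self.action_latents = nn.Parameter(torch.randn(num_actions, latent_dim))
        # Action tower: maps action latents to the shared embedding space.
        self.action_tower = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, embed_dim)
        )
        self._cached_action_embeds = None  # for lazy tower updates

    def refresh_action_embeddings(self):
        # The action tower does not depend on the state, so its outputs can
        # be recomputed lazily (e.g. once per policy-parameter update) and
        # reused for all subsequent states. no_grad is fine here because
        # blackbox/evolutionary training does not backpropagate.
        with torch.no_grad():
            self._cached_action_embeds = self.action_tower(self.action_latents)

    def act(self, state):
        if self._cached_action_embeds is None:
            self.refresh_action_embeddings()
        s = self.state_tower(state)              # shape: (embed_dim,)
        scores = self._cached_action_embeds @ s  # attention score per action
        return int(torch.argmax(scores))         # greedy action selection

policy = ImplicitTwoTowerPolicy(state_dim=8, num_actions=4)
action = policy.act(torch.randn(8))

For continuous action spaces, the same scoring mechanism can be applied to a sampled or hashed pool of candidate action latents rather than a fixed enumeration; the hashing trick mentioned in the abstract accelerates exactly this nearest-embedding lookup.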

Cite

Text

Zhao et al. "Implicit Two-Tower Policies." ICLR 2024 Workshops: PML4LRS, 2024.

Markdown

[Zhao et al. "Implicit Two-Tower Policies." ICLR 2024 Workshops: PML4LRS, 2024.](https://mlanthology.org/iclrw/2024/zhao2024iclrw-implicit/)

BibTeX

@inproceedings{zhao2024iclrw-implicit,
  title     = {{Implicit Two-Tower Policies}},
  author    = {Zhao, Yunfan and Pan, Alvin and Choromanski, Krzysztof Marcin and Jain, Deepali and Sindhwani, Vikas},
  booktitle = {ICLR 2024 Workshops: PML4LRS},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/zhao2024iclrw-implicit/}
}