Enhancing Tactile-Based Reinforcement Learning for Robotic Control

Abstract

Achieving safe, reliable real-world robotic manipulation requires agents to evolve beyond vision and incorporate tactile sensing to overcome sensory deficits and reliance on idealised state information. Despite its potential, the efficacy of tactile sensing in reinforcement learning (RL) remains inconsistent. We address this by developing self-supervised learning (SSL) methodologies to more effectively harness tactile observations, focusing on a scalable setup of proprioception and sparse binary contacts. We empirically demonstrate that sparse binary tactile signals are critical for dexterity, particularly for interactions that proprioceptive control errors do not register, such as decoupled robot-object motions. Our agents achieve superhuman dexterity in complex contact tasks (ball bouncing and Baoding ball rotation). Furthermore, we find that decoupling the SSL memory from the on-policy memory can improve performance. We release the Robot Tactile Olympiad ($\texttt{RoTO}$) benchmark to standardise and promote future research in tactile-based manipulation. Project page: https://elle-miller.github.io/tactile_rl.

Cite

Text

Miller et al. "Enhancing Tactile-Based Reinforcement Learning for Robotic Control." Advances in Neural Information Processing Systems, 2025.

Markdown

[Miller et al. "Enhancing Tactile-Based Reinforcement Learning for Robotic Control." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/miller2025neurips-enhancing/)

BibTeX

@inproceedings{miller2025neurips-enhancing,
  title     = {{Enhancing Tactile-Based Reinforcement Learning for Robotic Control}},
  author    = {Miller, Elle and McInroe, Trevor and Abel, David and Mac Aodha, Oisin and Vijayakumar, Sethu},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/miller2025neurips-enhancing/}
}