Coarse-to-Fine Q-Attention: Efficient Learning for Visual Robotic Manipulation via Discretisation
Abstract
We present a coarse-to-fine discretisation method that enables the use of discrete reinforcement learning approaches in place of unstable and data-inefficient actor-critic methods in continuous robotics domains. This approach builds on the recently released ARM algorithm, which replaces the continuous next-best pose agent with a discrete one, with coarse-to-fine Q-attention. Given a voxelised scene, coarse-to-fine Q-attention learns what part of the scene to 'zoom' into. When this 'zooming' behaviour is applied iteratively, it results in a near-lossless discretisation of the translation space, and allows the use of a discrete action, deep Q-learning method. We show that our new coarse-to-fine algorithm achieves state-of-the-art performance on several difficult sparsely rewarded RLBench vision-based robotics tasks, and can train real-world policies, tabula rasa, in a matter of minutes, with as little as 3 demonstrations.
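The abstract's iterative 'zooming' can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `q_function`, the grid size, the number of levels, and the workspace bounds are all assumptions, and the Q-values are random stand-ins for a learned 3D Q-attention network.

```python
import numpy as np

GRID = 16      # voxels per side at each level (assumed)
DEPTHS = 3     # number of coarse-to-fine levels (assumed)

def q_function(centre, extent):
    """Stand-in for the learned Q-attention: one Q-value per voxel.

    A real agent would voxelise the observed scene around `centre` and run
    a 3D network; random values are returned here purely for illustration."""
    return np.random.rand(GRID, GRID, GRID)

def coarse_to_fine_translation(centre, extent):
    """Iteratively 'zoom' into the highest-Q voxel to refine a translation."""
    for _ in range(DEPTHS):
        q = q_function(centre, extent)
        idx = np.unravel_index(np.argmax(q), q.shape)  # best voxel index
        voxel_size = extent / GRID
        # Re-centre on the chosen voxel, then shrink the extent to that voxel.
        offset = (np.array(idx) + 0.5) * voxel_size - extent / 2.0
        centre = centre + offset
        extent = voxel_size
    return centre

# Example: a 1 m^3 workspace refined over 3 levels of a 16^3 grid gives an
# effective translation resolution of 1 / 16**3 m (~0.24 mm), which is why
# the discretisation is described as near-lossless.
goal = coarse_to_fine_translation(np.zeros(3), np.ones(3))
```

Each level multiplies the effective resolution by the grid size, so a modest per-level grid yields a very fine final discretisation while keeping the discrete action space small enough for deep Q-learning.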
Cite
Text
James et al. "Coarse-to-Fine Q-Attention: Efficient Learning for Visual Robotic Manipulation via Discretisation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01337
Markdown
[James et al. "Coarse-to-Fine Q-Attention: Efficient Learning for Visual Robotic Manipulation via Discretisation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/james2022cvpr-coarsetofine/) doi:10.1109/CVPR52688.2022.01337
BibTeX
@inproceedings{james2022cvpr-coarsetofine,
title = {{Coarse-to-Fine Q-Attention: Efficient Learning for Visual Robotic Manipulation via Discretisation}},
author = {James, Stephen and Wada, Kentaro and Laidlow, Tristan and Davison, Andrew J.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {13739-13748},
doi = {10.1109/CVPR52688.2022.01337},
url = {https://mlanthology.org/cvpr/2022/james2022cvpr-coarsetofine/}
}