AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies
Abstract
In this paper, we propose AimBot, a lightweight visual augmentation technique that provides explicit spatial cues to improve visuomotor policy learning in robotic manipulation. AimBot overlays shooting lines and scope reticles onto multi-view RGB images, offering auxiliary visual guidance that encodes the end-effector’s state. The overlays are computed from depth images, camera extrinsics, and the current end-effector pose, explicitly conveying spatial relationships between the gripper and objects in the scene. AimBot incurs minimal computational overhead (less than 1 ms) and requires no changes to model architectures, as it simply replaces original RGB images with augmented counterparts. Despite its simplicity, our results show that AimBot consistently improves the performance of various visuomotor policies in both simulation and real-world settings, highlighting the benefits of spatially grounded visual feedback. More videos can be found at https://aimbot-reticle.github.io/
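As a rough illustration of the mechanism the abstract describes, the sketch below projects the end-effector pose into a single camera view and draws a "shooting line" along the gripper's approach axis plus a reticle at its tip. It is a minimal sketch, not the authors' implementation: it assumes OpenCV/NumPy conventions, a pinhole intrinsic matrix `K`, a world-to-camera extrinsic `T_wc`, and a gripper pose `T_ee` whose +z axis is the approach direction. The helper names (`project`, `draw_aim_overlay`) are hypothetical, and a fixed ray length stands in for the depth-based intersection point that the paper computes from depth images.

```python
# Minimal AimBot-style overlay sketch (illustrative only, not the paper's code).
# Assumptions: OpenCV + NumPy; T_wc is a 4x4 world-to-camera extrinsic;
# K is a 3x3 pinhole intrinsic matrix; T_ee is the 4x4 end-effector pose
# in the world frame with +z as the approach axis.
import cv2
import numpy as np

def project(point_w, T_wc, K):
    """Project a 3D world point into pixel coordinates."""
    p_cam = T_wc @ np.append(point_w, 1.0)   # world frame -> camera frame
    uvw = K @ p_cam[:3]                      # pinhole projection
    return (int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2]))

def draw_aim_overlay(rgb, T_ee, T_wc, K, ray_len=0.3):
    """Overlay a shooting line along the approach axis and a scope reticle at its tip.

    ray_len is a simplification: AimBot instead uses the depth image to place
    the reticle where the approach ray actually hits the scene.
    """
    img = rgb.copy()
    origin_w = T_ee[:3, 3]                    # gripper position in the world frame
    tip_w = origin_w + ray_len * T_ee[:3, 2]  # point along the approach (+z) axis
    p0 = project(origin_w, T_wc, K)
    p1 = project(tip_w, T_wc, K)
    cv2.line(img, p0, p1, color=(0, 255, 0), thickness=2)          # shooting line
    cv2.circle(img, p1, radius=8, color=(0, 0, 255), thickness=2)  # scope reticle
    return img
```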
Cite
Text
Dai et al. "AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies." Proceedings of The 9th Conference on Robot Learning, 2025.Markdown
[Dai et al. "AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies." Proceedings of The 9th Conference on Robot Learning, 2025.](https://mlanthology.org/corl/2025/dai2025corl-aimbot/)BibTeX
@inproceedings{dai2025corl-aimbot,
title = {{AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies}},
author = {Dai, Yinpei and Lee, Jayjun and Zhang, Yichi and Ma, Ziqiao and Yang, Jianing and Zadeh, Amir and Li, Chuan and Fazeli, Nima and Chai, Joyce},
booktitle = {Proceedings of The 9th Conference on Robot Learning},
year = {2025},
pages = {2409--2429},
volume = {305},
url = {https://mlanthology.org/corl/2025/dai2025corl-aimbot/}
}