Vision in Action: Learning Active Perception from Human Demonstrations

Abstract

We present Vision in Action (ViA), an active perception system for bimanual robot manipulation. ViA learns task-relevant active perceptual strategies (e.g., searching, tracking, and focusing) directly from human demonstrations. On the hardware side, ViA employs a simple yet effective 6-DoF robotic neck to enable flexible, human-like head movements. To capture human active perception strategies, we design a VR-based teleoperation interface that creates a shared observation space between the robot and the human operator. To mitigate VR motion sickness caused by latency in the robot’s physical movements, the interface uses an intermediate 3D scene representation, enabling real-time view rendering on the operator side while asynchronously updating the scene with the robot’s latest observations. Together, these design elements enable the learning of robust visuomotor policies for three complex, multi-stage bimanual manipulation tasks involving visual occlusions, significantly outperforming baseline systems.
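To make the asynchronous rendering scheme described above concrete, the sketch below shows one way the operator-side view rendering can be decoupled from the robot's observation latency: a background thread refreshes an intermediate 3D scene representation with the robot's newest observation, while a faster loop renders the operator's current viewpoint from whatever scene is cached. This is a minimal illustration, not the authors' implementation; all class, function, and parameter names here are hypothetical, and the capture and rendering routines are stand-ins.

# Minimal sketch of asynchronous scene updates vs. real-time view rendering.
# Not the ViA implementation; all names are hypothetical stand-ins.

import threading
import time

import numpy as np


class SharedScene:
    """Thread-safe holder for the latest 3D scene (e.g., a colored point cloud)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._points = np.zeros((0, 3))  # Nx3 point positions
        self._colors = np.zeros((0, 3))  # Nx3 RGB colors

    def update(self, points, colors):
        with self._lock:
            self._points, self._colors = points, colors

    def snapshot(self):
        with self._lock:
            return self._points.copy(), self._colors.copy()


def capture_robot_observation():
    """Stand-in for reading the robot's head camera and back-projecting to 3D."""
    return np.random.rand(1000, 3), np.random.rand(1000, 3)


def render_view(points, colors, head_pose):
    """Stand-in for rendering the cached scene from the operator's head pose."""
    # A real system would project the points through the headset's camera
    # model given head_pose; here we return a dummy image.
    return np.zeros((720, 1280, 3), dtype=np.uint8)


def scene_update_loop(scene, stop_event, hz=10.0):
    """Slow loop: refresh the shared scene with the robot's newest observation."""
    while not stop_event.is_set():
        points, colors = capture_robot_observation()
        scene.update(points, colors)
        time.sleep(1.0 / hz)


def vr_render_loop(scene, stop_event, hz=90.0, duration_s=1.0):
    """Fast loop: render the operator's view at headset rate from the cached scene."""
    t_end = time.time() + duration_s
    while not stop_event.is_set() and time.time() < t_end:
        head_pose = np.eye(4)  # would come from the VR headset tracker
        points, colors = scene.snapshot()
        _image = render_view(points, colors, head_pose)
        time.sleep(1.0 / hz)


if __name__ == "__main__":
    scene = SharedScene()
    stop = threading.Event()
    updater = threading.Thread(target=scene_update_loop, args=(scene, stop), daemon=True)
    updater.start()
    vr_render_loop(scene, stop)  # rendering never blocks on robot motion latency
    stop.set()

Under this split, the operator's view updates at the headset's frame rate regardless of how slowly the robot moves or streams observations, which is the property the abstract credits with mitigating VR motion sickness.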

Cite

Text

Xiong et al. "Vision in Action: Learning Active Perception from Human Demonstrations." Proceedings of The 9th Conference on Robot Learning, 2025.

Markdown

[Xiong et al. "Vision in Action: Learning Active Perception from Human Demonstrations." Proceedings of The 9th Conference on Robot Learning, 2025.](https://mlanthology.org/corl/2025/xiong2025corl-vision/)

BibTeX

@inproceedings{xiong2025corl-vision,
  title     = {{Vision in Action: Learning Active Perception from Human Demonstrations}},
  author    = {Xiong, Haoyu and Xu, Xiaomeng and Wu, Jimmy and Hou, Yifan and Bohg, Jeannette and Song, Shuran},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  year      = {2025},
  pages     = {5450--5463},
  volume    = {305},
  url       = {https://mlanthology.org/corl/2025/xiong2025corl-vision/}
}