Learning to Look by Self-Prediction
Abstract
We present a method for learning active vision skills, that is, for moving the camera to observe a robot's sensors from informative points of view, without external rewards or labels. We do this by jointly training a visual predictor network, which predicts the future returns of the sensors from pixels, and a camera control agent, which we reward with the negative error of the predictor. The agent thus moves the camera to the points of view that are most predictive for a target sensor, which we select using a conditioning input to the agent. We show that despite this noisy learned reward function, the learned policies are competent: they avoid occlusions and precisely frame the target sensor at a specific location in the view, which we call an emergent fovea. We find that replacing the conventional camera with a foveal camera further increases the policies' precision.
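The core training signal can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the class and function names (Predictor, prediction_loss_and_reward), the network architecture, and the squared-error form of the loss are all assumptions; the abstract only specifies that the agent's reward is the negative error of the predictor, and that the predictor is conditioned on which sensor to predict.

```python
# Minimal sketch of the joint training signal described in the abstract.
# Architecture, squared-error loss, and all names are hypothetical choices;
# only "reward = negative predictor error" is taken from the paper.
import torch
import torch.nn as nn


class Predictor(nn.Module):
    """Predicts a target sensor's future return from the current camera image."""

    def __init__(self, num_sensors: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One output head per sensor; the conditioning input selects a head.
        self.heads = nn.Linear(32, num_sensors)

    def forward(self, image: torch.Tensor, sensor_id: torch.Tensor) -> torch.Tensor:
        features = self.conv(image)                      # (B, 32)
        all_predictions = self.heads(features)           # (B, num_sensors)
        # Select the prediction for the conditioned target sensor.
        return all_predictions.gather(1, sensor_id.unsqueeze(1)).squeeze(1)


def prediction_loss_and_reward(predictor, image, sensor_id, sensor_return):
    """The predictor is trained on the prediction error; the camera agent
    is rewarded with the negative of that same error (squared form assumed)."""
    prediction = predictor(image, sensor_id)
    error = (prediction - sensor_return).pow(2)          # per-example error
    reward = -error.detach()                             # agent reward, no gradient
    return error.mean(), reward
```

A usage example with hypothetical shapes: `predictor = Predictor(num_sensors=4)`, a batch of images `torch.rand(8, 3, 64, 64)`, target IDs `torch.randint(0, 4, (8,))`, and sensor returns `torch.rand(8)` yield a scalar predictor loss and a per-example reward for the camera policy. The `detach` reflects that the reward only scores the chosen viewpoint; the camera agent is trained by reinforcement learning on this reward, not by backpropagating through the predictor.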
Cite

Text
Grimes et al. "Learning to Look by Self-Prediction." NeurIPS 2022 Workshops: SVRHM, 2022.

Markdown
[Grimes et al. "Learning to Look by Self-Prediction." NeurIPS 2022 Workshops: SVRHM, 2022.](https://mlanthology.org/neuripsw/2022/grimes2022neuripsw-learning/)

BibTeX
@inproceedings{grimes2022neuripsw-learning,
  title = {{Learning to Look by Self-Prediction}},
  author = {Grimes, Matthew Koichi and Modayil, Joseph Varughese and Mirowski, Piotr W and Rao, Dushyant and Hadsell, Raia},
  booktitle = {NeurIPS 2022 Workshops: SVRHM},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/grimes2022neuripsw-learning/}
}