Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination
Abstract
This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuomotor policies (modular networks) where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies for a robotic planar reaching task.
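The abstract describes fine-tuning independently-trained modules end-to-end under a weighted combination of their losses. A minimal sketch of that idea (an assumption for illustration, not the authors' code; the function name and weights are hypothetical):

```python
# Minimal sketch (assumption, not the paper's implementation): combine
# per-module losses into one weighted objective for end-to-end fine-tuning.
def combined_loss(module_losses, weights):
    """Weighted sum of the losses of independently-trained modules.

    module_losses: scalar loss per module (e.g. perception, control)
    weights: non-negative weights balancing the modules during fine-tuning
    """
    assert len(module_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, module_losses))

# Hypothetical example: a perception loss and a control loss, with the
# control loss up-weighted so hand-eye coordination errors dominate.
total = combined_loss([0.8, 0.2], [1.0, 2.0])
print(total)  # 0.8*1.0 + 0.2*2.0 = 1.2
```

Tuning the weights trades off preserving each module's individual competence against optimizing the end-to-end task performance.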
Cite
Text
Zhang et al. "Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.74
Markdown
[Zhang et al. "Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/zhang2017cvprw-tuning/) doi:10.1109/CVPRW.2017.74
BibTeX
@inproceedings{zhang2017cvprw-tuning,
title = {{Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination}},
author = {Zhang, Fangyi and Leitner, Jürgen and Milford, Michael and Corke, Peter I.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2017},
pages = {496--497},
doi = {10.1109/CVPRW.2017.74},
url = {https://mlanthology.org/cvprw/2017/zhang2017cvprw-tuning/}
}