Why Learn if You Can Infer? Robot Arm Control with Hierarchical Active Inference
Abstract
Recently, deep reinforcement learning (RL) approaches have become successful in a wide range of domains, including robot control. Using learning instead of classical control approaches is appealing, as it avoids dealing with redundancy in over-actuated arms, hard-coding obstacle avoidance, and performing inverse kinematics calculations. However, this comes at the cost of excessive training data to fit a black-box model. In this paper, we cast motor control as an inference problem on a generative model that pertains to the robot arm's kinematic chain structure, which might be a more bio-mimetic implementation. We demonstrate that we retain both the attractive properties of RL and the efficiency of more classical forward kinematics approaches without requiring expensive training, achieving superior success rates as the degrees of freedom of the arm increase.
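To give a flavour of "control as inference" on a kinematic-chain generative model, here is a minimal sketch (not the paper's hierarchical active inference implementation): a redundant planar arm treats the goal position as a prior over its end-effector and reaches it by gradient descent on the sensory prediction error, with no training phase. All function names, link lengths, and step sizes below are illustrative assumptions.

```python
import numpy as np

def forward_kinematics(angles, link_lengths):
    """End-effector (x, y) of a planar kinematic chain."""
    cum = np.cumsum(angles)
    return np.array([np.sum(link_lengths * np.cos(cum)),
                     np.sum(link_lengths * np.sin(cum))])

def infer_joint_angles(goal, link_lengths, steps=2000, lr=0.05, eps=1e-4):
    """Minimise the squared prediction error between the predicted
    end-effector position and the goal by descending on the joint
    angles -- inference on the generative model instead of learning."""
    angles = np.zeros(len(link_lengths))
    for _ in range(steps):
        pred = forward_kinematics(angles, link_lengths)
        error = pred - goal  # sensory prediction error
        grad = np.zeros_like(angles)
        for i in range(len(angles)):
            d = np.zeros_like(angles)
            d[i] = eps
            # finite-difference estimate of d(pred)/d(angle_i), projected onto the error
            grad[i] = (forward_kinematics(angles + d, link_lengths) - pred) @ error / eps
        angles -= lr * grad
    return angles

# Example: redundant 4-DoF planar arm reaching toward (1.0, 1.5)
links = np.array([1.0, 1.0, 0.8, 0.6])
goal = np.array([1.0, 1.5])
theta = infer_joint_angles(goal, links)
print(forward_kinematics(theta, links))  # should land close to the goal
```

Because the generative model is just the arm's kinematic structure, redundancy in the over-actuated chain is resolved implicitly by the descent, rather than by an explicit inverse-kinematics solver or a learned policy.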
Cite
Text
Pezzato et al. "Why Learn if You Can Infer? Robot Arm Control with Hierarchical Active Inference." NeurIPS 2024 Workshops: NeuroAI, 2024.
Markdown
[Pezzato et al. "Why Learn if You Can Infer? Robot Arm Control with Hierarchical Active Inference." NeurIPS 2024 Workshops: NeuroAI, 2024.](https://mlanthology.org/neuripsw/2024/pezzato2024neuripsw-learn/)
BibTeX
@inproceedings{pezzato2024neuripsw-learn,
title = {{Why Learn if You Can Infer? Robot Arm Control with Hierarchical Active Inference}},
author = {Pezzato, Corrado and Buckley, Christopher and Verbelen, Tim},
booktitle = {NeurIPS 2024 Workshops: NeuroAI},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/pezzato2024neuripsw-learn/}
}