ValueNetQP: Learned One-Step Optimal Control for Legged Locomotion

Abstract

Optimal control is a successful approach to generate motions for complex robots, in particular for legged locomotion. However, these techniques are often too slow to run in real time for model predictive control, or the dynamics model must be drastically simplified. In this work, we present a method to learn to predict the gradient and Hessian of the problem's value function, enabling fast resolution of the predictive control problem with a one-step quadratic program. In addition, our method is able to satisfy constraints such as friction cones and unilateral contact constraints, which are important for highly dynamic locomotion tasks. We demonstrate the capability of our method in simulation and on a real quadruped robot performing trotting and bounding motions.
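To illustrate the core idea, here is a minimal sketch of a one-step optimal control solve that uses a learned quadratic model of the value function as its terminal cost. All names, the linearized dynamics, and the fixed `g`/`H` values are illustrative assumptions, not from the paper; the paper additionally enforces friction-cone and unilateral contact constraints inside the QP, which this unconstrained closed-form solve omits.

```python
import numpy as np

def one_step_control(x, A, B, R, g, H):
    """Hypothetical sketch: minimize 0.5*u'Ru + V(A x + B u), where the
    learned value model is V(x') = g'x' + 0.5*x''H x' (g, H would come
    from the value network). Constraints are omitted for brevity.

    Stationarity of the cost in u gives the linear system
        (R + B'HB) u = -B'(g + H A x),
    which we solve directly."""
    lhs = R + B.T @ H @ B
    rhs = -B.T @ (g + H @ (A @ x))
    return np.linalg.solve(lhs, rhs)

# Dummy problem: random linearized dynamics and a positive-definite H.
rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
R = np.eye(m)
M = rng.standard_normal((n, n))
H = M @ M.T + np.eye(n)          # learned Hessian (assumed PD)
g = rng.standard_normal(n)       # learned gradient
x = rng.standard_normal(n)

u = one_step_control(x, A, B, R, g, H)
```

With constraints added, the same quadratic cost would be handed to an off-the-shelf QP solver instead of being solved in closed form.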

Cite

Text

Viereck et al. "ValueNetQP: Learned One-Step Optimal Control for Legged Locomotion." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.

Markdown

[Viereck et al. "ValueNetQP: Learned One-Step Optimal Control for Legged Locomotion." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.](https://mlanthology.org/l4dc/2022/viereck2022l4dc-valuenetqp/)

BibTeX

@inproceedings{viereck2022l4dc-valuenetqp,
  title     = {{ValueNetQP: Learned One-Step Optimal Control for Legged Locomotion}},
  author    = {Viereck, Julian and Meduri, Avadesh and Righetti, Ludovic},
  booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
  year      = {2022},
  pages     = {931--942},
  volume    = {168},
  url       = {https://mlanthology.org/l4dc/2022/viereck2022l4dc-valuenetqp/}
}