Reinforcement Learning and Apprenticeship Learning for Robotic Control

Abstract

Many robotic control problems, such as autonomous helicopter flight, legged robot locomotion, and autonomous driving, remain challenging even for modern reinforcement learning algorithms. These problems are difficult for several reasons: (i) it can be hard to write down, in closed form, a formal specification of the control task (for example, what is the cost function for “driving well”?); (ii) it is often difficult to learn a good model of the robot’s dynamics; (iii) even given a complete specification of the problem, it is often computationally difficult to find a good closed-loop controller for a high-dimensional, stochastic control task. However, when we are allowed to learn from a human demonstration of a task (that is, in the apprenticeship learning setting), a number of efficient algorithms can be used to address each of these problems.
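The paper is only abstracted here, but as a hedged illustration of the apprenticeship-learning setting the abstract describes, the sketch below implements the feature-expectation-matching (projection) idea from Abbeel and Ng's earlier apprenticeship learning work on a toy problem. The chain MDP, the one-hot state features, and all names below are assumptions made for illustration, not details taken from this paper: the learner never sees a reward function, only an expert demonstration, and recovers a policy whose discounted feature counts match the expert's.

```python
import numpy as np

# Hedged toy sketch of apprenticeship learning via feature-expectation
# matching (the projection method); the chain MDP, one-hot features, and
# all names here are illustrative assumptions, not from the cited paper.

N, GAMMA, HORIZON = 5, 0.9, 50   # chain of 5 states, discount, rollout length

def step(s, a):
    """Deterministic chain dynamics: action 0 moves left, 1 moves right."""
    return max(s - 1, 0) if a == 0 else min(s + 1, N - 1)

def feature_expectations(policy, s0=0):
    """Discounted sum of one-hot state features under a deterministic policy."""
    mu, s = np.zeros(N), s0
    for t in range(HORIZON):
        mu[s] += GAMMA ** t
        s = step(s, policy[s])
    return mu

def greedy_policy(w, iters=200):
    """Value iteration for the state reward R(s) = w[s]; returns greedy policy."""
    V = np.zeros(N)
    for _ in range(iters):
        V = w + GAMMA * np.array([max(V[step(s, a)] for a in (0, 1))
                                  for s in range(N)])
    return [max((0, 1), key=lambda a: V[step(s, a)]) for s in range(N)]

# "Expert" demonstration: always move right (toward state N - 1).
mu_E = feature_expectations([1] * N)

# Projection loop: find a policy matching the expert's feature expectations.
mu_bar = feature_expectations([0] * N)    # start from an always-left policy
policy = [0] * N
for _ in range(20):
    w = mu_E - mu_bar                     # candidate reward weights
    if np.linalg.norm(w) < 1e-6:          # expert matched; done
        break
    policy = greedy_policy(w)
    mu = feature_expectations(policy)
    d = mu - mu_bar
    mu_bar = mu_bar + (d @ w) / (d @ d) * d   # project mu_bar toward mu_E
```

On this toy chain the loop converges in two iterations, and the recovered policy reproduces the expert's behavior without the cost function ever being specified by hand, which is the point the abstract makes about challenge (i).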

Cite

Text

Ng. "Reinforcement Learning and Apprenticeship Learning for Robotic Control." International Conference on Algorithmic Learning Theory, 2006. doi:10.1007/11894841_6

Markdown

[Ng. "Reinforcement Learning and Apprenticeship Learning for Robotic Control." International Conference on Algorithmic Learning Theory, 2006.](https://mlanthology.org/alt/2006/ng2006alt-reinforcement/) doi:10.1007/11894841_6

BibTeX

@inproceedings{ng2006alt-reinforcement,
  title     = {{Reinforcement Learning and Apprenticeship Learning for Robotic Control}},
  author    = {Ng, Andrew Y.},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2006},
  pages     = {29--31},
  doi       = {10.1007/11894841_6},
  url       = {https://mlanthology.org/alt/2006/ng2006alt-reinforcement/}
}