The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously

Abstract

This paper introduces the Intentional Unintentional (IU) agent, which endows the deep deterministic policy gradients (DDPG) agent for continuous control with the ability to solve several tasks simultaneously. Learning to solve many tasks simultaneously has been a long-standing, core goal of artificial intelligence, inspired by infant development and motivated by the desire to build flexible robot manipulators capable of many diverse behaviours. We show that the IU agent not only learns to solve many tasks simultaneously but also learns faster than agents that target a single task at a time. In some cases where the single-task DDPG method fails completely, the IU agent successfully solves the task. To demonstrate this, we build a playroom environment using the MuJoCo physics engine and introduce a grounded formal language to automatically generate tasks.
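As an illustration of the core idea described in the abstract (one stream of experience used to train policies and value functions for many tasks at once), the sketch below shows a multi-headed actor and critic with a shared torso plus per-task DDPG losses. This is a hedged, minimal sketch assuming PyTorch, not the authors' implementation; the class and function names (`MultiTaskActor`, `MultiTaskCritic`, `ddpg_losses`) are hypothetical, and details such as target-network updates, exploration noise, and terminal masking are omitted.

```python
# Hedged sketch (not the paper's code): one shared torso with one head per task,
# so every task's policy and Q-function is trained off-policy from the same
# replayed experience generated by a single behaviour ("intentional") policy.
import torch
import torch.nn as nn


class MultiTaskActor(nn.Module):
    def __init__(self, obs_dim, act_dim, n_tasks, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # One deterministic policy head per task.
        self.heads = nn.ModuleList(nn.Linear(hidden, act_dim) for _ in range(n_tasks))

    def forward(self, obs, task_id):
        return torch.tanh(self.heads[task_id](self.torso(obs)))


class MultiTaskCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, n_tasks, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU())
        # One Q-value head per task.
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, obs, act, task_id):
        return self.heads[task_id](self.torso(torch.cat([obs, act], dim=-1)))


def ddpg_losses(actor, critic, target_actor, target_critic, batch, gamma=0.99):
    """Per-task DDPG losses computed from one shared replayed batch.

    `batch` is assumed to contain (obs, act, rewards, next_obs), where
    `rewards` has one column per task, e.g. shape [B, n_tasks].
    """
    obs, act, rewards, next_obs = batch
    critic_losses, actor_losses = [], []
    for k in range(rewards.shape[1]):
        with torch.no_grad():
            next_act = target_actor(next_obs, k)
            target_q = rewards[:, k : k + 1] + gamma * target_critic(next_obs, next_act, k)
        critic_losses.append(((critic(obs, act, k) - target_q) ** 2).mean())
        actor_losses.append(-critic(obs, actor(obs, k), k).mean())
    return sum(critic_losses), sum(actor_losses)
```

In this sketch, only one head would be used to act in the environment, while gradients for all heads are computed from the same batch; how the acting task is chosen and how networks are updated follow standard DDPG practice and are left out here.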

Cite

Text

Cabi et al. "The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously." Conference on Robot Learning, 2017.

Markdown

[Cabi et al. "The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously." Conference on Robot Learning, 2017.](https://mlanthology.org/corl/2017/cabi2017corl-intentional/)

BibTeX

@inproceedings{cabi2017corl-intentional,
  title     = {{The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously}},
  author    = {Cabi, Serkan and Colmenarejo, Sergio Gomez and Hoffman, Matthew W. and Denil, Misha and Wang, Ziyu and de Freitas, Nando},
  booktitle = {Conference on Robot Learning},
  year      = {2017},
  pages     = {207--216},
  url       = {https://mlanthology.org/corl/2017/cabi2017corl-intentional/}
}