Learning Machines That Perceive, Act and Communicate

Abstract

Humans are very good at perceiving all kinds of high-dimensional sensory inputs, extracting the meaningful information, and acting on that information to pursue their goals. With this in mind, our vision is a learning system that takes raw, potentially high-dimensional sensory inputs (e.g. raw image data), extracts the relevant information, and learns to act by experiencing success or failure. In this talk I will provide some first successful examples along this line of research. In particular, I will discuss neural-network-based architectures and algorithms that are the basic building blocks of our neural control architecture.

Cite

Text

Riedmiller. "Learning Machines That Perceive, Act and Communicate." International Joint Conference on Artificial Intelligence, 2013. doi:10.1145/2493525.2493526

Markdown

[Riedmiller. "Learning Machines That Perceive, Act and Communicate." International Joint Conference on Artificial Intelligence, 2013.](https://mlanthology.org/ijcai/2013/riedmiller2013ijcai-learning/) doi:10.1145/2493525.2493526

BibTeX

@inproceedings{riedmiller2013ijcai-learning,
  title     = {{Learning Machines That Perceive, Act and Communicate}},
  author    = {Riedmiller, Martin A.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2013},
  pages     = {5},
  doi       = {10.1145/2493525.2493526},
  url       = {https://mlanthology.org/ijcai/2013/riedmiller2013ijcai-learning/}
}