Learning to Perceive and Act by Trial and Error
Abstract
This article considers adaptive control architectures that integrate active sensory-motor systems with decision systems based on reinforcement learning. One unavoidable consequence of active perception is that the agent's internal representation often confounds external world states. We call this phenomenon perceptual aliasing and show that it destabilizes existing reinforcement learning algorithms with respect to the optimal decision policy. We then describe a new decision system that overcomes these difficulties for a restricted class of decision problems. The system incorporates a perceptual subcycle within the overall decision cycle and uses a modified learning algorithm to suppress the effects of perceptual aliasing. The result is a control architecture that learns not only how to solve a task but also where to focus its visual attention in order to collect necessary sensory information.
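The core difficulty can be illustrated with a toy example. Below is a minimal sketch (not the paper's algorithm, and with hypothetical state and action names) of perceptual aliasing: two distinct world states produce the same observation but demand different actions, so tabular Q-learning over observations must share one value row for both and no greedy policy can be optimal in both states.

```python
import random

random.seed(0)

STATES = ["s1", "s2"]          # true world states (hypothetical)
OBS = {"s1": "o", "s2": "o"}   # both states alias to the same observation "o"
ACTIONS = ["left", "right"]
# Reward depends on the *true* state: "left" is correct in s1, "right" in s2.
REWARD = {("s1", "left"): 1.0, ("s1", "right"): 0.0,
          ("s2", "left"): 0.0, ("s2", "right"): 1.0}

Q = {("o", a): 0.0 for a in ACTIONS}  # one shared Q-row for the aliased obs
alpha = 0.1
for _ in range(5000):
    s = random.choice(STATES)   # each one-step episode starts in either state
    o = OBS[s]
    a = random.choice(ACTIONS)  # exploratory action choice
    r = REWARD[(s, a)]
    # one-step Q update (episodes terminate after a single action)
    Q[(o, a)] += alpha * (r - Q[(o, a)])

# Both action values drift toward 0.5: the agent averages reward over the
# aliased states, and whichever action it then picks greedily is wrong
# half the time.
print(Q)
```

The paper's remedy, in contrast, adds a perceptual subcycle so the agent can actively disambiguate such states before committing to an overt action.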
Cite
Text
Whitehead and Ballard. "Learning to Perceive and Act by Trial and Error." Machine Learning, 1991. doi:10.1007/BF00058926
Markdown
[Whitehead and Ballard. "Learning to Perceive and Act by Trial and Error." Machine Learning, 1991.](https://mlanthology.org/mlj/1991/whitehead1991mlj-learning/) doi:10.1007/BF00058926
BibTeX
@article{whitehead1991mlj-learning,
title = {{Learning to Perceive and Act by Trial and Error}},
author = {Whitehead, Steven D. and Ballard, Dana H.},
journal = {Machine Learning},
year = {1991},
pages = {45--83},
doi = {10.1007/BF00058926},
volume = {7},
url = {https://mlanthology.org/mlj/1991/whitehead1991mlj-learning/}
}