Learning Attention Model from Human for Visuomotor Tasks
Abstract
A wealth of information regarding intelligent decision making is conveyed by human gaze and visual attention; modeling and exploiting this information is therefore a promising way to strengthen algorithms such as deep reinforcement learning. We collect high-quality human action and gaze data from subjects playing Atari games. Using these data, we train a deep neural network that predicts human gaze positions and visual attention with high accuracy.
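The gaze predictor described in the abstract can be pictured as a convolutional network that maps stacked game frames to a spatial probability map over gaze positions. The sketch below is a minimal illustration of that idea only; the layer sizes and the `GazePredictionNet` name are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GazePredictionNet(nn.Module):
    """Illustrative gaze predictor (hypothetical architecture, not the
    paper's exact network): stacked frames in, gaze probability map out."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # 1x1 conv collapses feature channels to a single saliency channel
        self.readout = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        logits = self.readout(self.features(x))
        # softmax over all spatial positions -> each map sums to 1,
        # i.e. a probability distribution over gaze locations
        b = logits.size(0)
        return torch.softmax(logits.view(b, -1), dim=1).view_as(logits)

net = GazePredictionNet()
frames = torch.zeros(2, 4, 84, 84)  # batch of 4 stacked 84x84 game frames
gaze_map = net(frames)
print(gaze_map.shape)               # torch.Size([2, 1, 7, 7])
```

Such a network would typically be trained with a cross-entropy or KL-divergence loss between the predicted map and recorded human fixation positions.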
Cite
Text
Zhang et al. "Learning Attention Model from Human for Visuomotor Tasks." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12147
Markdown
[Zhang et al. "Learning Attention Model from Human for Visuomotor Tasks." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/zhang2018aaai-learning-a/) doi:10.1609/AAAI.V32I1.12147
BibTeX
@inproceedings{zhang2018aaai-learning-a,
title = {{Learning Attention Model from Human for Visuomotor Tasks}},
author = {Zhang, Luxin and Zhang, Ruohan and Liu, Zhuode and Hayhoe, Mary M. and Ballard, Dana H.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {8181--8182},
doi = {10.1609/AAAI.V32I1.12147},
url = {https://mlanthology.org/aaai/2018/zhang2018aaai-learning-a/}
}