Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)
Abstract
The representation approximated by a single deep network is usually limited for reinforcement learning agents. We propose a novel multi-view deep attention network (MvDAN), which introduces multi-view representation learning into the reinforcement learning task for the first time. The proposed model approximates a set of strategies from multiple representations and combines these strategies based on attention mechanisms to provide a comprehensive strategy for a single agent. Experimental results on eight Atari video games show that MvDAN achieves more competitive performance than single-view reinforcement learning methods.
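The attention-based combination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each view produces action logits and that a scalar attention score per view is available; all variable names and dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
V, A = 3, 4  # hypothetical: 3 views, 4 actions

view_logits = rng.normal(size=(V, A))  # one strategy (action logits) per view
attn_scores = rng.normal(size=V)       # learned relevance score per view

weights = softmax(attn_scores)         # attention weights over views (sum to 1)
combined = weights @ view_logits       # attention-weighted mixture of strategies
policy = softmax(combined)             # comprehensive action distribution
```

In a trained model the logits and attention scores would come from view-specific subnetworks rather than random draws; the sketch only shows how attention weights fuse the per-view strategies into a single policy.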
Cite
Text
Hu et al. "Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7177
Markdown
[Hu et al. "Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/hu2020aaai-multi/) doi:10.1609/AAAI.V34I10.7177
BibTeX
@inproceedings{hu2020aaai-multi,
title = {{Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)}},
author = {Hu, Yueyue and Sun, Shiliang and Xu, Xin and Zhao, Jing},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {13811-13812},
doi = {10.1609/AAAI.V34I10.7177},
url = {https://mlanthology.org/aaai/2020/hu2020aaai-multi/}
}