Dynamic Vehicle Traffic Control Using Deep Reinforcement Learning in Automated Material Handling System

Abstract

In automated material handling systems (AMHS), delivery time is an important issue directly associated with production cost and product quality. In this paper, we propose a dynamic routing strategy to shorten delivery times and reduce delays. We select the control target by analyzing traffic flows and choosing the region with the highest flow rate and congestion frequency. Then, we impose a routing cost that dynamically reflects real-time changes in traffic states. Our deep reinforcement learning model consists of a Q-learning step and a recurrent neural network, through which traffic states and action values are predicted. Experimental results show that the proposed method decreases manufacturing costs while increasing productivity. Additionally, we find evidence that the reinforcement learning structure proposed in this study can autonomously and dynamically adjust to changes in traffic patterns.
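The core idea of the abstract — learning routes whose costs reflect congestion — can be illustrated with a deliberately simplified sketch. The snippet below uses tabular Q-learning on a toy grid instead of the paper's deep recurrent model, and the grid layout, congestion region, and cost values are all hypothetical placeholders, not the paper's setup:

```python
import random

# Toy warehouse grid: nodes 0..5, edges between adjacent nodes.
#   0 - 1 - 2
#   |   |   |
#   3 - 4 - 5
EDGES = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
START, GOAL = 0, 2
CONGESTED = {1}                          # hypothetical high-traffic region
STEP_COST, CONGESTION_COST = 1.0, 5.0    # routing cost reflecting traffic state

def step(state, nxt):
    """Move to neighbor `nxt`; reward is the negative routing cost."""
    cost = STEP_COST + (CONGESTION_COST if nxt in CONGESTED else 0.0)
    return nxt, -cost, nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Standard epsilon-greedy Q-learning over the edge costs above."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in EDGES for a in EDGES[s]}
    for _ in range(episodes):
        s = START
        while s != GOAL:
            a = (rng.choice(EDGES[s]) if rng.random() < eps
                 else max(EDGES[s], key=lambda n: Q[(s, n)]))
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(Q[(s2, n)] for n in EDGES[s2])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

def greedy_route(Q):
    """Follow the learned Q-values greedily from START to GOAL."""
    s, route = START, [START]
    while s != GOAL:
        s = max(EDGES[s], key=lambda n: Q[(s, n)])
        route.append(s)
    return route

if __name__ == "__main__":
    print(greedy_route(train()))  # learns to detour around the congested node
```

Because entering the congested node costs 6 while each uncongested step costs 1, the learned policy takes the longer detour 0→3→4→5→2 rather than the short but expensive path through node 1. The paper replaces this tabular value table with a recurrent network so that routing decisions track traffic states that change over time.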

Cite

Text

Kang et al. "Dynamic Vehicle Traffic Control Using Deep Reinforcement Learning in Automated Material Handling System." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33019949

Markdown

[Kang et al. "Dynamic Vehicle Traffic Control Using Deep Reinforcement Learning in Automated Material Handling System." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/kang2019aaai-dynamic/) doi:10.1609/AAAI.V33I01.33019949

BibTeX

@inproceedings{kang2019aaai-dynamic,
  title     = {{Dynamic Vehicle Traffic Control Using Deep Reinforcement Learning in Automated Material Handling System}},
  author    = {Kang, Younkook and Lyu, Sungwon and Kim, Jeeyung and Park, Bongjoon and Cho, Sungzoon},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {9949--9950},
  doi       = {10.1609/AAAI.V33I01.33019949},
  url       = {https://mlanthology.org/aaai/2019/kang2019aaai-dynamic/}
}