A Neural Reinforcement Learning Approach to Learn Local Dispatching Policies in Production Scheduling

Abstract

Finding optimal solutions to job-shop scheduling problems requires high computational effort, especially under uncertainty and frequent replanning. In contrast to computational solutions, domain experts can often derive good local dispatching heuristics by examining typical problem instances. Such heuristics can be applied efficiently, since they consider only a few relevant features. However, they are usually not optimal, especially in complex decision situations. Here we describe an approach that tries to combine both worlds: a neural-network-based agent autonomously optimizes its local dispatching policy with respect to a global optimization goal defined for the overall plant. On two benchmark scheduling problems, we demonstrate both the learning and the generalization abilities of the proposed approach.
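The core idea of the abstract — an agent that scores queued jobs from a few local features and is trained by reinforcement learning toward a global objective — can be sketched on a toy problem. This is an illustrative sketch, not the authors' method: the single-machine setting, the feature set, the linear approximator (standing in for the paper's neural network), and the tardiness objective are all assumptions made for this example.

```python
import random

# Toy single-machine dispatching: each job is (processing_time, due_date).
# A linear value function scores each queued job from local features; the
# agent dispatches the highest-scoring job, receives the job's negative
# tardiness as reward, and updates the weights with a TD(0)-style rule.

random.seed(0)

def features(job, t):
    p, d = job
    slack = d - t - p          # time to spare if started now
    return [1.0, p, slack]     # bias, processing time, slack

def score(w, f):
    return sum(wi * fi for wi, fi in zip(w, f))

def run_episode(w, jobs, alpha=0.01, eps=0.1):
    """Dispatch all jobs once; return total tardiness of the episode."""
    t, total_tardiness = 0, 0
    queue = list(jobs)
    while queue:
        if random.random() < eps:                       # explore
            i = random.randrange(len(queue))
        else:                                           # exploit
            i = max(range(len(queue)),
                    key=lambda k: score(w, features(queue[k], t)))
        job = queue.pop(i)
        f = features(job, t)
        t += job[0]
        reward = -max(0, t - job[1])                    # negative tardiness
        total_tardiness -= reward
        # one-step target: reward plus best score over the remaining queue
        best_next = max((score(w, features(q, t)) for q in queue), default=0.0)
        td_error = reward + best_next - score(w, f)
        for k in range(len(w)):
            w[k] += alpha * td_error * f[k]
    return total_tardiness

jobs = [(3, 5), (2, 4), (4, 12), (1, 3)]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    run_episode(w, jobs)                                # train
tardiness = run_episode(w, jobs, alpha=0.0, eps=0.0)    # greedy evaluation
```

The local policy never sees the whole schedule, only per-job features at the moment of decision, while the reward ties the updates to the plant-wide objective — the division of labor the abstract describes.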

Cite

Text

Riedmiller and Riedmiller. "A Neural Reinforcement Learning Approach to Learn Local Dispatching Policies in Production Scheduling." International Joint Conference on Artificial Intelligence, 1999.

Markdown

[Riedmiller and Riedmiller. "A Neural Reinforcement Learning Approach to Learn Local Dispatching Policies in Production Scheduling." International Joint Conference on Artificial Intelligence, 1999.](https://mlanthology.org/ijcai/1999/riedmiller1999ijcai-neural/)

BibTeX

@inproceedings{riedmiller1999ijcai-neural,
  title     = {{A Neural Reinforcement Learning Approach to Learn Local Dispatching Policies in Production Scheduling}},
  author    = {Riedmiller, Simone C. and Riedmiller, Martin A.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1999},
  pages     = {764--771},
  url       = {https://mlanthology.org/ijcai/1999/riedmiller1999ijcai-neural/}
}