Natural Actor-Critic for Road Traffic Optimisation
Abstract
Current road-traffic optimisation practice around the world is a combination of hand-tuned policies with a small degree of automatic adaptation. Even state-of-the-art research controllers need good models of the road traffic, which cannot be obtained directly from existing sensors. We use a policy-gradient reinforcement learning approach to directly optimise the traffic signals, mapping currently deployed sensor observations to control signals. Our trained controllers are (theoretically) compatible with the traffic system used in Sydney and many other cities around the world. We apply two policy-gradient methods: (1) the recent natural actor-critic algorithm, and (2) a vanilla policy-gradient algorithm for comparison. Along the way we extend natural actor-critic approaches to work for distributed and online infinite-horizon problems.
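The vanilla policy-gradient baseline mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the feature and phase counts, the softmax parameterisation, and the toy reward below are all illustrative assumptions standing in for real sensor observations and delay measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4   # hypothetical sensor features (e.g. queue lengths per approach)
N_PHASES = 2     # hypothetical signal phases to choose between

theta = np.zeros((N_PHASES, N_FEATURES))  # policy parameters

def policy(obs):
    """Softmax distribution over signal phases given a sensor observation."""
    logits = theta @ obs
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def grad_log_pi(obs, a):
    """Gradient of log pi(a | obs) w.r.t. theta for the softmax policy."""
    p = policy(obs)
    g = -np.outer(p, obs)           # -pi(b|obs) * obs for every phase b
    g[a] += obs                     # plus obs for the phase actually taken
    return g

# Online REINFORCE-style updates; the reward is a stand-in for negative delay.
ALPHA = 0.1
for _ in range(2000):
    obs = rng.uniform(0.0, 1.0, N_FEATURES)
    a = rng.choice(N_PHASES, p=policy(obs))
    # Toy reward: phase 0 pays off when feature 0 exceeds feature 1, and vice versa.
    reward = obs[0] - obs[1] if a == 0 else obs[1] - obs[0]
    theta += ALPHA * reward * grad_log_pi(obs, a)
```

After training on the toy reward, the policy should favour phase 0 whenever the first feature dominates. The natural actor-critic variant studied in the paper replaces this raw gradient with a natural gradient estimated by a critic, which typically converges faster.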
Cite
Richter et al. "Natural Actor-Critic for Road Traffic Optimisation." Neural Information Processing Systems, 2006.
@inproceedings{richter2006neurips-natural,
title = {{Natural Actor-Critic for Road Traffic Optimisation}},
author = {Richter, Silvia and Aberdeen, Douglas and Yu, Jin},
booktitle = {Neural Information Processing Systems},
year = {2006},
pages = {1169--1176},
url = {https://mlanthology.org/neurips/2006/richter2006neurips-natural/}
}