A Reinforcement Learning Approach for Intelligent Traffic Signal Control at Urban Intersections

Abstract: Ineffective and inflexible traffic signal control at urban intersections often creates bottlenecks in traffic flow and causes congestion, delay, and environmental problems. How to manage traffic smartly through intelligent signal control is a significant challenge in urban traffic management. With recent advances in machine learning, especially reinforcement learning (RL), traffic signal control using advanced machine learning techniques represents a promising solution to this problem. In this paper, we propose an RL approach for traffic signal control at urban intersections. Specifically, we use a neural network as the Q-function approximator (a.k.a. Q-network) to handle the complexity of the traffic signal control problem, where the state space is large and the action space is discrete. The state space is defined based on real-time traffic information, i.e., vehicle position, direction, and speed. The action space comprises the available traffic signal phases, which are critical for generating a reasonable and realistic control mechanism given the prominent spatio-temporal characteristics of urban traffic. In our simulation experiments, we use SUMO, an open-source traffic simulator, to construct realistic urban intersection settings. Moreover, we use different traffic patterns, such as major/minor road traffic, through/left-turn lane traffic, tidal traffic, and varying-demand traffic, to train a generalized traffic signal control model that can adapt to various traffic conditions. The simulation results demonstrate the convergence and generalization performance of our RL approach, as well as its significant benefits in queue length and waiting time over several benchmark traffic signal control methods.
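To make the abstract's setup concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation) of a Q-network mapping a traffic-state vector to one Q-value per signal phase, with epsilon-greedy phase selection. The feature dimension (12) and number of phases (4) are assumptions for illustration; a real agent would build the state from SUMO observations and train the network.

```python
import numpy as np

rng = np.random.default_rng(0)

class QNetwork:
    """Minimal two-layer Q-function approximator (illustrative only)."""
    def __init__(self, state_dim, n_actions, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def q_values(self, state):
        h = np.maximum(0.0, state @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2                     # one Q-value per signal phase

def select_action(net, state, epsilon=0.1):
    """Epsilon-greedy choice over signal phases."""
    if rng.random() < epsilon:
        return int(rng.integers(net.W2.shape[1]))        # explore: random phase
    return int(np.argmax(net.q_values(state)))           # exploit: best estimated phase

# Hypothetical state: per-lane position/direction/speed features; 4 signal phases.
state = rng.random(12)
net = QNetwork(state_dim=12, n_actions=4)
phase = select_action(net, state, epsilon=0.0)           # greedy pick for demonstration
print(phase)
```

With epsilon set to 0 the selection is purely greedy, so the printed index is the phase with the highest estimated Q-value for the sampled state; during training, a nonzero epsilon balances exploration and exploitation.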