Human-Centric Traffic Signal Control for Equity: A Multi-Agent Action Branching Deep Reinforcement Learning Approach


Coordinating traffic signals along multimodal corridors is challenging because many multi-agent deep reinforcement learning (DRL) approaches remain vehicle-centric and struggle with high-dimensional discrete action spaces. We propose MA2B-DDQN, a human-centric multi-agent action-branching double Deep Q-Network (DQN) framework that explicitly optimizes traveler-level equity. Our key contribution is an action-branching discrete control formulation that decomposes corridor control into (i) local, per-intersection actions that allocate green time between the next two phases and (ii) a single global action that selects the total duration of those phases. This decomposition enables scalable coordination under discrete control while reducing the effective complexity of joint decision-making. We also design a human-centric reward that penalizes the number of delayed individuals in the corridor, accounting for pedestrians, vehicle occupants, and transit passengers. Extensive evaluations across seven realistic traffic scenarios in Melbourne, Australia, demonstrate that our approach significantly reduces the number of impacted travelers, outperforming existing DRL and baseline methods. Experiments confirm the robustness of our model, showing minimal variance across diverse settings. This framework not only advocates for a fairer traffic signal system but also provides a scalable solution adaptable to varied urban traffic conditions.


💡 Research Summary

The paper addresses the challenge of coordinating traffic signals along multimodal corridors by introducing a novel multi‑agent deep reinforcement learning (DRL) framework called MA2B‑DDQN (Multi‑Agent Action‑Branching Double Deep Q‑Network). Traditional DRL approaches for traffic signal control (TSC) have been vehicle‑centric and struggle with the combinatorial explosion of discrete action spaces when scaling to multiple intersections. MA2B‑DDQN overcomes these limitations through two key innovations.

First, the authors propose an action‑branching formulation that decomposes the joint control problem into (i) local actions at each intersection, which allocate green time between the next two signal phases, and (ii) a single global action that selects the total duration of those two phases for the entire corridor. Instead of enumerating a flat joint action space that grows exponentially with the number of agents, the branched controller needs only one output head per intersection plus one global head, so the number of Q‑value outputs grows additively with corridor size. This enables scalable coordination while retaining the ability to synchronize timing across intersections.
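To make the scaling argument concrete, the sketch below compares the two formulations. The specific numbers (6 intersections, 5 local split options, 8 global duration options) are illustrative assumptions, not values from the paper.

```python
def flat_joint_action_space(n_intersections: int, k_local: int, d_global: int) -> int:
    """Flat formulation: every combination of per-intersection splits plus a
    corridor duration is a distinct joint action -> exponential in agents."""
    return (k_local ** n_intersections) * d_global


def branched_output_heads(n_intersections: int, k_local: int, d_global: int) -> int:
    """Action-branching formulation: one Q-head per intersection plus one
    global head, so the number of Q-value outputs grows additively."""
    return n_intersections * k_local + d_global


if __name__ == "__main__":
    N, K, D = 6, 5, 8  # hypothetical corridor
    print(flat_joint_action_space(N, K, D))  # 125000 joint actions
    print(branched_output_heads(N, K, D))    # 38 Q-value outputs
```

The branched network can still express every one of the 125,000 joint actions; it just never has to rank them all explicitly.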

Second, the framework adopts a double DQN architecture to mitigate Q‑value overestimation. A shared state encoder feeds into separate branches: one set of branches for the local per‑intersection actions and one branch for the global duration action. Each branch outputs its own Q‑values, and the overall policy selects the combination of local and global actions that maximizes the summed Q‑value. Experience replay and a target network are employed as in standard DDQN, ensuring stable learning.

Crucially, the authors shift the optimization objective from vehicle‑centric metrics (queue length, waiting time, throughput) to a human‑centric reward that directly penalizes the number of delayed individuals across all modes—private‑vehicle occupants, public‑transit passengers, cyclists, and pedestrians. The reward function aggregates delayed counts weighted by modality, thereby encouraging equitable treatment of all road users. Real‑time multimodal user counts are assumed to be available via advanced sensors and AI‑driven estimation (e.g., Bluetooth low‑energy scanners for pedestrians, on‑board passenger counting for buses, occupancy detection for cars).
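In the spirit of that objective, a person-delay reward can be sketched as below. The equal per-person weights and the mode names are assumptions for illustration; the paper may use different weightings and delay definitions.

```python
def human_centric_reward(delayed_counts, weights=None):
    """Negative weighted count of delayed individuals across travel modes.

    delayed_counts: dict mapping mode -> number of delayed people, e.g.
    {"car_occupants": 12, "bus_passengers": 30, "pedestrians": 7}.
    With equal weights, a bus of 30 delayed passengers costs the signal
    controller 30x as much as one delayed solo driver.
    """
    if weights is None:
        weights = {mode: 1.0 for mode in delayed_counts}  # one person, one vote
    return -sum(weights.get(mode, 1.0) * n for mode, n in delayed_counts.items())


counts = {"car_occupants": 12, "bus_passengers": 30,
          "cyclists": 4, "pedestrians": 7}
print(human_centric_reward(counts))  # -53.0
```

This is what distinguishes the objective from queue-length rewards: the agent is pushed to clear high-occupancy movements first, rather than to minimize vehicle counts.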

The experimental evaluation uses the open‑source microscopic traffic simulator SUMO to recreate seven realistic traffic scenarios in Melbourne, covering peak‑hour congestion, special events, adverse weather, and mixed‑traffic conditions. MA2B‑DDQN is benchmarked against a suite of baselines: traditional adaptive systems (SCOOT, SCATS), single‑agent DRL methods (DQN, PPO, DDPG), and recent multi‑agent DRL approaches (independent Q‑learning, centralized actor‑critic). Performance is measured primarily by the total number of delayed individuals, with secondary metrics including average vehicle delay, pedestrian waiting time, and overall throughput.

Results show that MA2B‑DDQN reduces delayed individuals by 15–28 % relative to the best DRL baselines and by up to 30 % for pedestrians and transit passengers. The variance across scenarios is low (standard deviation < 5 %), indicating robustness to diverse traffic patterns. The global duration action proves effective at aligning phase lengths across intersections, yielding smoother corridor‑wide flow without sacrificing local responsiveness.

The authors acknowledge limitations: the single global duration action may be insufficient for very long corridors or highly heterogeneous networks where finer‑grained timing control is needed. Moreover, the approach relies on accurate, low‑latency multimodal sensor data; errors in occupancy estimation could degrade performance. Future work is proposed to (i) extend the global action to a multi‑step duration schedule, (ii) incorporate meta‑learning or transfer learning to adapt to sensor noise and unseen network topologies, and (iii) test the framework on hardware‑in‑the‑loop platforms for real‑world deployment.

In summary, MA2B‑DDQN presents a scalable, equity‑focused solution for multi‑intersection traffic signal control, combining action‑branching to tame discrete action spaces with a human‑centric reward that directly optimizes fairness across all road users. The extensive empirical validation demonstrates both effectiveness and robustness, positioning the method as a promising candidate for next‑generation adaptive traffic management systems.

