AROLA: A Modular Layered Architecture for Scaled Autonomous Racing

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Autonomous racing has advanced rapidly, particularly on scaled platforms, and software stacks must evolve accordingly. In this work, AROLA is introduced as a modular, layered software architecture in which fragmented and monolithic designs are reorganized into interchangeable layers and components connected through standardized ROS 2 interfaces. The autonomous-driving pipeline is decomposed into sensing, pre-processing, perception, localization and mapping, planning, behavior, control, and actuation, enabling rapid module replacement and objective benchmarking without reliance on custom message definitions. To support consistent performance evaluation, a Race Monitor framework is introduced as a lightweight system through which lap timing, trajectory quality, and computational load are logged in real time and standardized post-race analyses are generated. AROLA is validated in simulation and on hardware using the RoboRacer platform, including deployment at the 2025 RoboRacer IV25 competition. Together, AROLA and Race Monitor demonstrate that modularity, transparent interfaces, and systematic evaluation can accelerate development and improve reproducibility in scaled autonomous racing.


💡 Research Summary

The paper presents AROLA (Autonomous Racing Open Layered Architecture), a modular, ROS 2‑based software framework designed to address the fragmentation and monolithic nature of existing stacks for scaled autonomous racing platforms such as RoboRacer. AROLA decomposes the autonomous‑driving pipeline into eight clearly defined layers: sensing, preprocessing, perception, localization & mapping, planning, behavior, control, and actuation. Each layer is implemented as an interchangeable ROS 2 node (or set of nodes) that communicates through standardized message types (e.g., LaserScan, Odometry, OccupancyGrid, AckermannDriveStamped) and tf transforms. By minimizing custom messages, the architecture promotes portability, ease of integration with existing ROS tools, and straightforward swapping of algorithms or hardware components.
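The value of standardized interfaces is that any layer can be swapped without touching its neighbors. The following is a minimal plain-Python sketch of that idea (not AROLA's actual ROS 2 code; in AROLA the hand-off happens over ROS 2 topics with real `sensor_msgs/LaserScan` and `ackermann_msgs/AckermannDriveStamped` messages, and the field names and logic below are illustrative):

```python
from dataclasses import dataclass
from typing import Callable, List

# Simplified stand-ins for the standardized ROS 2 message types.
@dataclass
class LaserScan:
    ranges: List[float]          # distances to obstacles, metres

@dataclass
class AckermannDrive:
    speed: float                 # m/s
    steering_angle: float        # rad

# A "layer" is any callable mapping one standardized message to the next.
# Because every planner consumes a LaserScan and emits an AckermannDrive,
# implementations can be swapped without changing the rest of the stack.
def gap_follower(scan: LaserScan) -> AckermannDrive:
    # Toy gap follower: steer toward the farthest reading.
    best = max(range(len(scan.ranges)), key=lambda i: scan.ranges[i])
    center = (len(scan.ranges) - 1) / 2
    angle = (best - center) * 0.01       # toy angular resolution, rad/beam
    return AckermannDrive(speed=2.0, steering_angle=angle)

def straight_driver(scan: LaserScan) -> AckermannDrive:
    # Trivial drop-in replacement with the same interface.
    return AckermannDrive(speed=1.0, steering_angle=0.0)

def run_pipeline(planner: Callable[[LaserScan], AckermannDrive],
                 scan: LaserScan) -> AckermannDrive:
    # In AROLA this hand-off happens over a ROS 2 topic; here it is a call.
    return planner(scan)

scan = LaserScan(ranges=[1.0, 1.5, 3.0, 1.2])
cmd = run_pipeline(gap_follower, scan)
```

Swapping `gap_follower` for `straight_driver` (or any other planner with the same signature) requires no change elsewhere, which is the modularity the architecture aims for at the topic level.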

The authors emphasize a “top‑to‑bottom” sequential data flow while allowing optional cross‑layer feedback when needed, thereby preserving modularity without sacrificing flexibility for high‑speed maneuvers. The architecture is hardware‑agnostic; it has been deployed on a Jetson‑powered RoboRacer equipped with a Hokuyo LiDAR and a VESC motor controller, as well as in multiple simulators (the official RoboRacer simulator and a Forza‑based environment). This demonstrates that the same layered configuration can run unchanged across diverse platforms.

To enable systematic performance evaluation, the paper introduces Race Monitor, an open ROS 2 monitoring suite that logs lap count, lap time, trajectory quality, CPU/GPU load, and other telemetry in real time. Data are published on dedicated topics and recorded via ros2bag; post‑race analysis leverages the evo package to compute odometry and SLAM metrics automatically. Configuration is driven by a simple YAML file, and reference trajectories can be supplied in CSV, TUM, or KITTI formats. The monitor thus provides a reproducible benchmark for comparing different controllers or pipeline variants under identical conditions.
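A YAML-driven setup of this kind might look like the fragment below. This is a hypothetical sketch only; the key names are illustrative and not taken verbatim from the Race Monitor package:

```yaml
# Hypothetical Race Monitor configuration (illustrative key names).
race_monitor:
  lap_detection:
    start_line: [0.0, 0.0]        # map-frame coordinates of the start line
  logging:
    topics: [/odom, /drive]       # telemetry recorded via ros2bag
    record_rosbag: true
  evaluation:
    reference_trajectory: reference.csv   # CSV, TUM, or KITTI format
    metrics: [lap_time, ape, rpe, cpu_load]
```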

Experimental validation consists of two parts. First, during preparation for the 2025 RoboRacer IV25 competition, the authors evaluated three controllers—Gap Follower, Pure Pursuit, and Model Predictive Control (MPC)—using the same AROLA stack. Pure Pursuit achieved the fastest average lap time (10.35 s) and high consistency (0.98), while MPC offered low tracking error but incurred significantly higher CPU load (55 %) and control latency (42 ms). Gap Follower was the most reactive but the slowest (12.85 s). The chosen Pure Pursuit configuration ultimately secured third place in the competition with a best lap of 10.1 s, only 0.02 s behind the runner‑up.

Second, an external LQR controller was evaluated on a “Berlin Map” after the competition. Over seven laps, the LQR achieved an average lap time of 19.75 s, a consistency score of 0.99, and modest CPU usage (22 %). While the Absolute Pose Error (APE) was higher due to a scaling mismatch in the reference trajectory, the Relative Pose Error (RPE) remained low, confirming stable tracking.
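The high-APE/low-RPE pattern is exactly what a scale or alignment mismatch produces: absolute error accumulates with distance along the trajectory, while per-step relative error stays small. A simplified, translation-only sketch of the two metrics illustrates this (the evo package computes full pose variants; the trajectories below are synthetic):

```python
import math

def ape(est, ref):
    """Mean absolute pose error: distance between matched positions."""
    return sum(math.dist(e, r) for e, r in zip(est, ref)) / len(est)

def rpe(est, ref):
    """Mean relative pose error: error in consecutive displacements."""
    errs = []
    for i in range(1, len(est)):
        de = (est[i][0] - est[i-1][0], est[i][1] - est[i-1][1])
        dr = (ref[i][0] - ref[i-1][0], ref[i][1] - ref[i-1][1])
        errs.append(math.hypot(de[0] - dr[0], de[1] - dr[1]))
    return sum(errs) / len(errs)

# A straight reference trajectory and an estimate with a 10 % scale
# mismatch: APE grows with distance travelled, RPE stays at the
# constant per-step error.
ref = [(float(i), 0.0) for i in range(11)]
est = [(1.1 * x, y) for x, y in ref]
```

Here `ape(est, ref)` is 0.5 while `rpe(est, ref)` is only 0.1, mirroring the LQR result: stable step-to-step tracking despite a large accumulated absolute error.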

The authors discuss limitations: the current experimental focus is primarily on the control layer, leaving perception, mapping, and planning modules under‑explored; error propagation across pipeline stages remains a challenge; and the architecture has yet to be tested with a broader set of learning‑based components. Future work is proposed to integrate diverse deep‑learning perception and planning modules, incorporate uncertainty modeling, and develop mechanisms to mitigate error propagation.

In conclusion, AROLA together with Race Monitor offers the autonomous‑racing community a standardized, extensible software backbone and a unified benchmarking tool. This combination accelerates development cycles, facilitates fair performance comparison, and improves reproducibility, thereby advancing research on scaled autonomous racing.

