Sequential Task Assignment and Resource Allocation in V2X-Enabled Mobile Edge Computing

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

The convergence of Mobile Edge Computing (MEC) and vehicular networks has emerged as a vital enabler for increasingly intelligent onboard applications. This paper proposes a multi-tier task offloading mechanism for MEC-enabled vehicular networks that leverages vehicle-to-everything (V2X) communications. The study focuses on applications with sequential subtasks and explores collaboration across two tiers. In the vehicle tier, we design a needing vehicle (NV)-helping vehicle (HV) matching scheme and study inter-vehicle collaborative computation, jointly optimizing the task offloading decision and the communication and computation resource allocation to minimize energy consumption while meeting delay requirements. In the roadside unit (RSU) tier, collaboration among RSUs is investigated to further address multi-access issues of subchannel and computation resources for multiple vehicles. A two-step method first obtains optimal continuous solutions for the multifaceted variables and then derives the discrete uplink subchannel allocation with low complexity. Detailed experiments demonstrate that the proposed method reduces average energy consumption by at least 15% compared with benchmarks under varying task delay requirements and numbers of vehicles, and assess the impact of various parameters on system energy consumption.


💡 Research Summary

The paper addresses the emerging challenge of executing computation‑intensive, latency‑sensitive vehicular applications—particularly vision‑based tasks such as object detection and traffic‑sign recognition—within a Mobile Edge Computing (MEC) environment that leverages Vehicle‑to‑Everything (V2X) communications. Unlike most prior works that treat tasks as either wholly offloaded or arbitrarily divisible, this study explicitly models tasks as a sequence of dependent subtasks (e.g., layers of a deep neural network) and proposes a two‑tier offloading framework that jointly optimizes task assignment, communication, and computing resources across both vehicle‑to‑vehicle (V2V) and vehicle‑to‑infrastructure (V2I) links.

System Overview
The system consists of (i) a Vehicle Tier where “needing vehicles” (NVs) that cannot meet their deadline locally seek assistance from idle “helping vehicles” (HVs), and (ii) an RSU Tier where roadside units (RSUs) equipped with MEC servers collaboratively process tasks that could not be matched in the Vehicle Tier. Vehicles and RSUs are modeled in a 2‑D Cartesian plane; each RSU has a total bandwidth B divided into b orthogonal sub‑channels, and each RSU may have a different number of CPU cores (i.e., computing capacity).
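As a minimal sketch of the system model described above (all class and field names are illustrative, not from the paper), the two-tier entities and the bandwidth partitioning can be expressed as:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    # position and velocity in the 2-D Cartesian road plane
    x: float
    y: float
    speed: float
    cpu_hz: float        # available computing capacity (cycles/s)
    is_needing: bool     # True for an NV with a pending task, False for an HV

@dataclass
class RSU:
    x: float
    y: float
    bandwidth_hz: float  # total uplink bandwidth B
    n_subchannels: int   # b orthogonal sub-channels
    cpu_cores: int       # RSUs may differ in computing capacity

    def subchannel_bandwidth(self) -> float:
        # each orthogonal sub-channel gets an equal share B/b
        return self.bandwidth_hz / self.n_subchannels

# example: a 20 MHz RSU split into 10 sub-channels of 2 MHz each
rsu = RSU(x=0.0, y=0.0, bandwidth_hz=20e6, n_subchannels=10, cpu_cores=8)
```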

Vehicle‑Tier Design

  1. NV‑HV Matching – A bipartite matching problem is formulated to maximize the number of NVs that can be paired with suitable HVs. The matching metric incorporates relative speed, Euclidean distance, path‑loss (including a log‑distance model), available computing cycles, and the deadline of each subtask.
  2. Collaborative Computation Model – Once matched, an NV processes its early subtasks locally; from a designated subtask (m_h) onward, the remaining subtasks are offloaded to the HV. The V2V transmission delay (\tau_{n,h}) follows Shannon’s capacity formula: (\tau_{n,h}=W_{n,m_h-1,m_h}/(B_{V2V}\log_2(1+SNR_{n,h}))). Transmission energy is the product of the transmit power and (\tau_{n,h}). The computation delay for subtask (m) is (\tau_{c,n,m}=C_{n,m}/f_{n,m}) when executed locally and (C_{n,m}/f_{h,m}) when offloaded, where (f_{n,m}) and (f_{h,m}) denote the CPU frequencies the NV and the HV allocate to that subtask. Computation energy follows the widely used dynamic-power model (E_{c,n,m}=\kappa C_{n,m} f^2), with (f) the frequency of the vehicle executing the subtask.
  3. Optimization Objective – The total energy (transmission + computation) of all NV‑HV pairs is minimized subject to (i) per‑task deadline constraints (\tau_{n,h}+\sum_{m}\tau_{c,n,m}\le T_{n}^{max}), (ii) CPU capacity limits for each vehicle, and (iii) binary matching constraints. After the binary variables are relaxed, the problem becomes convex; applying Lagrange multipliers and the KKT conditions yields closed‑form expressions for the optimal resource fractions, with the remaining dual variables computed via an interior‑point method.
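As one illustrative sketch of step 1 (not the paper's exact algorithm), maximizing the number of matched NVs once the feasibility metric has been evaluated for each NV-HV pair is a maximum bipartite matching, solvable with augmenting paths; the `feasible` relation below is assumed to be precomputed from the speed, distance, path-loss, and deadline checks:

```python
def max_bipartite_matching(feasible):
    """feasible[nv] is the set of HV indices that can serve NV `nv`,
    i.e., that pass the speed/distance/path-loss/deadline checks."""
    match_hv = {}  # hv -> nv currently assigned to it

    def try_assign(nv, visited):
        for hv in feasible[nv]:
            if hv in visited:
                continue
            visited.add(hv)
            # assign if hv is free, or if its current NV can be re-matched
            if hv not in match_hv or try_assign(match_hv[hv], visited):
                match_hv[hv] = nv
                return True
        return False

    matched = sum(try_assign(nv, set()) for nv in feasible)
    return matched, {nv: hv for hv, nv in match_hv.items()}

# toy instance: NV 0 accepts HV 0 or 1, while NVs 1 and 2 both need HV 1
n_matched, pairs = max_bipartite_matching({0: {0, 1}, 1: {1}, 2: {1}})
```

Only two of the three NVs can be served here; the third falls through to the RSU tier, mirroring the hand-off described later in the summary.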

RSU‑Tier Design
NVs that cannot find an HV are offloaded to the RSU tier. Here the challenges are twofold: (a) allocating uplink sub‑channels among multiple NVs sharing the same RSU, and (b) distributing the MEC server’s computing cycles among the sequential subtasks that may span several RSUs.

  1. Transmission Model – Each NV transmits the initial input data (size (W_{n,0})) to its serving RSU over a set of (b_n) sub‑channels, each of bandwidth (B/b). The uplink delay is (\tau_n = W_{n,0}/(b_n (B/b) \log_2(1+SNR_{n,RSU}))).
  2. Computing Allocation – Subtasks are partitioned across RSUs: RSU (r) handles subtasks (m_r) … (m_{r+1}-1). The computing delay on RSU (r) for subtask (m) is (\tau_{c,r,n,m}=C_{n,m}/f_{r,n,m}), where (f_{r,n,m}) is the fraction of RSU (r)’s CPU allocated to it.
  3. Joint Continuous Optimization – The objective again is total energy minimization (uplink transmission energy plus RSU computation energy) under deadline constraints. By relaxing the integer sub‑channel variable (b_n) to a continuous value, the problem becomes convex and is solved analytically for optimal (\tau_n), (b_n), and (f_{r,n,m}).
  4. Discrete Sub‑Channel Recovery – The continuous solution yields a non‑integer (b_n^\star). An adjacent‑integer point search is performed: the algorithm evaluates the objective at (\lfloor b_n^\star \rfloor) and (\lceil b_n^\star \rceil) and selects the lower‑cost integer. Because the number of sub‑channels per RSU is modest (tens), this step incurs negligible computational overhead.
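The uplink-delay formula and the adjacent-integer recovery of step 4 can be sketched as follows; the toy cost function is invented for illustration only (the actual objective is the energy model above):

```python
import math

def uplink_delay(w0_bits, b_n, per_sub_bw_hz, snr):
    # Shannon-rate uplink over b_n sub-channels, each per_sub_bw_hz wide
    return w0_bits / (b_n * per_sub_bw_hz * math.log2(1.0 + snr))

def recover_integer_subchannels(b_star, cost, b_max):
    # adjacent-integer point search: evaluate the (convex) objective at
    # the two integers bracketing the continuous optimum b_star
    lo = max(1, math.floor(b_star))
    hi = min(b_max, math.ceil(b_star))
    return min((lo, hi), key=cost)

# toy cost: transmission energy falls with b_n while a per-channel
# occupancy penalty grows, so the continuous optimum is interior
cost = lambda b: 100.0 / b + 3.0 * b
b_int = recover_integer_subchannels(5.7, cost, b_max=10)
```

Because the objective is convex in the relaxed variable, checking only the two adjacent integers suffices, which is why this step adds negligible overhead.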

Algorithmic Flow

  1. Perform NV‑HV bipartite matching.
  2. Solve the vehicle‑tier convex resource allocation.
  3. Identify unmatched NVs and forward them to the RSU tier.
  4. Solve the RSU‑tier convex allocation, then apply integer sub‑channel refinement.
  5. Deploy the resulting schedule to the network.
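The five steps above can be tied together in a small driver; the solver interfaces (`match_fn`, `vehicle_tier_fn`, `rsu_tier_fn`) are hypothetical placeholders, not APIs from the paper:

```python
def schedule(nvs, hvs, rsus, match_fn, vehicle_tier_fn, rsu_tier_fn):
    """End-to-end flow: match NVs to HVs, solve the vehicle tier,
    then forward unmatched NVs to the RSU tier."""
    pairs = match_fn(nvs, hvs)                         # step 1
    vehicle_plan = vehicle_tier_fn(pairs)              # step 2
    unmatched = [nv for nv in nvs if nv not in pairs]  # step 3
    rsu_plan = rsu_tier_fn(unmatched, rsus)            # steps 4-5
    return vehicle_plan, rsu_plan

# trivial stub solvers, just to exercise the control flow
plan = schedule([0, 1], [0], [],
                match_fn=lambda nvs, hvs: {0: 0},
                vehicle_tier_fn=lambda pairs: pairs,
                rsu_tier_fn=lambda unmatched, rsus: unmatched)
```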

Complexity and Convergence

  • Matching: (O(N\cdot H)).
  • Convex allocations: polynomial time (interior‑point, typically (O((N+R)^3))).
  • Integer refinement: linear in the number of sub‑channels (b). Overall runtime is suitable for real‑time vehicular scenarios.

Performance Evaluation
Simulations vary the number of vehicles (20–100), deadline requirements (50–200 ms), V2V/V2I channel conditions, and RSU computing capacities. Benchmarks include (a) a single‑RSU offloading scheme, (b) random NV‑HV pairing, and (c) an ADMM‑based distributed offloading method. Metrics: average energy consumption, deadline‑satisfaction ratio, and matching success rate. Results show:

  • The proposed scheme reduces average energy consumption by at least 15 % compared with all baselines.
  • Deadline satisfaction exceeds 95 % across all tested configurations.
  • Matching success reaches >80 % even when vehicle density is high.
  • Sensitivity analysis demonstrates that increasing V2V bandwidth or RSU CPU resources yields diminishing returns beyond a certain point, highlighting the importance of balanced resource provisioning.

Conclusions and Future Work
The paper delivers a holistic V2X‑MEC offloading framework that respects sequential subtask dependencies, jointly optimizes vehicle‑level collaboration and RSU‑level multi‑access, and solves a mixed continuous‑discrete problem with low computational complexity. Future directions include predictive mobility‑aware matching, reinforcement‑learning‑driven adaptive offloading under stochastic channel states, and global load‑balancing among multiple RSUs to further improve scalability and robustness.

