A Distributed Dynamic Frequency Allocation Algorithm
We consider a network model where the nodes are grouped into a number of clusters and propose a distributed dynamic frequency allocation algorithm that achieves performance close to that of a centralized optimal algorithm. Each cluster chooses its transmission frequency band based on its knowledge of the interference that it experiences. The convergence of the proposed distributed algorithm to a sub-optimal frequency allocation pattern is proved. For some specific cases of spatial distributions of the clusters in the network, asymptotic bounds on the performance of the algorithm are derived and comparisons to the performance of optimal centralized solutions are made. These analytic results and additional simulation studies verify performance close to that of an optimum centralized frequency allocation algorithm. It is demonstrated that the algorithm achieves about 90% of the Shannon capacities corresponding to the optimum/near-optimum centralized frequency band assignments. Furthermore, we consider the scenario where each cluster can be in active or inactive mode according to a two-state Markov model. We derive conditions to guarantee finite steady state variance for the output of the algorithm using stochastic analysis. Further simulation studies confirm the results of stochastic modeling and the performance of the algorithm in the time-varying setup.
💡 Research Summary
The paper addresses the problem of allocating frequency bands in a wireless ad‑hoc or cognitive‑radio network where a central controller is unavailable. The authors assume that the network is already partitioned into N clusters, each represented by a cluster head that can measure the interference it experiences on each available band. Transmission power is assumed equal for all nodes, and the channel model follows a simple path‑loss law with exponent η, ignoring shadowing and fast fading for analytical tractability.
The core contribution is a fully distributed, asynchronous algorithm (the “Main Algorithm”). At each update instant a single cluster scans all r frequency bands, measures the aggregate interference that would be incurred if it transmitted on each band (interference comes only from other clusters using the same band), and then switches to the band that yields the smallest measured interference. Updates are assumed to be rare enough that two clusters never update simultaneously; this reflects the lack of a common clock in real ad‑hoc deployments. The algorithm requires no exchange of state information between clusters and relies only on local interference measurements.
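The update rule described above can be sketched in a few lines. This is a minimal simulation of the greedy band-selection loop, not the paper's implementation; the parameters (`ETA`, `P0`), the random 2-D placement, and the round-robin update schedule are all illustrative assumptions.

```python
import random

ETA = 3.0   # assumed path-loss exponent
P0 = 1.0    # common transmit power for all nodes

def interference(positions, bands, i, band):
    """Aggregate interference cluster i would measure on `band`:
    sum of P0 * d^(-ETA) over every other cluster using that band."""
    xi, yi = positions[i]
    total = 0.0
    for j, (xj, yj) in enumerate(positions):
        if j != i and bands[j] == band:
            d = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            total += P0 * d ** (-ETA)
    return total

def run(n_clusters=20, n_bands=4, seed=0, max_rounds=100):
    rng = random.Random(seed)
    positions = [(10 * rng.random(), 10 * rng.random())
                 for _ in range(n_clusters)]
    bands = [rng.randrange(n_bands) for _ in range(n_clusters)]
    for _ in range(max_rounds):
        changed = False
        # Clusters update one at a time in random order, mimicking the
        # assumption that no two clusters ever update simultaneously.
        for i in rng.sample(range(n_clusters), n_clusters):
            best = min(range(n_bands),
                       key=lambda b: interference(positions, bands, i, b))
            if (interference(positions, bands, i, best)
                    < interference(positions, bands, i, bands[i])):
                bands[i] = best
                changed = True
        if not changed:  # no cluster wants to switch: a local minimum
            break
    return positions, bands
```

Note that each cluster needs only its own interference measurements on the r bands; no state is exchanged between clusters.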
The authors prove convergence (Theorem 4.1) by showing that a global interference potential function strictly decreases at each update and that the finite state space guarantees termination at a local minimum in polynomial time. They then derive performance bounds. The first bound (Theorem 4.2) states that after convergence the total interference I_a is at most 1/r of the worst‑case interference I_w (the case where all clusters share a single band). This demonstrates that simply increasing the number of bands yields a linear reduction in interference.
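The potential-function argument can be checked numerically. The sketch below (my construction, not the paper's proof script) defines the global potential as the sum of pairwise path-loss gains over same-band cluster pairs; when cluster i switches from band a to band b, the potential changes by exactly (interference seen on b) minus (interference seen on a), so every strictly improving greedy switch lowers it.

```python
import itertools
import random

ETA, P0, N_BANDS = 3.0, 1.0, 3  # illustrative parameters

def pair_gain(p, q):
    d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return P0 * d ** (-ETA)

def potential(pos, bands):
    # Global potential: pairwise gain summed over same-band pairs.
    return sum(pair_gain(pos[i], pos[j])
               for i, j in itertools.combinations(range(len(pos)), 2)
               if bands[i] == bands[j])

rng = random.Random(1)
n = 15
pos = [(10 * rng.random(), 10 * rng.random()) for _ in range(n)]
bands = [rng.randrange(N_BANDS) for _ in range(n)]

phis = [potential(pos, bands)]
for _ in range(300):
    i = rng.randrange(n)
    seen = [sum(pair_gain(pos[i], pos[j])
                for j in range(n) if j != i and bands[j] == b)
            for b in range(N_BANDS)]
    best = seen.index(min(seen))
    if seen[best] < seen[bands[i]]:  # strict improvement only
        # The switch changes the potential by seen[best] - seen[bands[i]] < 0.
        bands[i] = best
        phis.append(potential(pos, bands))

# Every recorded switch strictly lowered the potential; since the band
# configuration space is finite, this forces termination at a local minimum.
assert all(later < earlier for earlier, later in zip(phis, phis[1:]))
```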
Because a general spatial configuration makes tighter analysis difficult, the paper focuses on a specific topology: a uniform linear array where clusters lie on a line with equal spacing d. For this case, the authors obtain asymptotic expressions as N → ∞. Theorem 4.4 shows that the optimal (minimum‑interference) strategy achieves an average per‑cluster interference of at least (1/r^η)·ζ(η)·P_0·d^{‑η}, where ζ(·) is the Riemann zeta function. Theorem 4.5 proves that the uniform linear array actually attains this bound, and Theorem 4.6 identifies the optimal assignment when r = 2 and η ≥ 2: an alternating pattern of the two bands. Corollaries 4.7 and 4.8 translate these results into explicit ratios between the interference achieved by the distributed algorithm and the optimal interference, showing that the algorithm’s performance approaches the optimum as the number of clusters grows.
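A quick numeric illustration of the linear-array bound (again my construction, with the caveat that the exact constant depends on whether interference is counted on one or both sides of a cluster): under the periodic assignment 0, 1, ..., r-1, 0, 1, ..., same-band neighbours of a middle cluster sit at distances k·r·d, so its interference is 2·P0·Σ_{k≥1}(k·r·d)^(-η) = 2·P0·ζ(η)/(r·d)^η, consistent with the stated lower bound of (1/r^η)·ζ(η)·P0·d^(-η).

```python
import math

P0, d, r, eta = 1.0, 1.0, 2, 2.0  # illustrative parameters; r=2, eta=2

def zeta(s, terms=10**5):
    """Truncated Riemann zeta sum; error is O(1/terms) for s near 2."""
    return sum(k ** (-s) for k in range(1, terms + 1))

# Closed form for the middle cluster of an infinite alternating array.
closed_form = 2 * P0 * zeta(eta) / (r * d) ** eta

# Direct truncated sum over a long finite array, measured at the middle.
N = 2001
mid = N // 2
direct = sum(P0 * (abs(i - mid) * d) ** (-eta)
             for i in range(N)
             if i != mid and i % r == mid % r)

# For eta = 2 the closed form reduces to 2 * (pi^2/6) / 4 = pi^2/12.
print(closed_form, direct)
```

The two values agree to about three decimal places, with the finite-array sum slightly below the infinite-array closed form, as expected from the truncated tail.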
The paper also extends the model to time‑varying activity. Each cluster can be active or inactive according to an independent two‑state Markov chain. By modeling the evolution of the aggregate interference as a stochastic differential equation with exponential decay, the authors derive conditions under which the steady‑state variance of the interference remains bounded. This analysis yields a trade‑off among the update rate, the switching probability between sleep and active modes, and the geometric properties of the node distribution.
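The activity model can be illustrated as follows. This is a simplified sketch, not the paper's stochastic-differential-equation analysis: each cluster flips between inactive and active with per-step probabilities p and q under an independent two-state Markov chain, and we estimate the empirical mean and variance of the active-cluster count, which drives the fluctuations in aggregate interference.

```python
import random

def simulate(n=30, p=0.05, q=0.05, steps=5000, seed=2):
    """Independent two-state Markov chains: P(off->on)=p, P(on->off)=q.
    Returns the empirical mean and variance of the number of active
    clusters over the run (started from the stationary distribution)."""
    rng = random.Random(seed)
    active = [rng.random() < p / (p + q) for _ in range(n)]
    samples = []
    for _ in range(steps):
        for i in range(n):
            if active[i]:
                active[i] = rng.random() >= q  # stay on with prob 1-q
            else:
                active[i] = rng.random() < p   # turn on with prob p
        samples.append(sum(active))
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var

mean, var = simulate()
# Stationary P(on) = p/(p+q) = 0.5, so for independent chains we expect
# mean ~ n/2 = 15 and variance ~ n/4 = 7.5.
```

In the paper's setting, slower switching (smaller p + q) relative to the algorithm's update rate gives the clusters more time to re-adapt between activity changes, which is one side of the trade-off the authors quantify.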
Simulation results support the theoretical findings. Experiments on both random two‑dimensional deployments and the uniform linear array confirm that the algorithm converges quickly, that the measured aggregate interference matches the derived bounds, and that the resulting Shannon capacity is roughly 90% of that obtained by a centralized optimal allocation. Moreover, varying the Markov transition probabilities shows that the algorithm’s performance is robust to temporal fluctuations in cluster activity.
In summary, the paper makes four principal contributions: (1) a simple, fully distributed, asynchronous frequency‑selection mechanism that requires only local interference sensing; (2) a rigorous convergence proof and polynomial‑time complexity guarantee; (3) tight asymptotic performance bounds for a linear array topology, including identification of the optimal alternating assignment for two bands; and (4) a stochastic analysis of the algorithm’s behavior under time‑varying cluster activity, establishing conditions for finite steady‑state variance. These results provide a practical foundation for implementing dynamic spectrum allocation in real‑world networks where centralized control is infeasible, offering low implementation complexity while achieving performance close to that of an optimal centralized solution.