CoBA: Integrated Deep Learning Model for Reliable Low-Altitude UAV Classification in mmWave Radio Networks
Uncrewed Aerial Vehicles (UAVs) are increasingly used in civilian and industrial applications, making secure low-altitude operations crucial. In dense mmWave environments, accurately classifying low-altitude UAVs as operating in either authorized or restricted airspace remains challenging, requiring models that handle complex propagation and signal variability. This paper proposes a deep learning model, referred to as CoBA (integrated Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (BiLSTM), and Attention), which leverages Fifth Generation (5G) millimeter-wave (mmWave) radio measurements to classify low-altitude UAV operations in authorized and restricted airspaces. The proposed CoBA model integrates convolutional, bidirectional recurrent, and attention layers to capture both spatial and temporal patterns in UAV radio measurements. To validate the model, a dedicated dataset is collected using the 5G mmWave network at TalTech, with controlled low-altitude UAV flights in authorized and restricted scenarios. The model is evaluated against conventional ML models and a fingerprinting-based benchmark. Experimental results show that CoBA achieves superior accuracy, significantly outperforming all baseline models and demonstrating its potential for reliable and regulated UAV airspace monitoring.
💡 Research Summary
The paper introduces CoBA, an integrated deep‑learning architecture that combines a one‑dimensional convolutional neural network (CNN), a bidirectional long short‑term memory network (BiLSTM), and an attention mechanism to classify low‑altitude uncrewed aerial vehicles (UAVs) as operating either within authorized airspace or in restricted zones. The authors argue that existing approaches, which largely rely on sub‑6 GHz cellular measurements and conventional machine‑learning classifiers (e.g., decision trees, k‑nearest neighbors, logistic regression), suffer severe performance degradation at altitudes below 50 m due to complex propagation phenomena such as multipath fading, shadowing, and rapid SINR fluctuations.
To address these challenges, the study leverages 5G New Radio (NR) millimeter‑wave (mmWave) measurements collected on the Tallinn University of Technology (TalTech) campus. Four outdoor radio units (RUs) and one indoor RU operating in the n258 band (≈24.3 GHz) provide synchronization signal block (SSB) based metrics: Physical Cell Identity, SSB index, RSSI, SSB‑RSSI, SS‑RSRP, SS‑SINR, and SS‑RSRQ. These seven features are sampled at 5 Hz and organized into time‑series windows of length l, forming input tensors of shape (l × f).
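The windowing step above can be sketched in a few lines of NumPy. The window length and stride below are illustrative assumptions; the paper only states that overlapping windows of length l are formed from the seven 5 Hz features:

```python
import numpy as np

def make_windows(measurements, window_len, stride):
    """Segment a (T, f) measurement stream into overlapping (window_len, f) windows.

    Rows hold the seven SSB-based features (PCI, SSB index, RSSI, SSB-RSSI,
    SS-RSRP, SS-SINR, SS-RSRQ) sampled at 5 Hz. The stride value is an
    assumption; the paper specifies overlap but not the exact step size.
    """
    T, f = measurements.shape
    starts = range(0, T - window_len + 1, stride)
    return np.stack([measurements[s:s + window_len] for s in starts])

# 60 s of 5 Hz samples -> 300 rows of 7 features
stream = np.random.randn(300, 7)
windows = make_windows(stream, window_len=50, stride=10)
print(windows.shape)  # (26, 50, 7)
```

Each window then serves as one (l × f) input tensor for the classifier.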
The CoBA pipeline proceeds as follows: (1) two sequential 1‑D CNN layers with layer‑normalization and ReLU extract short‑term spatial patterns and reduce dimensionality to c channels; (2) a BiLSTM processes the CNN output in both forward and backward directions, concatenating hidden states to produce a representation of shape (l × 2h) that captures long‑range temporal dependencies; (3) an attention block computes a scalar score for each time step, normalizes the scores with softmax, and aggregates a context vector that emphasizes the most informative measurements (e.g., sudden RSSI spikes); (4) fully‑connected layers with ReLU and dropout refine the context vector, while a residual connection adds the raw attention output to the final logits, improving gradient flow.
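The four stages above can be assembled into a compact PyTorch sketch. The layer widths (c = 64 channels, h = 128 hidden units) and kernel sizes are illustrative assumptions, not values published in the paper:

```python
import torch
import torch.nn as nn

class CoBA(nn.Module):
    """Sketch of the CNN-BiLSTM-Attention pipeline described above.

    Hyperparameters (c=64, h=128, kernel_size=3, dropout=0.3) are assumed
    for illustration; the paper does not list them in this summary.
    """
    def __init__(self, n_features=7, c=64, h=128, n_classes=2, p_drop=0.3):
        super().__init__()
        # (1) two sequential 1-D conv layers with ReLU
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, c, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(c, c, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.norm = nn.LayerNorm(c)
        # (2) BiLSTM over the l time steps -> per-step features of size 2h
        self.bilstm = nn.LSTM(c, h, batch_first=True, bidirectional=True)
        # (3) scalar attention score per time step, softmax-normalized
        self.score = nn.Linear(2 * h, 1)
        # (4) FC head with dropout, plus a residual path to the logits
        self.head = nn.Sequential(nn.Linear(2 * h, 2 * h), nn.ReLU(),
                                  nn.Dropout(p_drop))
        self.out = nn.Linear(2 * h, n_classes)
        self.res = nn.Linear(2 * h, n_classes)  # residual from the context

    def forward(self, x):                       # x: (batch, l, n_features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, l, c)
        z = self.norm(z)
        z, _ = self.bilstm(z)                   # (batch, l, 2h)
        a = torch.softmax(self.score(z), dim=1)             # (batch, l, 1)
        ctx = (a * z).sum(dim=1)                # context vector (batch, 2h)
        return self.out(self.head(ctx)) + self.res(ctx)

model = CoBA()
logits = model(torch.randn(4, 50, 7))           # 4 windows of 50 x 7
print(logits.shape)  # torch.Size([4, 2])
```

The residual connection mirrors the paper's description of adding the raw attention output to the final logits to improve gradient flow.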
Training employs a weighted cross‑entropy loss to mitigate class imbalance, the AdamW optimizer (learning rate 10⁻⁴, with weight decay), and gradient clipping for stability. Data are split, with stratification, into 70 % training, 15 % validation, and 15 % test sets; sliding‑window segmentation creates overlapping sequences for each UAV flight. Evaluation metrics include accuracy, precision, recall, and F1‑score.
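A minimal sketch of this training setup follows. The class counts, weight-decay value, and clipping threshold are assumptions for illustration (the paper states only the 10⁻⁴ learning rate), and a trivial linear model stands in for CoBA:

```python
import torch
import torch.nn as nn

# Weighted cross-entropy to counter class imbalance; weights inversely
# proportional to class frequency (the exact weighting scheme is assumed).
counts = torch.tensor([700.0, 300.0])   # e.g. authorized vs. restricted windows
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

model = nn.Sequential(nn.Flatten(), nn.Linear(50 * 7, 2))  # stand-in for CoBA
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

x = torch.randn(64, 50, 7)              # one batch of (l x f) windows
y = torch.randint(0, 2, (64,))

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
# Gradient clipping for stability, as the paper describes (max-norm assumed)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
print(loss.item() > 0)  # True
```

In practice the single step above would sit inside epoch and mini-batch loops over the stratified 70/15/15 splits.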
Experimental results demonstrate that CoBA achieves an overall classification accuracy of 96.8 % and an F1‑score of 95.5 %, markedly surpassing baseline models: decision trees (≈54 % accuracy at ≤30 m), K‑NN (≈62 % at low altitude), logistic regression (≈40 % at 15 m), and a fingerprinting benchmark (≈88 %). Notably, CoBA maintains >94 % accuracy for altitudes ≤30 m, confirming its robustness to the severe channel variability that hampers traditional methods. Attention weight visualizations reveal that the model focuses on specific RU measurements where RSSI or SINR exhibits abrupt changes, aligning with physical intuition about restricted‑zone incursions.
From a computational standpoint, the entire network contains roughly 1.2 million parameters. Inference on an NVIDIA RTX 3080 GPU with batch size 64 averages under 10 ms per sequence, indicating feasibility for real‑time UAV monitoring in operational networks.
The authors acknowledge limitations: the dataset originates from a single campus environment, so external validity across diverse urban or rural settings remains to be proven; the current model, while lightweight relative to large vision networks, may still be too heavy for low‑power edge devices without further pruning or quantization. Future work is suggested on cross‑site generalization, model compression, and integration with higher‑layer network management functions (e.g., automated alerts, dynamic spectrum allocation).
In summary, the paper makes a substantive contribution by demonstrating that a CNN‑BiLSTM‑Attention architecture can effectively exploit rich mmWave radio measurements to achieve reliable, low‑altitude UAV airspace classification, paving the way for more secure and regulated UAV operations within emerging 5G and beyond‑5G ecosystems.