Division-based Receiver-agnostic RFF Identification in WiFi Systems
In physical-layer security schemes, radio frequency fingerprint (RFF) identification of WiFi devices is susceptible to receiver differences, which can significantly degrade classification performance when a model is trained on one receiver but tested on another. In this paper, we propose a division-based receiver-agnostic RFF extraction method for WiFi systems, which removes the receivers’ effects by dividing different preambles in the frequency domain. The proposed method requires only a single receiver for training and does not rely on additional calibration or stacking processes. First, for flat fading channel scenarios, the legacy short training field (L-STF) and legacy long training field (L-LTF) of the unknown device are divided by those of the reference device in the frequency domain. This eliminates the receiver-dependent effects while requiring only a single receiver for training, and allows higher-dimensional RFF features to be extracted. Second, for frequency-selective fading channel scenarios, the high-throughput long training field (HT-LTF) is divided by the L-LTF in the frequency domain. Again, only a single receiver is required for training, and higher-dimensional RFF features that are both channel-invariant and receiver-agnostic are extracted. Finally, simulation and experimental results demonstrate that the proposed method effectively mitigates the impacts of channel variations and receiver differences. The classification results show that, even when training on a single receiver and testing on a different one, the proposed method achieves classification accuracy improvements of 15.5% and 28.45% over the state-of-the-art approach in flat fading and frequency-selective fading channel scenarios, respectively.
💡 Research Summary
The paper addresses a critical obstacle in physical‑layer security: radio‑frequency fingerprint (RFF) identification of Wi‑Fi devices degrades sharply when the classifier is trained on data collected by one receiver but tested on another. Existing solutions either require signals from multiple receivers or a calibration step for each new receiver, both of which increase data‑collection effort and deployment latency. To overcome these limitations, the authors propose a division‑based, receiver‑agnostic RFF extraction method that operates entirely in the frequency domain and needs only a single receiver for training.
The core idea exploits the fact that the Wi‑Fi preamble consists of several known training fields that experience the same channel and receiver effects. By taking the ratio of the frequency‑domain spectra of two such fields, the common multiplicative factors (channel response and receiver impairments) cancel out, leaving only the transmitter‑specific hardware imperfections. Two scenarios are considered:
- Flat‑fading channels – The legacy short training field (L‑STF) and legacy long training field (L‑LTF) of the unknown device are divided by the corresponding fields of a reference device. This operation removes receiver‑dependent effects while preserving a high‑dimensional feature set (12 subcarriers from the L‑STF and 52 from the L‑LTF, yielding a 64‑dimensional complex vector). The method therefore extracts richer fingerprints than prior work that relied solely on the 12‑subcarrier short preamble.
- Frequency‑selective fading channels – In multipath environments the simple L‑STF/L‑LTF division is insufficient. The authors instead divide the high‑throughput long training field (HT‑LTF), which occupies 54 subcarriers, by the L‑LTF within the same frame. Because both fields traverse the same channel, the division inherently normalizes the channel frequency response, eliminating the need for an external reference device. The resulting features are both channel‑invariant and receiver‑agnostic.
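The cancellation behind both scenarios can be sketched numerically: model each received field's per‑subcarrier spectrum as a product of transmitter distortion, channel response, and receiver response, then divide two fields that share the same channel and receiver. All array names and distortion magnitudes below are illustrative assumptions, not the paper's exact signal model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc = 64  # number of subcarriers

# Hypothetical per-subcarrier frequency responses (illustrative values):
tx_field_a = rng.normal(1, 0.05, n_sc) + 1j * rng.normal(0, 0.05, n_sc)  # transmitter distortion, field A
tx_field_b = rng.normal(1, 0.05, n_sc) + 1j * rng.normal(0, 0.05, n_sc)  # transmitter distortion, field B
channel = rng.normal(1, 0.3, n_sc) + 1j * rng.normal(0, 0.3, n_sc)       # channel frequency response
receiver = rng.normal(1, 0.1, n_sc) + 1j * rng.normal(0, 0.1, n_sc)      # receiver impairments

# Both fields in one frame see the same channel and receiver response:
rx_a = tx_field_a * channel * receiver
rx_b = tx_field_b * channel * receiver

# Division cancels the common multiplicative factors, leaving only the
# transmitter-specific ratio -- the receiver-agnostic fingerprint:
rff = rx_a / rx_b
```

In the flat‑fading scenario, `rx_b` plays the role of the reference device's field; in the frequency‑selective scenario, it is the L‑LTF of the same frame.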
Signal preprocessing includes energy‑based detection, fine synchronization using the repeated L‑STF pattern, and carrier‑frequency‑offset (CFO) compensation derived from the phase of L‑STF. After FFT (64‑point) on the selected sample windows, the division is performed, and the complex ratios are split into real and imaginary parts to form a 2×64‑dimensional input vector.
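A minimal sketch of that pipeline is given below, using a toy periodic waveform rather than the real 802.11 L‑STF; the sample rate, repetition period, and injected CFO are illustrative assumptions. It shows the lag‑based CFO estimate from the repeated pattern, compensation, the 64‑point FFT, the division, and the real/imaginary stacking into a 2×64 input:

```python
import numpy as np

fs = 20e6    # baseband sample rate (Hz)
period = 16  # L-STF-like repetition period in samples at 20 MHz
n_fft = 64

rng = np.random.default_rng(1)
# Toy frame: a periodic "STF" segment followed by one 64-sample "LTF" symbol
stf = np.tile(rng.normal(size=period) + 1j * rng.normal(size=period), 10)
ltf = rng.normal(size=n_fft) + 1j * rng.normal(size=n_fft)
frame = np.concatenate([stf, ltf])

# Inject a known CFO so the estimator can be checked:
cfo_true = 1.2e3  # Hz (illustrative)
n = np.arange(len(frame))
rx = frame * np.exp(2j * np.pi * cfo_true * n / fs)

# CFO estimate from the phase of the lag-`period` autocorrelation over the STF:
corr = np.sum(rx[:len(stf) - period] * np.conj(rx[period:len(stf)]))
cfo_hat = -np.angle(corr) * fs / (2 * np.pi * period)

# Compensate, 64-point FFT of the LTF window, divide by a reference spectrum,
# then split the complex ratio into real and imaginary parts:
comp = rx * np.exp(-2j * np.pi * cfo_hat * n / fs)
spectrum = np.fft.fft(comp[len(stf):], n_fft)
ref = np.fft.fft(ltf, n_fft)      # stand-in for the other field's spectrum
ratio = spectrum / ref
features = np.stack([ratio.real, ratio.imag])  # 2 x 64 input vector
```

Because the STF repeats with period 16, the lag‑16 autocorrelation accumulates a phase of −2πf·16/fs, from which the frequency offset follows directly.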
For classification, the authors employ an InceptionTime deep‑learning architecture, which processes the multi‑scale temporal patterns of the extracted features. Experiments were conducted with commercial Wi‑Fi routers and smartphones, collecting 2,000 frames per device under both flat‑fading (line‑of‑sight) and frequency‑selective (indoor multipath) conditions. A single USRP N210 receiver was used for all data acquisition, demonstrating that no additional receivers are needed for training.
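The multi‑scale idea that InceptionTime applies can be illustrated with a toy module: several parallel 1‑D convolutions with different kernel lengths whose outputs are stacked. The moving‑average kernels and sizes below are stand‑ins for learned filters, not the actual architecture:

```python
import numpy as np

def multiscale_conv1d(x, kernel_sizes=(10, 20, 40)):
    """Toy Inception-style module: parallel 1-D convolutions at several scales."""
    outs = []
    for k in kernel_sizes:
        kern = np.ones(k) / k  # moving-average stand-in for a learned filter
        outs.append(np.convolve(x, kern, mode="same"))  # 'same' keeps the length
    return np.stack(outs)  # shape: (n_scales, len(x))

# Apply to one 128-sample feature row (e.g. the real part of the division ratio):
feat = multiscale_conv1d(np.random.default_rng(2).normal(size=128))
```

In the real network, each scale is a learned convolution and the stacked outputs feed further Inception blocks with residual connections.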
Results show that the proposed method achieves 98.47% accuracy in flat‑fading and 94.91% accuracy in frequency‑selective fading scenarios, even when the test receiver differs from the training receiver. Compared with the state‑of‑the‑art approach, the improvements are 15.5% and 28.45%, respectively. The method also outperforms CNN‑based, Random‑Forest, and earlier division‑based 12‑dimensional techniques, confirming that higher‑dimensional, receiver‑agnostic features significantly boost classification performance.
Key contributions are:
- A mathematically grounded division operation that cancels both channel and receiver effects without requiring multiple receivers or calibration.
- Extraction of high‑dimensional RFF features from standard Wi‑Fi preambles, leading to superior classification.
- Demonstration of practical feasibility through extensive real‑world experiments on off‑the‑shelf hardware.
Limitations include sensitivity to preamble corruption or synchronization errors, and potential degradation in high‑mobility scenarios where channel coherence may not hold across the two fields. Future work suggested by the authors involves adaptive division schemes that incorporate dynamic channel estimation, open‑set identification strategies, hardware acceleration (FPGA/ASIC), and extension to other wireless standards such as 5G NR and BLE.