Ultra-Fast Device-Free Visible Light Sensing and Localization via Reflection-Based ΔRSS and Deep Learning


We propose an ultra-fast, device-free visible light sensing and positioning system that captures spatiotemporal variations in single-LED VLC channel responses at ceiling-mounted photodetectors to infer human presence and position accurately and non-intrusively through optical reflection modeling. The system is highly adaptive and can serve a range of real-world sensing and positioning scenarios using one or more ML-based models drawn from a library of multi-architecture deep neural network ensembles.


💡 Research Summary

The paper presents an ultra‑fast, device‑free visible light sensing and positioning (VLS/VLP) system that leverages a single ceiling‑mounted LED and an array of nine photodiodes (PDs) to infer human presence and 2‑D location with centimeter‑level accuracy. The authors model the indoor VLC channel as a linear baseband system, incorporating up to three orders of diffuse reflections from walls, floor, and ceiling. By treating the empty‑room RSS measurements as a reference, they compute ΔRSS—the difference between reference and occupied‑room RSS—for each PD, forming a nine‑dimensional feature vector that uniquely encodes the target’s position.
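The ΔRSS feature described above can be sketched in a few lines. All numeric values and variable names below are invented for illustration; only the structure (a 9-dimensional difference vector arranged to match the 3 × 3 PD layout) follows the paper:

```python
import numpy as np

# Hypothetical RSS readings (arbitrary units) for the 9 ceiling-mounted PDs;
# these values are made up for the sketch, not measurements from the paper.
rss_empty = np.array([4.1, 4.3, 4.0,
                      4.2, 4.5, 4.2,
                      4.0, 4.3, 4.1])     # reference: empty room
rss_occupied = np.array([4.1, 4.2, 4.0,
                         4.0, 3.9, 4.1,
                         4.0, 4.2, 4.1])  # room with a person present

# DeltaRSS = reference minus occupied. Reflections blocked or absorbed by
# the target show up as larger positive entries at nearby PDs.
delta_rss = rss_empty - rss_occupied  # 9-dimensional feature vector
feature = delta_rss.reshape(3, 3)     # 3x3 layout matching the PD array
```

Reshaping to 3 × 3 is what lets the CNN branch of the ensemble treat the feature as a tiny spatial image rather than a flat vector.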

A dense grid (49 × 49 points, 0.1 m spacing) covering a 5 × 5 × 3 m³ room is used to generate training data: at each grid point a human subject is placed, ΔRSS values are recorded, and the corresponding (x, y) coordinates are stored. The dataset is split into 60 % training, 20 % validation, and 20 % testing, with spatially balanced partitions achieved via K‑means clustering and 3‑fold cross‑validation to avoid over‑optimistic interpolation between neighboring points.
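A spatially balanced split can be sketched as follows: cluster the grid coordinates, then divide each cluster 60/20/20 so every partition covers the whole room. The cluster count, seed, and the minimal K-means loop are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# 49 x 49 grid at 0.1 m spacing, as in the paper's data collection.
xs, ys = np.meshgrid(np.arange(49) * 0.1, np.arange(49) * 0.1)
coords = np.column_stack([xs.ravel(), ys.ravel()])  # 2401 grid points

def kmeans(points, k=9, iters=20):
    """Minimal K-means on 2-D coordinates (illustrative, not sklearn)."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

labels = kmeans(coords)

# Split each spatial cluster 60/20/20 so train/val/test all span the room.
train_idx, val_idx, test_idx = [], [], []
for j in range(9):
    idx = rng.permutation(np.flatnonzero(labels == j))
    n_tr, n_va = int(0.6 * len(idx)), int(0.2 * len(idx))
    train_idx += idx[:n_tr].tolist()
    val_idx += idx[n_tr:n_tr + n_va].tolist()
    test_idx += idx[n_tr + n_va:].tolist()
```

Splitting per cluster, rather than globally at random, is what prevents the over-optimistic interpolation the authors warn about: held-out points are not surrounded on all sides by training neighbors from the same dense patch.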

The core inference engine is an ensemble of deep neural networks: three Multi‑Layer Perceptrons (MLP), three Convolutional Neural Networks (CNN), and three U‑Net models, each trained independently with identical hyper‑parameters (Adam optimizer, 0.001 learning rate, MAE loss, batch size 32, 500 epochs). The MLPs capture global nonlinear mappings, the CNNs exploit the 3 × 3 spatial layout of the PD array, and the U‑Nets reconstruct high‑resolution RSS fingerprints through encoder‑decoder pathways with skip connections. Model outputs are combined using performance‑weighted averaging based on validation mean positioning error (MPE).
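The performance-weighted fusion step can be sketched directly: each model's (x, y) prediction is weighted by the inverse of its validation MPE, so more accurate models dominate the ensemble output. The MPE values and predictions below are invented for illustration:

```python
import numpy as np

# Hypothetical validation mean positioning errors (metres), one per model.
val_mpe = np.array([0.095, 0.102, 0.110])

# Hypothetical (x, y) predictions from the three models for one test sample.
preds = np.array([[1.20, 2.10],
                  [1.25, 2.05],
                  [1.18, 2.12]])

# Performance-weighted averaging: weight = 1 / validation MPE, normalised.
weights = 1.0 / val_mpe
weights /= weights.sum()      # weights now sum to 1
fused = weights @ preds       # ensemble (x, y) estimate
```

Inverse-error weighting is one common choice for "performance-weighted averaging"; the paper's exact weighting function may differ, but the principle (validation MPE determines each model's influence) is the same.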

Experimental results, obtained with the OWCsim‑Py VLC simulator and TensorFlow on a 4‑core Xeon workstation, show that the lightweight MLP‑only ensemble (≈51 k parameters) achieves the fastest training time (≈10 % of other ensembles) while maintaining an average positioning error of 9.0–9.5 cm for a 25‑step random walk and 10.2–11.2 cm for 100 random static positions. Adding CNN and U‑Net components increases parameter counts to 217 k and 477 k respectively and yields modest gains in specific room regions, but inference latency remains below 0.5 ms for all ensembles, making real‑time deployment on edge devices such as Raspberry Pi feasible.

The authors argue that the ΔRSS‑based, deep‑learning ensemble approach overcomes limitations of traditional fingerprinting (e.g., K‑Nearest Neighbor) and tree‑based regressors (e.g., ExtraTrees), which suffer from high memory footprints, linear inference scaling, and inability to capture spatial correlations inherent in multi‑sensor VLC data. By integrating physical reflection modeling with data‑driven learning, the system delivers a practical, low‑cost solution for indoor human sensing, aligning with emerging 6G ISAC concepts and “Sensing as a Service” use cases. Potential applications include smart building automation, security monitoring, and energy‑efficient lighting control, all without requiring users to carry any device.

