Real-Time Prediction of Lower Limb Joint Kinematics, Kinetics, and Ground Reaction Force using Wearable Sensors and Machine Learning

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Walking is a key movement of interest in biomechanics, yet gold-standard data collection methods are time- and cost-intensive. This paper presents a real-time, multimodal, high-sample-rate lower-limb motion capture framework based on wireless wearable sensors and machine learning algorithms. Random Forests are used to estimate joint angles from IMU data, ground reaction force (GRF) is predicted from instrumented insoles, and joint moments are predicted from angles and GRF using a deep learning model based on the ResNet-16 architecture. All three models achieve accuracy comparable to the literature, and predictions are logged at 1 kHz with a latency of 23 ms for 20 s of input data. The present work relies entirely on wearable sensors, covers all five major lower-limb joints, and provides comprehensive multimodal estimates of GRF, joint angles, and moments with latency low enough for biofeedback applications.


💡 Research Summary

This paper presents a fully wearable, real‑time framework for estimating lower‑limb joint kinematics, kinetics, and vertical ground reaction force (vGRF) during walking. The system relies exclusively on wireless inertial measurement units (IMUs) placed on the shanks and feet and on instrumented insoles equipped with force‑sensitive resistor (FSR) based force‑myography (FMG) sensors. Data from the IMUs are streamed at 25 Hz, up‑sampled to 1 kHz, filtered, and combined with gait‑cycle percentage (GC%) and a leading‑foot flag to form the feature set.
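The 25 Hz-to-1 kHz up-sampling and filtering step can be sketched as follows. This is an illustrative reconstruction: the summary does not specify the interpolation method, filter design, or cutoff frequency, so the Butterworth low-pass and linear interpolation below are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import butter, filtfilt

def upsample_and_filter(samples, fs_in=25, fs_out=1000, cutoff_hz=10.0):
    """Up-sample one sensor channel from fs_in to fs_out and low-pass filter it.

    The filter order and cutoff are illustrative; the paper's exact
    filter design is not given in this summary.
    """
    t_in = np.arange(len(samples)) / fs_in
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
    # Interpolate onto the 1 kHz time grid (cubic splines would also work).
    upsampled = interp1d(t_in, samples, kind="linear")(t_out)
    # Zero-phase 4th-order Butterworth low-pass to smooth interpolation steps.
    b, a = butter(4, cutoff_hz / (fs_out / 2), btype="low")
    return filtfilt(b, a, upsampled)

# Example: a noisy 2-second, 25 Hz accelerometer trace brought up to 1 kHz.
rng = np.random.default_rng(0)
raw = np.sin(2 * np.pi * 1.0 * np.arange(50) / 25) + 0.05 * rng.standard_normal(50)
smooth = upsample_and_filter(raw)
```

The up-sampled channels would then be concatenated with the gait-cycle percentage and leading-foot flag to form the per-sample feature vector.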

Joint angles and vGRF are predicted using Random Forest (RF) regressors. The angle models ingest nine channels per IMU (accelerometer, gyroscope, magnetometer) plus GC% and the foot‑flag; the vGRF model uses the eight FSR channels and GC%. The angle model uses 200 trees and the GRF model 100 trees, and both are evaluated under two validation schemes: intra‑subject k‑fold (k = 5 for angles, k = 4 for moments) and inter‑subject leave‑one‑subject‑out cross‑validation (LOSOCV). The RF approach was chosen for its robustness to small datasets, low latency, and interpretability.
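A minimal sketch of the angle-model setup, using scikit-learn's RandomForestRegressor with the tree counts and intra-subject 5-fold scheme described above. The feature shapes and the target are synthetic stand-ins (the real inputs are the bilateral IMU channels, GC%, and foot-flag); only the model configuration mirrors the summary.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins for the feature set: 9 channels x 2 IMUs,
# gait-cycle percentage, and a leading-foot flag.
n = 500
imu = rng.standard_normal((n, 18))
gc_pct = rng.uniform(0.0, 100.0, (n, 1))
foot_flag = rng.integers(0, 2, (n, 1)).astype(float)
X = np.hstack([imu, gc_pct, foot_flag])
y = rng.standard_normal(n)  # stand-in for one joint angle trace

# 200 trees for angles (the vGRF model uses 100), intra-subject 5-fold CV.
nrmse_per_fold = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmse = np.sqrt(np.mean((pred - y[test_idx]) ** 2))
    # NRMSE: RMSE normalized by the range of the reference signal.
    nrmse_per_fold.append(rmse / (y[test_idx].max() - y[test_idx].min()))
```

For the inter-subject LOSOCV scheme, the fold split would instead group all samples of one subject into the held-out set.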

Joint moments are estimated with a deep learning model based on a 1‑D ResNet‑16 architecture. Input to the moment network consists of the predicted joint angles, predicted vGRF, GC%, and the foot‑flag, arranged in 10 ms sliding windows. The network comprises an initial 1‑D convolutional block, four residual blocks, global average pooling, and a dense output layer with two neurons (angle‑derived moment and vGRF‑derived moment). Training employs the Adam optimizer, mean‑squared‑error loss, 500 epochs, early stopping (patience = 10), and standardization of inputs.
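The moment network's structure (initial convolutional block, four residual blocks, global average pooling, 2-neuron dense head) can be sketched in PyTorch. Channel widths, kernel sizes, and the exact input channel count are assumptions, since the summary does not list the ResNet-16 hyperparameters.

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """One 1-D residual block: two conv layers plus an identity skip connection."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection

class MomentNet(nn.Module):
    """Sketch of the moment estimator: stem conv block, four residual blocks,
    global average pooling, and a dense output layer with two neurons."""
    def __init__(self, in_channels, width=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(in_channels, width, kernel_size=3, padding=1), nn.ReLU()
        )
        self.blocks = nn.Sequential(*[ResidualBlock1D(width) for _ in range(4)])
        self.head = nn.Linear(width, 2)  # two output moments

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = x.mean(dim=-1)  # global average pooling over the time axis
        return self.head(x)

# Inputs: predicted joint angles + vGRF + GC% + foot-flag in 10 ms windows.
# At 1 kHz, a 10 ms window is 10 samples; the channel count here is illustrative.
net = MomentNet(in_channels=13)
window = torch.randn(8, 13, 10)  # (batch, channels, samples)
moments = net(window)            # shape: (8, 2)
```

Training as described would wrap this model in an Adam optimizer with MSE loss, up to 500 epochs, early stopping with patience 10, and standardized inputs.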

Performance metrics show that the vGRF RF model attains a normalized root‑mean‑square error (NRMSE) of 5.09 % ± 0.61 % (intra‑subject) and 8.36 % ± 0.91 % (inter‑subject). Angle prediction errors range from 3 % to 5 % NRMSE, with the best configuration (model W6) using bilateral IMU data and GRF as an additional feature. Moment estimation yields intra‑subject NRMSEs between 1.63 % (hip rotation) and 2.96 % (hip adduction) with Pearson r > 0.97, while inter‑subject NRMSEs rise to 6.59 %–10.50 % and r ≈ 0.90–0.98, reflecting the increased difficulty of cross‑subject generalization.
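The two reported metrics, NRMSE and Pearson's r, can be computed as below. Normalizing RMSE by the range of the reference signal is one common convention; the paper's exact normalization is not stated in this summary, so treat that choice as an assumption.

```python
import numpy as np

def nrmse_percent(y_true, y_pred):
    """RMSE normalized by the range of the reference signal, in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return 100.0 * rmse / (np.max(y_true) - np.min(y_true))

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between reference and prediction."""
    return np.corrcoef(y_true, y_pred)[0, 1]

# Example: an idealized periodic gait variable with small prediction noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
reference = np.sin(2 * np.pi * t)
prediction = reference + 0.02 * rng.standard_normal(500)
```

With this kind of low-noise prediction, NRMSE lands around 1 % and r close to 1, the regime reported for the intra-subject moment models.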

A real‑time implementation runs on a 13th‑generation Intel® Core™ i7‑13700 desktop. The pipeline processes a 20‑second data window sequentially, with cumulative timings of 5 ms after vGRF prediction, 14 ms after angle prediction, and 23 ms after moment prediction, giving a total end-to-end latency of roughly 23 ms from data acquisition to output. Real‑time predictions closely follow the offline average curves; Pearson correlation coefficients between real‑time and offline profiles exceed 0.90 for all variables, confirming the system’s suitability for biofeedback or assistive‑device control.
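The sequential three-stage pipeline with cumulative timing can be sketched as follows. The stage functions are placeholder stand-ins for the trained models, and the measured times depend entirely on hardware and model size, so no specific latencies are asserted here.

```python
import time

# Placeholder stand-ins for the three trained models.
def predict_vgrf(window):
    return window

def predict_angles(window):
    return window

def predict_moments(angles_and_grf):
    return angles_and_grf

def run_pipeline(window):
    """Run the three stages in sequence, recording cumulative latency after
    each stage (mirroring the paper's 5 / 14 / 23 ms breakdown)."""
    timings = {}
    t0 = time.perf_counter()
    grf = predict_vgrf(window)
    timings["vGRF"] = time.perf_counter() - t0
    angles = predict_angles(window)
    timings["angles"] = time.perf_counter() - t0
    moments = predict_moments((angles, grf))
    timings["moments"] = time.perf_counter() - t0  # total end-to-end latency
    return moments, timings

# 20 s of 1 kHz data as a flat placeholder buffer.
_, timings = run_pipeline([0.0] * 20_000)
```

Because the moment model consumes the outputs of the other two stages, the stage order is fixed and the final cumulative timestamp is the end-to-end latency.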

The study’s contributions are threefold: (1) a completely wearable solution that eliminates the need for optical motion capture and force plates, (2) a hybrid machine‑learning architecture that balances accuracy, speed, and interpretability (RF for angles/GRF, ResNet‑16 for moments), and (3) demonstration of sub‑25 ms end‑to‑end latency, enabling real‑time applications such as gait rehabilitation, exoskeleton control, or sports performance monitoring.

Limitations include the relatively low native IMU sampling rate (25 Hz), which may miss high‑frequency dynamics in faster gait, and the focus on vertical GRF only, omitting anterior‑posterior and mediolateral components. The participant pool consists of eight healthy young adults, so generalization to clinical populations, older adults, or individuals with gait pathologies remains to be validated. Additionally, the current implementation runs on a desktop PC; porting to embedded or mobile platforms will be necessary for truly portable use.

Future work should explore higher‑rate IMUs (≥100 Hz) to capture richer inertial signals, integrate multi‑axis GRF estimation, incorporate electromyography (EMG) for muscle‑force inference, and test the framework on diverse subject groups. Optimizing sensor placement and reducing the number of required IMUs without sacrificing accuracy could further improve wearability. Finally, embedding the pipeline into low‑power microcontrollers or edge‑AI devices would enable on‑body, battery‑operated real‑time gait analysis for clinical and consumer applications.

