Fine-Tuning Hybrid Physics-Informed Neural Networks for Vehicle Dynamics Model Estimation
Accurate dynamic modeling is critical for autonomous racing vehicles, especially during high-speed and agile maneuvers where precise motion prediction is essential for safety. Traditional parameter estimation methods face limitations such as reliance on initial guesses, labor-intensive fitting procedures, and complex testing setups. On the other hand, purely data-driven machine learning methods struggle to capture inherent physical constraints and typically require large datasets for optimal performance. To address these challenges, this paper introduces the Fine-Tuning Hybrid Dynamics (FTHD) method, which integrates supervised and unsupervised Physics-Informed Neural Networks (PINNs), combining physics-based modeling with data-driven techniques. FTHD fine-tunes a pre-trained Deep Dynamics Model (DDM) using a smaller training dataset, delivering superior performance compared to state-of-the-art methods such as the Deep Pacejka Model (DPM) and outperforming the original DDM. Furthermore, an Extended Kalman Filter (EKF) is embedded within FTHD (EKF-FTHD) to effectively manage noisy real-world data, ensuring accurate denoising while preserving the vehicle’s essential physical characteristics. The proposed FTHD framework is validated through scaled simulations using the BayesRace Physics-based Simulator and full-scale real-world experiments from the Indy Autonomous Challenge. Results demonstrate that the hybrid approach significantly improves parameter estimation accuracy, even with reduced data, and outperforms existing models. EKF-FTHD enhances robustness by denoising real-world data while maintaining physical insights, representing a notable advancement in vehicle dynamics modeling for high-speed autonomous racing.
💡 Research Summary
The paper addresses the critical need for accurate vehicle dynamics models in high‑speed autonomous racing, where precise motion prediction directly impacts safety and performance. Traditional parameter‑identification methods suffer from heavy reliance on good initial guesses, long fitting times, and the need for elaborate testing rigs, while purely data‑driven deep learning approaches require massive labeled datasets and often ignore the underlying physics, leading to physically implausible predictions. To bridge this gap, the authors propose a two‑stage hybrid framework called Fine‑Tuning Hybrid Dynamics (FTHD) and its extended version, EKF‑FTHD.
In the first stage, a Deep Dynamics Model (DDM) is pre‑trained on abundant simulated data generated by the BayesRace physics‑based simulator. The DDM captures the nonlinear relationships inherent in the single‑track (bicycle) vehicle model, including Pacejka tire forces, drivetrain coefficients, rolling resistance, aerodynamic drag, and vehicle inertia. In the second stage, the pre‑trained network is fine‑tuned on a much smaller real‑world dataset. The fine‑tuning is “hybrid” because it combines a supervised loss (mean‑square error between predicted and measured state variables) with an unsupervised physics‑informed loss that penalizes violations of the governing differential equations. Crucially, a subset of network layers is frozen to preserve the knowledge learned from simulation, while at least one hidden layer remains trainable to adapt to the new data. This strategy dramatically reduces the amount of labeled data required while maintaining physical consistency.
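The hybrid objective described above can be sketched as a weighted sum of a supervised data-fit term and an unsupervised physics-residual term. This is a minimal illustration, not the paper's actual implementation; the function name `hybrid_loss` and the weighting parameter `lam` are assumptions for the sketch.

```python
import numpy as np

def hybrid_loss(pred_states, meas_states, pred_derivs, physics_derivs, lam=1.0):
    """Hybrid PINN loss: supervised MSE on measured states plus an
    unsupervised penalty on violations of the governing ODEs.

    pred_states    : network-predicted state variables
    meas_states    : measured (labeled) state variables
    pred_derivs    : state derivatives implied by the network
    physics_derivs : derivatives computed from the single-track model
    lam            : weight balancing the physics-informed term (assumed)
    """
    supervised = np.mean((pred_states - meas_states) ** 2)
    physics = np.mean((pred_derivs - physics_derivs) ** 2)
    return supervised + lam * physics
```

In a deep-learning framework the freezing step would correspond to disabling gradients on the pre-trained layers (e.g. `param.requires_grad = False` in PyTorch) while leaving at least one hidden layer trainable, as the summary describes.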
Real‑world racing data, however, are corrupted by sensor noise, latency, and environmental disturbances. To mitigate these effects, the authors embed an Extended Kalman Filter (EKF) within the pipeline, creating EKF‑FTHD. The EKF processes the raw measurements, separating the underlying physical signal from stochastic noise and providing a denoised state estimate that serves as input to the PINN during fine‑tuning. By feeding cleaner data, the network converges faster, achieves lower validation loss, and yields more accurate estimates of the unknown parameters. The EKF also supplies covariance information that can be used for uncertainty quantification in downstream control tasks.
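A single predict/update cycle of the embedded EKF can be sketched as follows. This is a generic textbook EKF step, not the paper's specific filter: the process model `f`, measurement model `h`, their Jacobians `F` and `H`, and the noise covariances `Q` and `R` would all come from the vehicle model and sensor setup, which are assumptions here.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle for denoising a measurement z.

    x, P : prior state estimate and covariance
    f, F : process model and its Jacobian (linearized at x)
    h, H : measurement model and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state and covariance through the dynamics.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the noisy measurement.
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The filtered estimate `x_new` (rather than the raw measurement `z`) is what would feed the PINN during fine-tuning, and `P_new` carries the covariance information the summary mentions for downstream uncertainty quantification.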
The vehicle model parameters are divided into two groups: (i) known quantities such as vehicle mass, wheelbase distances, and moment of inertia, and (ii) unknown parameters including the full set of Pacejka coefficients (B, C, D, E, horizontal and vertical shifts) and drivetrain constants (Cm1, Cm2, Cr0, Cd). The unknowns are constrained within physically plausible bounds, and these bounds are enforced as penalty terms in the loss function to prevent unrealistic estimates.
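The Pacejka tire model and the bound-penalty idea can be illustrated with a short sketch. The simplified magic formula below (without the horizontal/vertical shift terms the full model includes) and the quadratic out-of-bounds penalty are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pacejka_lateral(alpha, B, C, D, E):
    """Simplified Pacejka magic formula: lateral force as a function of
    slip angle alpha, with stiffness B, shape C, peak D, curvature E.
    (Shift terms of the full model are omitted in this sketch.)"""
    Ba = B * alpha
    return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

def bound_penalty(params, lows, highs, weight=10.0):
    """Quadratic penalty added to the loss when an estimated parameter
    leaves its physically plausible interval [low, high]."""
    p = np.asarray(params, dtype=float)
    lo = np.asarray(lows, dtype=float)
    hi = np.asarray(highs, dtype=float)
    violation = np.maximum(lo - p, 0.0) + np.maximum(p - hi, 0.0)
    return weight * np.sum(violation ** 2)
```

Inside the bounds the penalty vanishes, so it only steers training away from unrealistic estimates of the Pacejka and drivetrain constants without biasing feasible ones.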
Experimental validation proceeds in two parts. First, scaled‑vehicle simulations in BayesRace demonstrate that FTHD outperforms the state‑of‑the‑art Deep Pacejka Model (DPM) and the original DDM. With only half the training data, FTHD reduces the mean absolute error of estimated parameters by more than 30 % and achieves comparable or better trajectory prediction. Second, full‑scale experiments from the Indy Autonomous Challenge are used to test EKF‑FTHD on noisy real sensor streams. After EKF preprocessing, the hybrid PINN converges to parameter estimates with an average error of 0.12 % and a loss reduction of 45 % relative to a Savitzky‑Golay filtered baseline. Moreover, EKF‑FTHD maintains stable training dynamics, avoiding the oscillations and divergence observed when training the original DDM on raw noisy data.
The authors conclude that (1) fine‑tuning a physics‑informed neural network is an effective way to leverage large simulated datasets while adapting to limited real data, (2) embedding an EKF provides a principled method for denoising and uncertainty handling, and (3) the combined FTHD/EKF‑FTHD framework delivers superior parameter estimation accuracy and robustness for high‑speed autonomous racing vehicles. The paper suggests future work on real‑time integration of EKF‑FTHD within model‑predictive control loops and extension to multi‑vehicle cooperative scenarios, indicating broad applicability beyond racing to any domain requiring accurate nonlinear dynamic models under data‑scarcity and measurement noise.