A Data-Driven Method for INS/DVL Alignment
Autonomous underwater vehicles (AUVs) are sophisticated robotic platforms crucial for a wide range of applications. The accuracy of AUV navigation systems is critical to their success. The fusion of inertial sensors with a Doppler velocity log (DVL) is a promising solution for long-range underwater navigation. However, the effectiveness of this fusion depends heavily on an accurate alignment between the inertial sensors and the DVL. While current alignment methods show promise, there remains significant room for improvement in terms of accuracy, convergence time, and alignment trajectory efficiency. In this research we propose an end-to-end deep learning framework for the alignment process. By leveraging deep-learning capabilities, such as noise reduction and the capture of nonlinearities in the data, we show, using simulated data, that our proposed approach both improves alignment accuracy and reduces convergence time compared with current model-based methods.
💡 Research Summary
The paper addresses the critical problem of aligning the inertial navigation system (INS) and Doppler velocity log (DVL) frames on autonomous underwater vehicles (AUVs). Accurate alignment is essential because even small misalignments can cause large navigation errors over time, especially when GNSS signals are unavailable underwater. Traditional alignment techniques rely on prescribed motion patterns, external acoustic beacons, or singular‑value decomposition (SVD) applied to integrated acceleration data. These methods often require specific trajectories and long convergence periods, and they are sensitive to sensor noise, limiting their practicality for rapid pre‑mission preparation and in‑mission re‑calibration.
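The SVD‑based baseline mentioned above can be illustrated with a short NumPy sketch that solves the classic least‑squares rotation problem (Wahba/Kabsch) between matched vector pairs. The velocity data, noise level, and 5° yaw misalignment below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def svd_alignment(v_body, v_dvl):
    """Least-squares rotation R with v_dvl ~ R @ v_body (Wahba/Kabsch)."""
    # v_body, v_dvl: (N, 3) arrays of matched vectors in the two frames
    M = v_body.T @ v_dvl                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

rng = np.random.default_rng(1)
# Ground-truth misalignment: 5 degrees of yaw (rotation about z)
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
v_body = rng.standard_normal((200, 3))                        # body-frame samples
v_dvl = v_body @ R_true.T + 0.01 * rng.standard_normal((200, 3))  # rotated + noise
R_est = svd_alignment(v_body, v_dvl)
```

With clean data the recovery is exact; with noise the estimate degrades, which is one of the sensitivities the paper's learned approach aims to mitigate.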
To overcome these limitations, the authors propose a data‑driven approach named AlignNet. AlignNet is a one‑dimensional convolutional neural network (1‑D CNN) that takes synchronized measurements from the IMU (accelerometer and gyroscope) and velocity measurements from the DVL as a six‑dimensional time‑series input. The network architecture consists of three convolutional blocks with 64, 128, and 256 filters respectively, each followed by ReLU activation and batch normalization. After the convolutional backbone, a global average pooling layer collapses the temporal dimension, and two fully connected layers (the first with 512 neurons) produce three outputs corresponding to the roll, pitch, and yaw Euler angles that define the rotation matrix $R_{db}$ between the body frame and the DVL frame. The model is trained with a mean‑squared‑error (MSE) loss on the three angles, the Adam optimizer (initial learning rate $10^{-7}$), a batch size of 32, early stopping with a patience of 15 epochs, and learning‑rate decay by a factor of 0.5 when the validation loss plateaus.
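The layer structure described above can be sketched as a plain NumPy forward pass to make the tensor shapes concrete. The kernel size, weight values, and inference‑style normalization are illustrative assumptions; the summary does not specify them:

```python
import numpy as np

def conv1d(x, w, b):
    # x: (C_in, T), w: (C_out, C_in, k), b: (C_out,) -> (C_out, T-k+1)
    c_out, c_in, k = w.shape
    T = x.shape[1] - k + 1
    out = np.empty((c_out, T))
    for t in range(T):
        patch = x[:, t:t + k]
        out[:, t] = np.tensordot(w, patch, axes=([1, 2], [0, 1])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def batchnorm(x, eps=1e-5):
    # Per-channel normalization over time (inference-style, unit gain)
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def alignnet_forward(x, params):
    # x: (6, T) window of synchronized IMU + DVL channels
    for w, b in params["conv"]:
        x = batchnorm(relu(conv1d(x, w, b)))  # three conv blocks: 64/128/256
    x = x.mean(axis=1)                        # global average pooling -> (256,)
    w1, b1, w2, b2 = params["fc"]
    h = relu(w1 @ x + b1)                     # fully connected, 512 neurons
    return w2 @ h + b2                        # (3,): roll, pitch, yaw

rng = np.random.default_rng(0)
k = 5  # kernel size: an assumption, not stated in the summary
chans = [6, 64, 128, 256]
params = {
    "conv": [(0.01 * rng.standard_normal((co, ci, k)), np.zeros(co))
             for ci, co in zip(chans[:-1], chans[1:])],
    "fc": (0.01 * rng.standard_normal((512, 256)), np.zeros(512),
           0.01 * rng.standard_normal((3, 512)), np.zeros(3)),
}
angles = alignnet_forward(rng.standard_normal((6, 100)), params)  # (3,)
```

A trained version would be fit with the MSE loss and Adam schedule described above; this sketch only demonstrates how a six‑channel window is reduced to the three Euler angles.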
For evaluation, a comprehensive MATLAB‑Simulink simulation environment is built. The simulator includes a 6‑DOF AUV dynamics model, realistic hydrodynamic forces, and detailed error models for both sensors (scale factors, biases, Gaussian white noise). The DVL error model follows the standard formulation $\tilde{y} = y \cdot (1 + s) + b + n$, where $y$ is the true velocity, $s$ the scale‑factor error, $b$ the bias, and $n$ zero‑mean Gaussian white noise.
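As a concrete instance of that error model, the sketch below corrupts a true velocity vector with a scale factor, bias, and Gaussian noise. All parameter values are hypothetical; the summary does not give the paper's numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
s = 0.002        # scale-factor error (0.2 %) -- hypothetical value
b = 0.05         # additive bias [m/s]       -- hypothetical value
sigma = 0.02     # Gaussian white-noise std [m/s] -- hypothetical value

y_true = np.array([1.5, -0.3, 0.1])          # true velocity [m/s]
n = rng.normal(0.0, sigma, size=y_true.shape)
y_meas = y_true * (1.0 + s) + b + n          # \tilde{y} = y(1+s) + b + n
```

Generating many such corrupted measurements along simulated trajectories is what lets the simulator produce labeled training and test data for AlignNet.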