PQTNet: Pixel-wise Quantitative Thermography Neural Network for Estimating Defect Depth in Polylactic Acid Parts by Additive Manufacturing


Defect depth quantification in additively manufactured (AM) components remains a significant challenge for non-destructive testing (NDT). This study proposes a Pixel-wise Quantitative Thermography Neural Network (PQT-Net) to address this challenge for polylactic acid (PLA) parts. A key innovation is a novel data augmentation strategy that reconstructs thermal sequence data into two-dimensional stripe images, preserving the complete temporal evolution of heat diffusion for each pixel. The PQT-Net architecture incorporates a pre-trained EfficientNetV2-S backbone and a custom Residual Regression Head (RRH) with learnable parameters to refine outputs. Comparative experiments demonstrate the superiority of PQT-Net over other deep learning models, achieving a minimum Mean Absolute Error (MAE) of 0.0094 mm and a coefficient of determination (R²) exceeding 99%. The high precision of PQT-Net underscores its potential for robust quantitative defect characterization in AM.


💡 Research Summary

The paper addresses the long‑standing challenge of quantitatively estimating internal defect depth in additively manufactured (AM) polymer parts using non‑destructive testing (NDT). The authors focus on polylactic acid (PLA), a widely used biodegradable filament, and propose a novel deep‑learning framework called Pixel‑wise Quantitative Thermography Neural Network (PQT‑Net).

Experimental setup – PLA specimens (90 × 90 × 5 mm) were printed with circular internal cavities of fixed radius (8 mm) and nine different depths ranging from 0.24 mm to 1.52 mm. Active pulsed thermography was performed using two 800 W halogen lamps (30 s pulse) while an infrared camera (640 × 480, 50 Hz, 35 mK sensitivity) recorded 1,000 frames over 220 s.

Data augmentation – The key innovation is a reconstruction step that converts each pixel’s temporal thermal response (1024 frames) into a two‑dimensional “stripe” image. Each column of the stripe corresponds to a time step, preserving the full evolution of heat diffusion for that spatial location. A logarithmic transformation and min‑max normalization are applied to enhance low‑contrast temperature differences. This approach yields a large pixel‑level dataset (1,773 samples, 197 per depth) while retaining the physics‑driven temporal information that traditional frame‑averaging methods discard.
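The paper's exact reconstruction code is not given in the summary; the following is a minimal sketch of one plausible interpretation, in which a pixel's temporal response becomes the columns of a stripe image after a log transform and min-max normalization. The stripe height and the `log1p` shift are assumptions for illustration.

```python
import numpy as np

def pixel_to_stripe(temps, height=32, eps=1e-8):
    """Reconstruct one pixel's temporal thermal response as a 2-D stripe image.

    Each column of the stripe corresponds to one time step, preserving the
    full heat-diffusion history of that spatial location. A logarithmic
    transform followed by min-max normalization enhances low-contrast
    temperature differences. `height` is a hypothetical parameter; the paper
    does not state the stripe dimensions used.
    """
    temps = np.asarray(temps, dtype=np.float64)
    # Log transform (signal shifted to be non-negative first).
    log_t = np.log1p(temps - temps.min())
    # Min-max normalization to [0, 1].
    norm = (log_t - log_t.min()) / (log_t.max() - log_t.min() + eps)
    # Tile the 1-D temporal signal down the rows: column j = time step j.
    return np.tile(norm, (height, 1))

# Example: a synthetic cooling curve sampled over 1024 frames.
curve = 300.0 + 5.0 * np.exp(-np.arange(1024) / 200.0)
stripe = pixel_to_stripe(curve)
```

Repeating this per pixel over the defect region yields the pixel-level dataset described above, with each stripe labeled by its known defect depth.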

Network architecture – PQT‑Net uses a pretrained EfficientNetV2‑S as the encoder backbone, chosen for its strong parameter efficiency and ability to capture fine‑grained features on small datasets. A squeeze‑and‑excitation (SE) block provides channel‑wise attention, emphasizing the most informative temporal channels. The extracted high‑level features are fed into a custom Residual Regression Head (RRH) composed of fully‑connected layers, ReLU activations, dropout, and residual connections that refine the depth prediction.
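The SE attention and residual regression head described above can be sketched as follows. This is not the authors' implementation: the tiny convolutional encoder stands in for the pretrained EfficientNetV2-S backbone, and the hidden width, dropout rate, and SE reduction ratio are assumed values.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise attention over encoder features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):                        # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze to (N, C)
        return x * w[:, :, None, None]           # excite: re-weight channels

class ResidualRegressionHead(nn.Module):
    """Fully connected layers with a residual connection refining the depth."""
    def __init__(self, in_dim, hidden=256, p_drop=0.2):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)
        self.block = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Dropout(p_drop), nn.Linear(hidden, hidden),
        )
        self.out = nn.Linear(hidden, 1)          # continuous depth prediction
    def forward(self, f):
        h = torch.relu(self.proj(f))
        h = h + self.block(h)                    # residual refinement
        return self.out(h)

class PQTNetSketch(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Stand-in encoder: the paper uses a pretrained EfficientNetV2-S
        # backbone in this position.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.se = SEBlock(channels)
        self.head = ResidualRegressionHead(channels)
    def forward(self, x):
        f = self.se(self.encoder(x)).mean(dim=(2, 3))  # global average pool
        return self.head(f)

model = PQTNetSketch()
depth = model(torch.randn(2, 3, 64, 64))  # two stripe images -> two depths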

Training details – The model is trained end‑to‑end with AdamW (lr = 1e‑3, weight decay = 1e‑4) and a ReduceLROnPlateau scheduler that halves the learning rate after five epochs without validation loss improvement. A hybrid loss (weighted sum of L1 and L2) encourages both low absolute error and smooth convergence. Images are resized to 512 × 512, normalized (mean = 0.5, std = 0.5), and split 70 %/15 %/15 % for training/validation/testing. Gradient clipping (max norm = 1.0) stabilizes training.
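A training step under the reported hyperparameters might look like the sketch below. The model and data here are toy stand-ins, and the L1/L2 weighting `alpha` is an assumption, since the summary does not state the exact loss weights.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; hyperparameters follow the reported values.
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
# Halve the learning rate after 5 epochs without validation-loss improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5)

l1, l2 = nn.L1Loss(), nn.MSELoss()
alpha = 0.5  # assumed L1/L2 weighting; exact weights not given in the summary

def train_step(x, y):
    optimizer.zero_grad()
    pred = model(x)
    loss = alpha * l1(pred, y) + (1 - alpha) * l2(pred, y)  # hybrid loss
    loss.backward()
    # Gradient clipping (max norm 1.0) stabilizes training.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()

x, y = torch.randn(8, 16), torch.randn(8, 1)
loss0 = train_step(x, y)
scheduler.step(loss0)  # called once per epoch with the validation loss
```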

Results – PQT‑Net is benchmarked against six state‑of‑the‑art models: ConvNeXt, Vision Transformer (ViT), EfficientNetV2‑Base, RegNet, ResNet, and a variational auto‑encoder (VAE). On the test set, PQT‑Net achieves RMSE = 2.08 × 10⁻², MAE = 9.40 µm, MAPE = 1.17 %, and R² = 0.997, outperforming all baselines (the next best R² = 0.996). Depth‑wise analysis shows that for deeper defects (≥ 0.88 mm) the model’s MAE drops to 3–22 µm, reflecting its superior ability to model the more complex temperature gradients in those regions. Shallow defects (≤ 0.56 mm) exhibit slightly higher errors, but the overall performance remains the best among the compared methods.
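The four metrics used in the benchmark follow their standard definitions, sketched below. The toy depth values are illustrative only, not the paper's data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%) and R² for a set of depth predictions."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # assumes y_true != 0
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, mape, r2

# Toy defect depths in mm (illustrative values only).
true = [0.24, 0.40, 0.56, 0.88, 1.20, 1.52]
pred = [0.25, 0.39, 0.57, 0.88, 1.19, 1.53]
rmse, mae, mape, r2 = regression_metrics(true, pred)
```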

Discussion – The superior performance stems from three synergistic factors: (1) the stripe‑image representation preserves the full temporal dynamics of heat diffusion, providing a direct physical link between temperature evolution and defect depth; (2) EfficientNetV2‑S combined with SE attention efficiently extracts subtle temperature gradients even in low‑contrast PLA data; (3) the RRH’s residual connections fine‑tune the regression output, enabling continuous depth prediction rather than discrete classification.

Limitations and future work – The study is limited to a single material (PLA) and a simple circular cavity geometry. Extending the dataset to other polymers (ABS, PETG), composite materials, and more complex multi‑defect scenarios will test the generalizability of PQT‑Net. Real‑time deployment also requires reducing the number of captured frames or designing a lightweight version of the network without sacrificing accuracy.

Conclusion – PQT‑Net introduces a powerful, physics‑aware deep‑learning pipeline for quantitative thermography. By converting temporal thermal sequences into pixel‑wise stripe images and leveraging a high‑capacity backbone with attention and residual regression, the method achieves sub‑0.01 mm depth resolution and R² > 0.99 on PLA specimens. This represents a significant step toward reliable, automated NDT of additively manufactured parts and provides a foundation for broader application across materials and defect types.

