A tissue-informed deep learning-based method for positron range correction in preclinical 68Ga PET imaging

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Positron range (PR) limits spatial resolution and quantitative accuracy in PET imaging, particularly for high-energy positron-emitting radionuclides such as 68Ga. We propose a deep learning method based on 3D residual encoder-decoder convolutional neural networks (3D RED-CNNs) that incorporates tissue-dependent anatomical information through a µ-map-dependent loss function. Models were trained on realistic simulations and generate positron range-corrected images from the initial PET and CT data. We validated the models on simulations and real acquisitions. Three 3D RED-CNN architectures, Single-Channel, Two-Channel, and DualEncoder, were trained on simulated PET datasets and evaluated on synthetic and real PET acquisitions from 68Ga-FH and 68Ga-PSMA-617 mouse studies. Performance was compared to a standard Richardson-Lucy-based positron range correction (RL-PRC) method using mean absolute error (MAE), structural similarity index (SSIM), contrast recovery (CR), and contrast-to-noise ratio (CNR). CNN-based methods achieved up to 19 percent SSIM improvement and 13 percent MAE reduction compared to RL-PRC. The Two-Channel model achieved the highest CR and CNR, recovering lung activity with 97 percent agreement to ground truth versus 77 percent for RL-PRC. Noise levels remained stable for the CNN models (approximately 5.9 percent), while RL-PRC increased noise by 5.8 percent. In preclinical acquisitions, the Two-Channel model achieved the highest CNR across tissues while maintaining the lowest noise level (9.6 percent). Although no ground truth was available for the real data, tumor delineation improved and spillover artifacts were reduced with the Two-Channel model. These findings highlight the potential of CNN-based PRC to enhance quantitative PET imaging, particularly for 68Ga. Future work will improve model generalization through domain adaptation and hybrid training strategies.


💡 Research Summary

This paper addresses the longstanding problem of positron range (PR) blurring in PET imaging with high‑energy radionuclides, focusing on ^68Ga. Because ^68Ga emits positrons with a mean energy of 836 keV—substantially higher than the 249 keV of ^18F—the resulting PR can exceed 27 mm in low‑density tissues such as lung, degrading spatial resolution and quantitative accuracy. Conventional PR correction (PRC) techniques (Fourier deconvolution, PSF modeling within the system response matrix, tissue‑specific kernels combined with Richardson‑Lucy deconvolution) either ignore tissue heterogeneity, amplify noise, or are computationally intensive.
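As a point of reference, plain Richardson‑Lucy deconvolution can be sketched in a few lines. The 1D, pure‑Python version below is illustrative only: the RL‑PRC baseline discussed in the paper uses tissue‑specific 3D kernels that are not reproduced here, and the function names are ours.

```python
def convolve(signal, kernel):
    """'Same'-size cross-correlation with zero padding (kernel length assumed odd).
    For the symmetric kernels used here this equals convolution."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Iteratively sharpen `observed` given the blurring kernel `psf`:
    estimate <- estimate * (psf^T (*) (observed / (psf (*) estimate)))."""
    estimate = [1.0] * len(observed)      # flat, non-negative initial guess
    psf_flipped = psf[::-1]               # adjoint of the forward blur
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / (b + eps) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Blurring a point source with a narrow kernel and running enough iterations recovers a sharply peaked estimate, which is exactly the behavior (and the noise-amplification risk) that motivates the learned alternative.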

The authors propose a deep‑learning solution built on three‑dimensional residual encoder‑decoder convolutional neural networks (3D RED‑CNNs). They introduce two major innovations: (1) a tissue‑dependent loss function that combines voxel‑wise mean absolute error (MAE) with a global mutual information (MI) term linking the predicted ^68Ga image to the attenuation (µ‑map) derived from CT. This forces the network to respect anatomical density information during training. (2) Three architectural strategies for incorporating the µ‑map: (i) Single‑Channel (PET only, µ‑map used only in the loss), (ii) Two‑Channel (PET and µ‑map stacked as two input channels), and (iii) DualEncoder (separate encoder for µ‑map whose features are merged later). All models replace the original 2‑D layers of RED‑CNN with 3‑D convolutions to capture volumetric PR effects.
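The combined loss can be sketched as follows. This is a minimal, pure‑Python illustration assuming the MI term is subtracted from the voxel‑wise MAE (so that maximizing anatomical agreement lowers the loss) and estimated from a joint histogram; the paper's exact MI estimator, binning, and weight λ are not given in this summary, and all function names are ours.

```python
import math

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information between two equally sized voxel lists."""
    n = len(x)
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    lox, hix = min(x), max(x)
    loy, hiy = min(y), max(y)
    joint = [[0.0] * bins for _ in range(bins)]   # joint probability table
    for xi, yi in zip(x, y):
        joint[bin_index(xi, lox, hix)][bin_index(yi, loy, hiy)] += 1.0 / n
    px = [sum(row) for row in joint]              # marginal of x
    py = [sum(joint[i][j] for i in range(bins)) for j in range(bins)]  # marginal of y
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            p = joint[i][j]
            if p > 0.0:
                mi += p * math.log(p / (px[i] * py[j]))
    return mi

def tissue_informed_loss(pred, target, mu_map, lam=0.1):
    """Voxel-wise MAE to the ground truth, minus a weighted MI term that rewards
    anatomical agreement between the prediction and the µ-map."""
    mae = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    return mae - lam * mutual_information(pred, mu_map)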

Training data are generated from realistic digital phantoms (MOBY and Digimouse). Positron transport is simulated with an adapted PenEasy code that tracks each positron through heterogeneous tissues, using PenNuc for the energy spectrum. The resulting annihilation distributions are fed into MCGPU‑PET to produce sinograms for an Inveon PET/CT scanner, which are reconstructed with a PSF‑3D‑OSEM algorithm (60 iterations, 1 subset) to yield paired ^68Ga and ^18F images together with µ‑maps. Fifteen phantoms (each with PET and µ‑map) are made publicly available. Real preclinical data consist of mouse studies with ^68Ga‑FH and ^68Ga‑PSMA‑617.

Performance is evaluated using MAE, structural similarity index (SSIM), contrast recovery (CR), contrast‑to‑noise ratio (CNR), and noise increase relative to the baseline. In simulation, all CNN variants outperform the standard Richardson‑Lucy PRC (RL‑PRC): SSIM improves by up to 19 %, MAE drops by up to 13 %, and noise remains stable for the CNN models (~5.9 %) whereas RL‑PRC increases noise by ~5.8 %. The Two‑Channel model achieves the highest CR and CNR, recovering lung activity with 97 % agreement to ground truth (versus 77 % for RL‑PRC). In the real mouse scans, the Two‑Channel network again yields the best CNR at the lowest noise level (9.6 %) and visibly sharper tumor boundaries with reduced spill‑over artifacts, despite the absence of a quantitative ground truth.
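For concreteness, common formulations of the CR and CNR figures of merit can be sketched as below. The paper's exact ROI conventions may differ, and the function names are ours.

```python
import statistics

def contrast_recovery(img_roi, img_bkg, true_roi, true_bkg):
    """CR: measured ROI-to-background contrast relative to the ground-truth contrast
    (1.0 means the reconstruction recovers the true contrast exactly)."""
    measured = (statistics.mean(img_roi) - statistics.mean(img_bkg)) / statistics.mean(img_bkg)
    true = (statistics.mean(true_roi) - statistics.mean(true_bkg)) / statistics.mean(true_bkg)
    return measured / true

def contrast_to_noise_ratio(img_roi, img_bkg):
    """CNR: ROI/background mean difference in units of background standard deviation."""
    return (statistics.mean(img_roi) - statistics.mean(img_bkg)) / statistics.stdev(img_bkg)
```

These definitions make the trade-off reported above explicit: a deconvolution that boosts contrast but inflates background variance can gain CR while losing CNR, which is why noise stability is tracked alongside both metrics.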

The study acknowledges limitations: domain shift between simulated and real data, the need for systematic tuning of the loss‑weight λ, and the lack of validation on other high‑energy isotopes. Future work will explore domain‑adaptation techniques, hybrid training that mixes simulated and real scans, and extension to isotopes such as ^124I or ^89Zr.

In summary, the paper demonstrates that a 3D RED‑CNN equipped with a tissue‑aware loss and explicit µ‑map integration can correct positron range effects more effectively than conventional deconvolution methods. The Two‑Channel architecture, in particular, provides the most accurate and low‑noise reconstructions, offering a promising tool for preclinical and potentially clinical ^68Ga PET studies where quantitative fidelity and spatial resolution are critical.

