FROM GLOBAL RADIOMICS TO PARAMETRIC MAPS: A UNIFIED WORKFLOW FUSING RADIOMICS AND DEEP LEARNING FOR PDAC DETECTION

Zengtian Deng⋆†, Yimeng He⋆, Yu Shi⋆†, Lixia Wang⋆, Touseef Ahmad Qureshi⋆, Xiuzhen Huang⋆, Debiao Li⋆†

⋆ Cedars-Sinai Medical Center, Los Angeles, CA, USA
† University of California, Los Angeles, Los Angeles, CA, USA

Equal contribution (co-first): Zengtian Deng; Yimeng He. Corresponding author: Debiao Li.

© This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

ABSTRACT

Radiomics and deep learning both offer powerful tools for quantitative medical imaging, but most existing fusion approaches only leverage global radiomic features and overlook the complementary value of spatially resolved radiomic parametric maps. We propose a unified framework that first selects discriminative radiomic features and then injects them into a radiomics-enhanced nnUNet at both the global and voxel levels for pancreatic ductal adenocarcinoma (PDAC) detection. On the PANORAMA dataset, our method achieved AUC = 0.96 and AP = 0.84 in cross-validation. On an external in-house cohort, it achieved AUC = 0.95 and AP = 0.78, outperforming the baseline nnUNet; it also ranked second in the PANORAMA Grand Challenge. This demonstrates that handcrafted radiomics, when injected at both global and voxel levels, provide complementary signals to deep learning models for PDAC detection. Our code can be found at https://github.com/briandzt/dl-pdac-radiomics-global-n-paramaps.

Index Terms — radiomics, deep learning, parametric maps, nnUNet, pancreatic ductal adenocarcinoma (PDAC), contrast-enhanced CT, feature fusion

1. INTRODUCTION

Radiomics provides a quantitative framework for medical image analysis using handcrafted intensity, texture, and shape descriptors with strong interpretability and reproducibility [1, 2, 3, 4]. Although deep learning often achieves superior end-to-end performance, purely data-driven models can be less interpretable and vulnerable to scanner- and site-specific domain shift [2, 5]. Accordingly, recent work has explored radiomics–deep learning fusion to combine handcrafted priors with learned representations [6]. However, most methods rely on global radiomic vectors and underutilize spatially resolved radiomics parametric maps [7, 8]. Existing voxel-wise approaches either compute exhaustive maps for many descriptors (at high computational cost) or compress maps via singular-value decomposition (SVD), which can reduce feature-level interpretability [9, 10].

In this work, we present a novel application of radiomics features in a deep learning framework for pancreatic ductal adenocarcinoma (PDAC) detection that jointly leverages global and voxel-level radiomic features. Our contributions are:

• Unified radiomics–DL workflow. We first identify discriminative radiomic features through global analysis, then inject the selected features into deep learning as both case-level vectors and voxel-wise parametric maps, bridging global biomarkers with spatial cues for PDAC detection.

• Radiomics-enhanced nnUNet. We augment nnUNet by concatenating selected parametric maps with the CT input and fusing global radiomics at the bottleneck via radiomics-aware cross-attention to improve lesion sensitivity and robustness.

• CUDA-accelerated parametric-map extraction. We implement a GPU-based voxel-level radiomics extractor using PyTorchRadiomics [11], substantially reducing per-feature extraction time and enabling large-scale parametric-map generation.

2. RELATED WORK

2.1.
Radiomics-Enhanced Deep Learning

Radiomics provides a quantitative bridge between medical imaging and phenotype characterization [1, 2, 3, 4]. Seminal work by Aerts et al. (2014) and Lambin et al. (2017) established that handcrafted radiomic descriptors capture tumor heterogeneity and predict clinically relevant outcomes [1, 2].

Recent studies have explored integrating additional information, including radiomics, into deep learning (DL). Li et al. (2024) categorized information fusion in classification into input, output, and intermediate paradigms, and observed that most works follow single-layer intermediate fusion, which fuses information after deep learning feature extraction but before the final decision layer [12]. Within the scope of radiomics, both intermediate fusion and output fusion have improved diagnosis and staging performance across modalities and cohorts [13, 14, 15]. However, existing methods still treat radiomics as a global descriptor and typically fuse it at the output, with limited mechanisms to (i) link global biomarkers to spatially resolved cues and (ii) pinpoint the radiomic features critical for the problem.

Our approach addresses these limitations by using global radiomics as a selection and supervision signal: discriminative global radiomic biomarkers are first identified, and only this compact subset is then integrated into the network, through both a global radiomics embedding and regional voxel feature maps used as additional input channels. This dual-level, selection-linked design enables radiomics to function as plug-in "feature adapters" for standard segmentation backbones, rather than a one-off, single-level fusion of global descriptors.

2.2. Application of Radiomics Parametric Maps

Radiomics parametric maps (voxel-wise radiomics) have emerged as a way to spatially resolve handcrafted descriptors and interpret them as images [7, 8, 16]. Kim et al. (2021) introduced accessible tooling for generating feature-specific maps, enabling radiomic features to be visualized and quantified locally [7]. Jensen et al. (2023) showed that map-first computation can improve robustness to varying region-of-interest (ROI) definitions, and Lin et al. (2024) extended voxel-level radiomics to 3D texture similarity networks to capture subject-level phenotypes from spatial feature distributions [8, 16].

Despite these advances, existing parametric-map studies predominantly use the maps for visualization or post hoc aggregation; the maps rarely serve as structured inputs to deep neural networks. Moreover, when maps are used, prior work typically lacks an explicit link to conventional global radiomics, leaving unclear which feature maps are most relevant and how they relate to learned global representations.

In contrast, we explicitly bridge global and voxel-wise radiomics: global radiomics serves as a guiding cue to pinpoint a compact subset of discriminative features, and only these selected features are materialized as parametric maps to provide prior information. Together with global radiomics embeddings fused via latent cross-attention, this yields a computationally feasible and interpretable mechanism for incorporating radiomics into segmentation backbones, aligning global biomarkers with their spatial realizations for lesion-aware detection.

2.3. Deep Learning Based Pancreatic Cancer Detection

Deep learning has rapidly advanced computer-aided pancreatic cancer detection and segmentation. Large-scale efforts such as the PANDA study (Cao et al., 2023) and nationwide validation by Chen et al. (2023) established high-performance CT-based models for PDAC screening, though most remain purely deep learning driven [17, 18]. nnUNet-based pipelines have become the de facto standard for pancreas and lesion segmentation, with subsequent work improving data scale and evaluation consistency [19].
However, these image-only systems still face challenges in lesion sensitivity and cross-scanner variability.

Our study addresses these gaps by injecting global radiomics analysis through radiomics-aware attention, and voxel-level parametric maps as additional channels, into the UNet backbone, enhancing PDAC lesion detection accuracy and robustness.

Fig. 1. (Step 1) Whole-pancreas radiomics are extracted for PDAC classification, and the most informative features are selected. (Step 2) A coarse-to-fine two-stage network: Stage I predicts a rough mask to define the ROI and compute global/local radiomics; Stage II concatenates radiomics parametric maps with the CT and fuses global radiomics via latent multi-head cross-attention to produce the final PDAC segmentation/detection.

3. METHOD

We propose a radiomics-aware PDAC detection pipeline that combines offline feature discovery with a two-stage detector. First, we perform radiomics analysis over the whole pancreas to select features discriminative for case-level PDAC detection (§3.2). Then, we train a two-stage nnUNet: Stage 1 localizes the pancreas on low-resolution CT, and Stage 2 operates on the cropped high-resolution ROI for fine detection. In addition, the Stage 2 network fuses the selected radiomics in two forms: parametric maps via channel-wise concatenation at the input, and the global radiomics vector via cross-attention at the bottleneck (§3.3).
This yields a unified model that exploits both voxel-level radiomics cues and case-level global descriptors for PDAC detection.

3.1. Data Preparation

We used the training dataset from the PANORAMA Challenge, which contains 2,238 venous-phase contrast-enhanced CT scans (1,562 non-PDAC and 676 PDAC). In addition, we curated an internal dataset from Cedars-Sinai Medical Center for external validation, consisting of 218 venous-phase contrast-enhanced CT scans (113 non-PDAC and 105 PDAC). Following Liu et al. (2025), we performed 5-fold stratified cross-validation based on lesion size (in voxels), using a bin size of 500 voxels to balance PDAC cases across folds [15].

3.2. Global Radiomics Analysis

We first performed a global radiomics analysis to identify discriminative case-level features that separate PDAC from non-PDAC scans. Using the whole pancreas region, including the duct, as the mask, we extracted 1,486 radiomics features (from the original image and variants including LoG, wavelet, etc.) with PyRadiomics [9]. The features spanned the first-order category, common texture families (e.g., GLCM, GLRLM), and shape. All features were z-score standardized using statistics computed on the training folds only. To eliminate redundant features, we adopted a two-stage pipeline: (i) a univariate filter that retained features with a significant two-sided Pearson correlation between groups under FDR control, followed by (ii) recursive feature elimination with an SVM on the retained set, filtering features until only 10 remained (e.g., GLCM Correlation, GLCM IMC1, NGTDM Strength, Shape Sphericity, Surface-to-Volume Ratio). Selection performance was estimated with 10-fold cross-validation on the training data, and the feature set from the best CV fold was chosen as the selected global radiomics features.
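The two-stage selection above can be sketched as follows. This is a minimal scikit-learn illustration on synthetic data, not the paper's released code: the univariate test (`f_classif` standing in for the two-sided Pearson correlation test), the FDR threshold, and the SVM settings are all assumptions for the sake of the sketch.

```python
# Sketch of the two-stage feature selection: an FDR-controlled univariate
# filter followed by SVM-based recursive feature elimination (RFE).
# Synthetic data stands in for the 1,486 whole-pancreas radiomics features.
import numpy as np
from sklearn.feature_selection import RFE, SelectFdr, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))      # 200 cases x 300 candidate features
y = rng.integers(0, 2, size=200)     # PDAC vs. non-PDAC labels
X[y == 1, :10] += 1.0                # make the first 10 features informative

# z-score standardization (in the paper: training-fold statistics only)
Xz = StandardScaler().fit_transform(X)

# (i) univariate filter under FDR control
fdr = SelectFdr(f_classif, alpha=0.05).fit(Xz, y)
X_filt = fdr.transform(Xz)

# (ii) RFE with a linear SVM down to 10 features
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X_filt, y)

# map the surviving features back to their original column indices
selected = np.where(fdr.get_support())[0][rfe.get_support()]
print(len(selected))                 # 10 surviving feature indices
```

Running the filter before RFE keeps the expensive SVM refits on a small candidate set, which is why the order of the two stages matters at 1,486 features.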
From the selected 10 features, we excluded the shape-based descriptors and generated voxel-wise parametric maps for the remaining 8 features with PyTorchRadiomics, using a sliding-window kernel of 5 voxels, for downstream use.

3.3. Two-Stage nnUNet for PDAC Detection

We adopted a two-stage, coarse-to-fine PDAC detection pipeline. At a high level, the first network localizes the pancreas on a downsampled CT, and the second network focuses on the cropped high-resolution region to detect the lesion and related structures with the help of the selected global radiomics features and their corresponding parametric maps.

Stage 1 (Coarse Localization). We trained an nnUNet on low-resolution volumes (4.5 × 4.5 × 9.0 mm) to obtain a pancreas prediction. At inference, this prediction was used to define a fixed ROI of 100 × 50 × 15 mm on the original CT volume.

Stage 2 (Fine Segmentation/Detection). We then trained a second nnUNet on the cropped full-resolution ROI. To make the detector more anatomically aware, we trained the model to jointly predict multiple structures (pancreas, abdominal aorta, portal vein, pancreatic duct, and common bile duct) rather than PDAC alone. This multi-structure supervision encourages the model to learn the surrounding anatomy and improves robustness and generalization. In this stage, we also injected radiomics in two ways: (i) the voxel-wise radiomics parametric maps were fused by channel-wise concatenation with the CT at the input, and (ii) the case-level global radiomics vector was fused at the bottleneck via multi-head cross-attention, using the global radiomics vector as the query and the latent nnUNet features as keys and values. This combination allows the network to use both local radiomics cues and case-level global descriptors to improve detection performance. Both stages were trained with a combination of Dice loss and normalized cross-entropy.
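The bottleneck fusion in (ii) can be sketched as follows. This is an illustrative PyTorch module under assumed shapes (10 global features, a 320-channel bottleneck, 8 heads); the class name, dimensions, and the residual broadcast are our assumptions, not details taken from the released implementation.

```python
# Sketch of the Stage-2 bottleneck fusion: the embedded global radiomics
# vector is the single query of a multi-head cross-attention over the
# flattened latent voxels (keys/values), and the attended global summary
# is added back onto every voxel of the latent map.
import torch
import torch.nn as nn

class RadiomicsCrossAttention(nn.Module):
    def __init__(self, n_radiomics=10, latent_ch=320, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(n_radiomics, latent_ch)  # radiomics -> query token
        self.attn = nn.MultiheadAttention(latent_ch, n_heads, batch_first=True)

    def forward(self, latent, radiomics):
        # latent: (B, C, D, H, W) bottleneck features; radiomics: (B, n_radiomics)
        B, C, D, H, W = latent.shape
        kv = latent.flatten(2).transpose(1, 2)          # (B, D*H*W, C)
        q = self.embed(radiomics).unsqueeze(1)          # (B, 1, C)
        fused, _ = self.attn(q, kv, kv)                 # (B, 1, C)
        # broadcast the attended global summary back onto every voxel
        return latent + fused.transpose(1, 2).reshape(B, C, 1, 1, 1)

# Toy forward pass exercising only the fusion block
fusion = RadiomicsCrossAttention()
latent = torch.randn(2, 320, 4, 8, 8)
radiomics = torch.randn(2, 10)
out = fusion(latent, radiomics)
print(out.shape)   # torch.Size([2, 320, 4, 8, 8])
```

Using the radiomics vector as the query keeps the attention cost linear in the number of latent voxels with a single output token, and the residual addition leaves the nnUNet decoder interface unchanged.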
For deployment realism, both the global radiomics and the parametric maps used by Stage 2 were computed from the Stage 1 pancreas prediction rather than ground-truth masks, during both training and inference. To make voxel-wise radiomics practical for all cases, we generated the parametric maps with PyTorchRadiomics, which provides GPU-accelerated sliding-window extraction; the runtime comparison with vanilla PyRadiomics is reported in §4.3 [11, 9].

4. EXPERIMENTAL RESULTS

We conducted three sets of experiments. First, we evaluated the global radiomics module from §3.2 to confirm that the selected case-level features are individually discriminative for PDAC vs. non-PDAC (§4.1). Second, we assessed the full radiomics-aware two-stage nnUNet on both the PANORAMA dataset and our internal Cedars-Sinai cohort, and we performed ablations to isolate the contributions of global features and voxel-wise parametric maps (§4.2). Finally, we measured the runtime of the proposed PyTorchRadiomics implementation to show that voxel-wise radiomics extraction can be made practical for large-scale studies (§4.3).

4.1. Global Radiomics-Based PDAC Classification

Using the global radiomics features selected as described in §3.2, we trained an SVM to classify PDAC vs. non-PDAC on the PANORAMA training set. Evaluated with 10-fold cross-validation, the radiomics-only model achieved AUROC = 0.824 ± 0.035 and AP = 0.667 ± 0.046, indicating that the selected feature set carries meaningful discriminative signal. We therefore fixed this feature set and used it for the Stage-2 nnUNet.

Table 1. 5-fold cross-validated PDAC detection on the PANORAMA dataset

Methods              AUROC   AP      (AUROC+AP)/2
nnUNet (baseline)    0.959   0.810   0.885
+ Global Features    0.957   0.813   0.885
+ Parametric Maps    0.960   0.826   0.893
+ Both               0.958   0.836   0.897

Table 2. PDAC detection on the external test set

Methods              AUROC   AP       (AUROC+AP)/2
nnUNet (baseline)    0.954   0.662    0.808
+ Global Features    0.948   0.715    0.832
+ Parametric Maps    0.947   0.743    0.845
+ Both               0.951   0.777*   0.864*

* p-value < 0.01 compared to the baseline nnUNet.

4.2. PDAC Detection

We evaluated the proposed workflow on both the PANORAMA training dataset (5-fold cross-validation) and our external cohort. We used picai-eval to ensure consistent extraction of lesion-level Average Precision (AP) and case-level AUROC metrics across datasets [20]. To quantify the contribution of each radiomics component, we also performed ablation experiments in which we added only the global radiomics vector or only the voxel-wise parametric maps to the second-stage PDAC detector during training. Finally, we report our PANORAMA test-set result on the challenge leaderboard to compare with other participating teams.

As shown in Table 1 (PANORAMA) and Table 2 (External), combining global radiomics and parametric maps yielded the best performance on both the cross-validated training data and the external test set.

Fig. 2. ROC and PR curves for PDAC detection on the in-house external dataset: (a) case-wise ROC curve; (b) lesion-wise PR curve. Our model with combined global and local radiomics features achieves better performance (AP 0.662 → 0.777, p < 0.01) than the baseline nnUNet model.

On the in-house cohort, the combined model also achieved a statistically significant improvement over the baseline (p = 0.002). In addition, in both Table 1 and Table 2, the model with parametric maps only outperformed the model with global radiomics only, likely because the global features were noisier due to imperfect segmentation from the Stage 1 nnUNet. For the PANORAMA test-set evaluation, our preliminary model was trained with only three radiomics parametric maps and without global radiomics fusion, yet we still achieved 2nd place in the overall ranking.

4.3.
Runtime of Voxel-Wise Radiomics

To assess the practicality of voxel-wise radiomics at scale, we measured the extraction time for the 8 selected parametric maps on 50 cases using (i) vanilla PyRadiomics and (ii) our GPU-based PyTorchRadiomics implementation [9, 11]. PyTorchRadiomics reduced the mean extraction time from 53.42 s to 16.28 s per feature, a substantial speed-up (p ≪ 0.01), making per-case map generation feasible for large cohorts.

5. CONCLUSION

In this work, we proposed a unified framework that first performs global radiomics analysis to identify discriminative features, then trains a two-stage nnUNet that integrates both the selected global radiomics features and their respective parametric maps. We applied our framework to the pancreatic ductal adenocarcinoma (PDAC) detection task, achieved better performance than the baseline nnUNet, and ranked second in the PANORAMA Grand Challenge [21]. Our work shows that conventional radiomics and parametric maps together can serve as complementary priors that improve robustness in downstream deep learning analysis, and, with GPU-accelerated radiomics parametric-map extraction, our proposed workflow becomes more scalable and efficient for real-world applications.

6. ACKNOWLEDGMENTS

This work was supported by the National Institutes of Health (NIH) under grant R01 CA260955.

7. COMPLIANCE WITH ETHICAL STANDARDS

The internal dataset of this study was retrospectively collected and de-identified at Cedars-Sinai Medical Center under an Institutional Review Board (IRB)-approved protocol and analyzed under a waiver of informed consent. The public dataset was obtained from the PANORAMA Challenge, which provides fully de-identified imaging data under an open-access license [21].

8. REFERENCES

[1] Hugo JWL Aerts et al., "Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach," Nature Communications, vol. 5, no. 1, pp. 4006, 2014.
[2] Philippe Lambin et al., "Radiomics: the bridge between medical imaging and personalized medicine," Nature Reviews Clinical Oncology, vol. 14, no. 12, pp. 749–762, 2017.

[3] Robert J Gillies, Paul E Kinahan, and Hedvig Hricak, "Radiomics: images are more than pictures, they are data," Radiology, vol. 278, no. 2, pp. 563–577, 2016.

[4] Chintan Parmar, Patrick Grossmann, Johan Bussink, Philippe Lambin, and Hugo JWL Aerts, "Machine learning methods for quantitative radiomic biomarkers," Scientific Reports, vol. 5, no. 1, pp. 13087, 2015.

[5] Oz Kilim, Alex Olar, Tamás Joó, Tamás Palicz, Péter Pollner, and István Csabai, "Physical imaging parameter variation drives domain shift," Scientific Reports, vol. 12, no. 1, pp. 21302, 2022.

[6] Zhiheng Li et al., "Comparison of clinical, radiomics, deep learning, and fusion models for predicting early recurrence in locally advanced rectal cancer based on multiparametric MRI: a multicenter study," European Journal of Radiology, p. 112173, 2025.

[7] Damon Kim, Laura J Jensen, Thomas Elgeti, Ingo G Steffen, Bernd Hamm, and Sebastian N Nagel, "Radiomics for everyone: a new tool simplifies creating parametric maps for the visualization and quantification of radiomics features," Tomography, vol. 7, no. 3, pp. 477–487, 2021.

[8] Laura Jacqueline Jensen et al., "The role of parametric feature maps to correct different volume of interest sizes: an in vivo liver MRI study," European Radiology Experimental, vol. 7, no. 1, pp. 48, 2023.

[9] Joost JM van Griethuysen et al., "Computational radiomics system to decode the radiographic phenotype," Cancer Research, vol. 77, no. 21, pp. e104–e107, 2017.

[10] Yang Chen et al., "A radiomics-incorporated deep ensemble learning model for multi-parametric MRI-based glioma segmentation," Physics in Medicine & Biology, vol. 68, no. 18, pp. 185025, 2023.
[11] Yinhao Liang et al., "Localized intra- and inter-tumoral heterogeneity for predicting treatment response to neoadjuvant chemotherapy in breast cancer," IEEE Journal of Biomedical and Health Informatics, 2025.

[12] Yihao Li et al., "A review of deep learning-based information fusion techniques for multimodal medical image classification," Computers in Biology and Medicine, vol. 177, pp. 108635, 2024.

[13] Guoxiu Lu et al., "Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours," Frontiers in Medicine, vol. 11, pp. 1402967, 2024.

[14] Weimin Cai, Xiao Wu, Kun Guo, Yongxian Chen, Yubo Shi, and Xinran Lin, "Deep-learning, radiomics and clinic based fusion models for predicting response to infliximab in Crohn's disease patients: a multicentre, retrospective study," Journal of Inflammation Research, pp. 7639–7651, 2024.

[15] Han Liu, Riqiang Gao, Eileen Krieg, and Sasa Grbic, "Pandx: AI-assisted early detection of pancreatic ductal adenocarcinoma on contrast-enhanced CT," in International Workshop on Applications of Medical AI. Springer, 2025, pp. 63–71.

[16] Liyuan Lin et al., "Voxel-based texture similarity networks reveal individual variability and correlate with biological ontologies," NeuroImage, vol. 297, pp. 120688, 2024, doi: 10.1016/j.neuroimage.2024.120688.

[17] Kai Cao et al., "Large-scale pancreatic cancer detection via non-contrast CT and deep learning," Nature Medicine, vol. 29, no. 12, pp. 3033–3043, 2023.

[18] Po-Ting Chen et al., "Pancreatic cancer detection on CT scans with deep learning: a nationwide population-based study," Radiology, vol. 306, no. 1, pp. 172–182, 2023.

[19] Ehwa Yang et al., "nnU-Net-based pancreas segmentation and volume measurement on CT imaging in patients with pancreatic cancer," Academic Radiology, vol. 31, no. 7, pp. 2784–2794, 2024.
[20] Anindo Saha et al., "Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): an international, paired, non-inferiority, confirmatory study," The Lancet Oncology, vol. 25, no. 7, pp. 879–887, 2024.

[21] Natalia Alves et al., "Artificial intelligence and radiologists in pancreatic cancer detection using standard of care CT scans (PANORAMA): an international, paired, non-inferiority, confirmatory, observational study," The Lancet Oncology, vol. 27, no. 1, pp. 116–124, Jan. 2026, Epub 2025-11-20, doi: 10.1016/S1470-2045(25)00567-4.