A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction based on Content/Style Modeling

A Plug-and-Play Method for Guided Multi-contrast MRI Reconstruction based on Content/Style Modeling
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Since multiple MRI contrasts of the same anatomy contain redundant information, one contrast can guide the reconstruction of an undersampled subsequent contrast. To this end, several end-to-end learning-based guided reconstruction methods have been proposed. However, a key challenge is the requirement of large paired training datasets comprising raw data and aligned reference images. We propose a modular two-stage approach that does not require any k-space training data, relying solely on image-domain datasets, a large part of which can be unpaired. Additionally, our approach provides an explanatory framework for the multi-contrast problem based on the shared and non-shared generative factors underlying two given contrasts. A content/style model of two-contrast image data is learned from a largely unpaired image-domain dataset and is subsequently applied as a plug-and-play operator in iterative reconstruction. The disentanglement of content and style allows explicit representation of contrast-independent and contrast-specific factors. Consequently, incorporating prior information into the reconstruction reduces to a simple replacement of the aliased content of the reconstruction iterate with high-quality content derived from the reference scan. Combining this component with a data consistency step and introducing a general corrective process for the content yields an iterative scheme. We name this novel approach PnP-CoSMo. Various aspects like interpretability and convergence are explored via simulations. Furthermore, its practicality is demonstrated on the public NYU fastMRI DICOM dataset, showing improved generalizability compared to end-to-end methods, and on two in-house multi-coil raw datasets, offering up to 32.6% more acceleration over learning-based non-guided reconstruction for a given SSIM.


💡 Research Summary

The paper introduces a novel two‑stage, plug‑and‑play (PnP) framework for guided reconstruction of multi‑contrast magnetic resonance imaging (MRI) that eliminates the need for paired k‑space training data. In the first stage, a content/style (C/S) generative model is learned from largely unpaired image‑domain datasets using a MUNIT‑style architecture. The model decomposes each contrast image into a shared “content” latent space that captures contrast‑independent anatomical structure, and a contrast‑specific “style” latent space that encodes the signal intensity and phase characteristics unique to each contrast. Training consists of a large unsupervised pre‑training phase on unpaired images followed by a lightweight fine‑tuning phase on a small set of aligned image pairs, still without any raw k‑space data.

In the second stage, the trained C/S model is embedded as a plug‑and‑play operator within an iterative reconstruction algorithm based on the Iterative Shrinkage‑Thresholding Algorithm (ISTA). At each iteration, the current image estimate is split into content and style components; the content component is replaced by high‑quality content extracted from a fully sampled reference scan, while the style component is updated from the central region of the undersampled k‑space (where contrast information concentrates). A data‑consistency (DC) step then enforces fidelity to the measured k‑space, and a “refine” block corrects residual mismatches between the reference‑derived content and the actual measurements. This loop constitutes the proposed PnP‑CoSMo (Plug‑and‑Play based on Content/Style Modeling) algorithm.

The authors provide theoretical insight into convergence, showing that the PnP operator satisfies non‑expansive conditions that guarantee fixed‑point convergence under standard ISTA assumptions. Empirically, they demonstrate that style estimation remains stable even at high acceleration factors because the style latent is predominantly determined by low‑frequency k‑space data.

Extensive experiments are conducted on the public NYU fastMRI DICOM dataset and on two in‑house multi‑coil raw datasets. Compared with state‑of‑the‑art end‑to‑end guided reconstruction methods (e.g., UNet‑based unrolled networks, MC‑Varnet), PnP‑CoSMo achieves higher SSIM and PSNR across a range of acceleration factors (2×–12×) and sampling patterns, and exhibits markedly better generalization when training and test domains differ. On the in‑house raw data, the method delivers up to 32.6 % higher acceleration for the same SSIM relative to learning‑based non‑guided reconstructions. A preliminary radiological reader study further indicates improved lesion detectability and diagnostic confidence with PnP‑CoSMo reconstructions.

Key contributions are: (1) Demonstrating that disentangled content/style representations can be learned from unpaired image data, with minimal paired fine‑tuning; (2) Defining a content‑consistency operator that directly injects high‑quality anatomical information from a reference scan into the reconstruction; (3) Integrating this operator into a modular PnP framework that retains interpretability and provable convergence; (4) Providing comprehensive empirical evidence of superior performance, robustness, and clinical relevance. The work opens avenues for extending content/style‑based priors to multi‑modal imaging (e.g., MRI‑CT, MRI‑PET) and for incorporating more sophisticated physics‑aware style models to push acceleration limits further.


Comments & Academic Discussion

Loading comments...

Leave a Comment