Joint Reconstruction of MR-PET Data

While Positron Emission Tomography (PET) has immense functional and quantitative capabilities, it does not provide high-resolution images. This limitation is commonly addressed by combining PET with other imaging modalities such as CT or MRI. Integrated MR-PET scanners allow simultaneous acquisition of PET and MR data; however, the images are reconstructed separately and merely superimposed on each other. True joint reconstruction of MR and PET data may improve the resolution and image quality of both data sets. Although MR and PET each provide unique and independent information, they share the same underlying anatomical features. Rather than treating the PET and MR data separately, we can therefore incorporate both data sets into a single simultaneous reconstruction algorithm. A joint-sparsity-based reconstruction method for multiple sensors allows these anatomical similarities to improve both unique and independent data sets [2].

The proposed method reconstructs the four-dimensional data by treating the two imaging modalities as an additional dimension of a single data set, solving the following optimization problem:

$$(\hat{x}_{MR}, \hat{x}_{PET}) = \operatorname*{arg\,min}_{x_{MR},\, x_{PET}} \; \left\| E\, x_{MR} - k \right\|_2^2 + \left\| x_{PET} - \mathrm{EM}\!\left(x_{PET}^{\,n}, A, N, f\right) \right\|_2^2 + \mathrm{JS}\!\left(x_{MR}, x_{PET}\right) + \lambda_{MR} \left\| \Psi(x_{MR}) \right\|_1$$

In this equation, $x_{MR}$ and $x_{PET}$ represent the 3D image data sets, while $k$ and $f$ are the measured data corresponding to the MR k-space and the PET sinogram. The operator $E$, which includes multiplication by the coil sensitivities, maps MR image data to k-space. $\Psi$ transforms the data to a sparse domain, $\lambda_i$ and $\lambda_{MR}$ are regularization parameters, and $i$ indexes the voxels. The function $\mathrm{EM}$ regenerates the PET image from the sinogram; its arguments are $A$, the PET projection operator, $n$, the EM iteration number, and $N$, which corrects for geometrical effects [1].
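
As an illustration of the EM step, the following is a minimal sketch of a standard MLEM update, assuming a dense matrix A as the projection operator and taking N to be the usual sensitivity image A^T·1 as the geometrical correction; these choices are illustrative assumptions, not the exact implementation of [1].

```python
import numpy as np

def em_update(x_n, A, f, eps=1e-12):
    """One MLEM step: x^(n+1) = x^n / N * A^T(f / (A x^n))."""
    N = A.T @ np.ones(A.shape[0])       # sensitivity image, standing in for
                                        # the geometrical correction term
    proj = A @ x_n                      # forward projection into sinogram space
    ratio = f / np.maximum(proj, eps)   # measured vs. estimated sinogram
    return x_n / np.maximum(N, eps) * (A.T @ ratio)

# Toy usage: recover a 64-voxel image from a 128-bin Poisson sinogram.
rng = np.random.default_rng(0)
A = rng.random((128, 64))               # stand-in PET projection operator
f = rng.poisson(A @ rng.random(64)).astype(float)
x = np.ones(64)
for n in range(20):                     # n is the EM iteration number
    x = em_update(x, A, f)
```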

The joint sparsity term is responsible for the exchange of anatomical information between the PET and MR data sets and is defined as:

$$\mathrm{JS}(x_{MR}, x_{PET}) = \sum_i \lambda_i \sqrt{\left| \Psi(x_{MR})_i \right|^2 + \left| \Psi(x_{PET})_i \right|^2}$$
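
In code, this term is a voxel-wise l2 norm that couples the two transformed data sets. Below is a minimal sketch, assuming finite differences as a stand-in for the sparsifying transform Ψ; the actual transform used may differ.

```python
import numpy as np

def Psi(x):
    # Finite differences as a stand-in sparsifying transform (assumption).
    return np.diff(x, append=x[-1])

def joint_sparsity(x_mr, x_pet, lam):
    """JS = sum_i lam_i * sqrt(|Psi(x_MR)_i|^2 + |Psi(x_PET)_i|^2)."""
    return np.sum(lam * np.sqrt(np.abs(Psi(x_mr)) ** 2
                                + np.abs(Psi(x_pet)) ** 2))
```

Because the two modalities are coupled inside the square root, an edge that appears in both data sets is penalized less than two separate l1 penalties would penalize it, which favors shared anatomical structure.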

In addition to the joint sparsity term, an individual sparsity term is included for the MR data set in order to remove undersampling artifacts.

We propose a spatially dependent regularization given by the difference of the signal intensities in each voxel $i$: $d_i = \big|\, |\Psi(x_{MR})_i| - |\Psi(x_{PET})_i| \,\big|$. This allows us to avoid incorporating non-shared information from one modality into the other. The regularization parameters are then scaled according to $\lambda_i = \lambda / d_i$: where the signals differ, $d_i$ is large and $\lambda_i$ is scaled down, which ensures that joint information is shared between the two modalities only in areas where the imaged signals match.
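
Below is a minimal sketch of this spatially dependent weighting, reusing the stand-in Ψ from above; the small constant eps is our own addition to avoid division by zero where the two sparse representations match exactly.

```python
import numpy as np

def Psi(x):
    return np.diff(x, append=x[-1])   # same stand-in transform as above

def spatial_lambdas(x_mr, x_pet, lam, eps=1e-8):
    """lambda_i = lam / d_i, with d_i = | |Psi(x_MR)_i| - |Psi(x_PET)_i| |."""
    d = np.abs(np.abs(Psi(x_mr)) - np.abs(Psi(x_pet)))
    return lam / (d + eps)            # large where signals match, small otherwise
```

The resulting per-voxel weights can be passed as lam to the joint_sparsity sketch above, so that the joint penalty acts strongly only on shared structure.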

 

[Figure: Example of conventional vs. joint reconstruction]

 
