We propose a deep learning-based method for single-heartbeat 4D cardiac CT reconstruction, in which a single cardiac cycle is split into multiple phases for reconstruction. First, each phase is pre-reconstructed using the projection data from itself and its neighboring phases. The pre-reconstructions are fed into a supervised registration network to generate the deformation fields between phases; the network is trained so that the deformed images match the ground-truth images of the corresponding phases. The deformation fields are then used in the FBP-and-warp method for motion-compensated reconstruction, and a subsequent network removes residual artifacts. The proposed method was validated on simulation data from 40 4D cardiac CT scans and demonstrated improved RMSE and SSIM and less blurring compared with FBP and PICCS.
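The warp-and-average step can be illustrated in a few lines. The sketch below assumes per-phase 2D FBP reconstructions and network-predicted displacement fields mapping the reference grid into each phase; the registration and artifact-removal networks themselves are not reproduced, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to_reference(recon, dvf):
    """Warp each phase image into the reference frame and average.

    recon: (P, H, W) per-phase FBP reconstructions.
    dvf:   (P, 2, H, W) displacement (dy, dx) from the reference grid to phase p.
    """
    P, H, W = recon.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    acc = np.zeros((H, W))
    for p in range(P):
        coords = np.stack([yy + dvf[p, 0], xx + dvf[p, 1]])  # sample points in phase p
        acc += map_coordinates(recon[p], coords, order=1, mode="nearest")
    return acc / P  # motion-compensated average; residual artifacts go to a CNN

# toy usage: identity deformation reproduces the mean of the phases
recon = np.random.rand(4, 64, 64)
dvf = np.zeros((4, 2, 64, 64))
out = warp_to_reference(recon, dvf)
```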
The simplified reference tissue model (SRTM) can provide a robust estimation of binding potential (BP) without a measured arterial blood input function. Although a voxel-wise estimation of BP (a so-called parametric image) is much more valuable than a region-of-interest (ROI) based estimate, it is challenging to compute due to the limited signal-to-noise ratio (SNR) of dynamic PET data. To achieve reliable parametric imaging, the temporal image frames are commonly low-pass filtered prior to kinetic parameter estimation, which significantly sacrifices resolution. In this project, we propose a novel method, the residual simplified reference tissue model (R-SRTM), to compute high-resolution parametric images. In a phantom simulation, we demonstrate that the proposed method outperforms the conventional SRTM method.
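For context, a minimal sketch of the conventional voxel-wise baseline follows: a linearized SRTM fit using the integrated model C_T(t) = R1*C_R(t) + k2*∫C_R - k2a*∫C_T, with BP = k2/k2a - 1. The residual (R-SRTM) formulation itself is not reproduced here, and the kinetic values below are made up.

```python
import numpy as np

def cumtrapz0(c, t):
    """Cumulative trapezoidal integral of a TAC, starting at zero."""
    return np.concatenate([[0.0], np.cumsum(np.diff(t) * 0.5 * (c[1:] + c[:-1]))])

def srtm_bp(ct, cr, t):
    """Least-squares SRTM fit for one voxel TAC `ct` given reference TAC `cr`."""
    A = np.stack([cr, cumtrapz0(cr, t), -cumtrapz0(ct, t)], axis=1)
    r1, k2, k2a = np.linalg.lstsq(A, ct, rcond=None)[0]
    return k2 / k2a - 1.0                       # binding potential

# toy check: simulate the SRTM ODE with known parameters and refit
t = np.linspace(0.5, 60.0, 120)
cr = t * np.exp(-0.1 * t)                       # made-up reference TAC
r1, k2, bp = 1.0, 0.3, 1.5
k2a = k2 / (1.0 + bp)
ct = np.zeros_like(t)
for i in range(1, len(t)):                      # Euler integration of the ODE
    dcr = (cr[i] - cr[i - 1]) / (t[i] - t[i - 1])
    ct[i] = ct[i - 1] + (t[i] - t[i - 1]) * (r1 * dcr + k2 * cr[i - 1] - k2a * ct[i - 1])
print(srtm_bp(ct, cr, t))                       # ~1.5, up to discretization error
```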
Direct reconstruction methods have been developed to estimate parametric images directly from the measured sinogram by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, especially in low-dose scenarios, the SNR and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising and reconstruction when a large number of high-quality training labels are available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time; for dynamic PET imaging, however, this is not feasible because the scanning time is already long. In this work, we present a novel unsupervised deep learning method for direct Patlak reconstruction from low-dose dynamic PET. The training label is the measured sinogram itself, and the only other requirement is the patient's own anatomical prior image, which is readily available from PET/CT or PET/MR scans. Experimental evaluation on a low-dose dynamic dataset shows that the proposed method outperforms Gaussian post-smoothing and anatomically guided direct reconstruction using the kernel method.
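The unsupervised idea can be illustrated with a short PyTorch loop: a network maps the anatomical prior to Patlak slope (Ki) and intercept (b) images, dynamic frames are composed by the Patlak model x_m = Ki*∫Cp + b*Cp(t_m), forward projected, and matched to the measured sinograms with a Poisson likelihood. Everything below (the tiny CNN, the random projector, the input function) is a hypothetical stand-in, not the paper's implementation.

```python
import torch

def patlak_frames(ki, b, cp_int, cp):
    """Compose dynamic frames from Patlak slope/intercept images.
    ki, b: (H, W); cp_int, cp: (M,) plasma integral and value per frame."""
    return ki[None] * cp_int[:, None, None] + b[None] * cp[:, None, None]

def poisson_nll(y, ybar, eps=1e-8):
    # negative Poisson log-likelihood of sinogram y given mean ybar
    return (ybar - y * torch.log(ybar + eps)).sum()

H = W = 16; M = 6
proj = torch.rand(64, H * W) * 0.1              # toy PET system matrix
net = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Conv2d(8, 2, 3, padding=1), torch.nn.Softplus())
prior = torch.rand(1, 1, H, W)                  # patient's CT/MR prior image
cp = torch.exp(-0.05 * torch.arange(M, dtype=torch.float))  # made-up input fn
cp_int = torch.cumsum(cp, 0)
with torch.no_grad():                           # fake "measured" sinograms
    y = torch.poisson(proj @ patlak_frames(torch.rand(H, W), torch.rand(H, W),
                                           cp_int, cp).reshape(M, -1).T)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(10):                             # unsupervised training loop
    ki, b = net(prior)[0]                       # (2, H, W) -> Ki, b images
    ybar = proj @ patlak_frames(ki, b, cp_int, cp).reshape(M, -1).T
    loss = poisson_nll(y, ybar)
    opt.zero_grad(); loss.backward(); opt.step()
```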
Positron emission tomography (PET) images still suffer from low signal-to-noise ratio (SNR) due to various physical degradation factors. Recently, deep neural networks (DNNs) have been successfully applied to medical image denoising tasks when a large number of training pairs are available. The deep image prior framework [1] has shown that individual information can be enough to train a denoising network, with the noisy image itself serving as the training label. In this work, we propose to improve PET image quality by jointly exploiting population and individual information with a DNN. Population information is utilized by pre-training the network on a group of patients; individual information is introduced during the testing phase by fine-tuning the population-trained network. Unlike traditional DNN denoising, fine-tuning at test time is possible in this framework because the noisy PET image itself is treated as the training label. Quantification results on clinical PET/MR datasets from thirty patients demonstrate that the proposed framework outperforms Gaussian, non-local means, and deep image prior denoising methods.
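A minimal PyTorch sketch of the two-stage idea: (1) pre-train a denoising CNN on population pairs, then (2) fine-tune on the test patient with the noisy PET image itself as the label. The abstract does not specify the network input during fine-tuning; feeding the noisy image itself (as below) is one plausible setup and relies on early stopping, and the tiny CNN and random tensors are stand-ins.

```python
import torch

net = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Conv2d(16, 1, 3, padding=1))
mse = torch.nn.MSELoss()

# stage 1: population pre-training on paired (low-count, high-count) images
pop_noisy = torch.rand(8, 1, 32, 32)
pop_clean = torch.rand(8, 1, 32, 32)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad(); mse(net(pop_noisy), pop_clean).backward(); opt.step()

# stage 2: individual fine-tuning; the patient's own noisy image is the label.
# With input == label the step count must stay small (early stopping), or the
# network drifts toward the identity map.
test_noisy = torch.rand(1, 1, 32, 32)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for _ in range(20):
    opt.zero_grad(); mse(net(test_noisy), test_noisy).backward(); opt.step()
denoised = net(test_noisy).detach()
```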
Dual-energy computed tomography (DECT) typically uses 80 kVp and 140 kVp for patient scans. Due to high attenuation, the 80 kVp image may become too noisy under reduced photon flux, such as in low-dose protocols or for large patients, leading to unacceptable decomposed image quality. In this paper, we propose a deep-neural-network-based reconstruction approach to compensate for the increased noise in low-dose DECT scans. The learned primal-dual network structure is used, where both the input and output of the network consist of low- and high-energy data. The network was trained on 30 patients who underwent normal-dose chest DECT scans, with simulated noise inserted into the raw data, and was further evaluated on another 10 patients undergoing half-dose chest DECT scans. Image quality close to the normal-dose scan was achieved, and no significant bias was found in Hounsfield unit (HU) values or iodine concentration.
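A minimal sketch of one learned primal-dual iteration with the two energy channels stacked together, loosely following the Adler and Öktem scheme the paper builds on; the matrix A is a toy stand-in for the CT projector, and layer sizes are arbitrary.

```python
import torch

H = W = 16; n_det = 64
A = torch.rand(n_det, H * W) * 0.05                       # toy CT projector

def fp(x):   # forward project both channels: (B, 2, H, W) -> (B, 2, n_det)
    return torch.einsum('dp,bcp->bcd', A, x.flatten(2))

def bp(y):   # back project both channels: (B, 2, n_det) -> (B, 2, H, W)
    return torch.einsum('dp,bcd->bcp', A, y).view(-1, 2, H, W)

class LPDIter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.dual = torch.nn.Sequential(torch.nn.Conv1d(6, 32, 3, padding=1),
                                        torch.nn.PReLU(),
                                        torch.nn.Conv1d(32, 2, 3, padding=1))
        self.primal = torch.nn.Sequential(torch.nn.Conv2d(4, 32, 3, padding=1),
                                          torch.nn.PReLU(),
                                          torch.nn.Conv2d(32, 2, 3, padding=1))
    def forward(self, x, h, g):
        h = h + self.dual(torch.cat([h, fp(x), g], 1))    # sinogram-domain update
        x = x + self.primal(torch.cat([x, bp(h)], 1))     # image-domain update
        return x, h

iters = torch.nn.ModuleList(LPDIter() for _ in range(5))  # unrolled iterations
g = torch.rand(1, 2, n_det)                # low-/high-energy sinogram pair
x, h = torch.zeros(1, 2, H, W), torch.zeros(1, 2, n_det)
for it in iters:
    x, h = it(x, h, g)                     # x: reconstructed 80/140 kVp images
```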
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely applied to medical image denoising. In this work, based on the MAPEM algorithm, we propose a novel unrolled neural network framework for 3D PET image reconstruction, in which a convolutional neural network is combined with the MAPEM update steps so that data consistency can be enforced. Both simulation and clinical datasets were used to evaluate the effectiveness of the proposed method. Quantification results show that the proposed MAPEM-Net outperforms neural network denoising and Gaussian denoising methods.
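One MAPEM block of this kind can be sketched as an EM step followed by the closed-form De Pierro update for a quadratic penalty pulling the image toward a CNN output z; in the toy version below the CNN is replaced by a Gaussian smoother and the system matrix is random, so this only illustrates the update algebra, not the paper's network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mapem_block(x, A, y, beta, denoise):
    s = A.T @ np.ones(A.shape[0])                               # sensitivity A^T 1
    x_em = x / s * (A.T @ (y / np.clip(A @ x, 1e-8, None)))     # EM update
    z = denoise(x)                                              # CNN stand-in
    b = s - beta * z
    # solve beta*x^2 + (s - beta*z)*x - s*x_em = 0 for x >= 0 (De Pierro)
    return (np.sqrt(b**2 + 4 * beta * s * x_em) - b) / (2 * beta)

# toy usage with a random system matrix (hypothetical)
rng = np.random.default_rng(0)
A = rng.random((200, 64)) * 0.1
y = rng.poisson(A @ (rng.random(64) + 0.5)).astype(float)
x = np.ones(64)
for _ in range(20):
    x = mapem_block(x, A, y, beta=0.1,
                    denoise=lambda v: gaussian_filter(v.reshape(8, 8), 1).ravel())
```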
Accurate delineation of gross tumor volume (GTV) is essential for head and neck cancer radiotherapy. Morphological complexity and image artifacts often cause inaccurate manual delineation and interobserver variability, and manual delineation is also time-consuming. Motivated by the recent success of deep learning in natural and medical image processing, we propose an automatic GTV segmentation approach based on a 3D U-Net. One innovative feature of the proposed method is that multi-modality PET/CT images are integrated in the segmentation network. This study includes 175 patients with GTVs manually drawn by physicians. In 5-fold cross validation, the proposed method achieves a Dice coefficient of 0.82±0.07, better than the model using PET images only (0.79±0.09). In conclusion, automatic GTV segmentation using a deep learning network and multi-modality images was successfully applied to head and neck cancer patients, offering unique benefits for radiation therapy planning.
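The multi-modality fusion and the Dice objective are simple to state in code: PET and CT volumes are concatenated as two input channels of a 3D network trained against the manual mask. The tiny conv stack below is a stand-in for the paper's 3D U-Net, and all tensors are toy data.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

net = torch.nn.Sequential(torch.nn.Conv3d(2, 16, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Conv3d(16, 1, 3, padding=1), torch.nn.Sigmoid())
pet = torch.rand(1, 1, 32, 32, 32)                   # SUV volume (toy)
ct = torch.rand(1, 1, 32, 32, 32)                    # normalized HU volume (toy)
gtv = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()   # manual GTV mask (toy)

x = torch.cat([pet, ct], dim=1)      # channel-wise fusion of the two modalities
loss = dice_loss(net(x), gtv)
loss.backward()
```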
Deep neural networks have attracted growing interest in medical imaging due to their success in computer vision tasks. One barrier to their application is the need for large numbers of training pairs, which are not always available in clinical practice. Recently, the deep image prior framework showed that a convolutional neural network (CNN) can learn intrinsic structure information from the corrupted image alone. In this work, an iterative parametric reconstruction framework is proposed that uses a deep neural network as the constraint. The network needs no prior training pairs, only the patient's own CT image. Training is based on the Logan plot derived from multi-bed-position dynamic positron emission tomography (PET) images acquired with the 68Ga-PRGD2 tracer. We formulate the estimation of the Logan plot slope as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm. Quantification results on a real patient dataset show that the proposed parametric reconstruction method outperforms Gaussian denoising and non-local means denoising.
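For reference, the quantity being estimated is the slope of the Logan plot: for late frames t > t*, ∫₀ᵗC_T/C_T(t) = DV·∫₀ᵗC_p/C_T(t) + b. A minimal voxel-wise sketch follows; the ADMM/network-constrained part of the paper is not reproduced, and the toy time-activity curves are arbitrary.

```python
import numpy as np

def cumtrapz0(c, t):
    return np.concatenate([[0.0], np.cumsum(np.diff(t) * 0.5 * (c[1:] + c[:-1]))])

def logan_slope(ct, cp, t, t_star_idx):
    """Logan graphical analysis slope (distribution volume) for one voxel TAC."""
    yv = cumtrapz0(ct, t)[t_star_idx:] / ct[t_star_idx:]
    xv = cumtrapz0(cp, t)[t_star_idx:] / ct[t_star_idx:]
    slope, _ = np.polyfit(xv, yv, 1)
    return slope

# illustrative call with made-up TACs
t = np.linspace(1, 60, 24)
cp = np.exp(-0.1 * t)                          # made-up input function
ct = 2.0 * cp + 0.5 * np.exp(-0.05 * t)        # made-up tissue TAC
print(logan_slope(ct, cp, t, t_star_idx=12))
```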
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely applied to medical image denoising. In this work, based on the expectation maximization (EM) algorithm, we propose an unrolled neural network framework for PET image reconstruction, named EMnet. An innovative feature of the proposed framework is that the deep neural network is combined with the EM update steps in a single computational graph, so that data consistency acts as a constraint during network training. Both simulation data and real data are used to evaluate the proposed method. Quantification results show that the proposed EMnet outperforms neural network denoising and Gaussian denoising methods.
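The "single graph" idea can be sketched by writing the MLEM data-consistency step with differentiable tensor ops and alternating it with a small CNN, so a loss on the final image backpropagates through every unrolled step. The projector A and the residual CNN below are toy stand-ins.

```python
import torch

H = W = 16
A = torch.rand(128, H * W) * 0.05            # toy projector
sens = A.sum(0)                              # sensitivity image A^T 1

def em_step(x, y, eps=1e-8):
    ratio = y / (x.flatten() @ A.T + eps)    # y / (A x)
    return x * (ratio @ A).view(H, W) / sens.view(H, W)

cnn = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Conv2d(16, 1, 3, padding=1))
y = torch.poisson(A @ (torch.rand(H * W) + 0.5))   # toy measured sinogram
x = torch.ones(H, W)
for _ in range(5):                                 # unrolled iterations
    x = em_step(x, y)
    x = torch.relu(x + cnn(x[None, None])[0, 0])   # residual CNN, keeps x >= 0
# training would backprop a loss on x through all unrolled steps
```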
In computed tomography (CT) image reconstruction, image prior design and parameter tuning are important for improving reconstruction quality from noisy or undersampled projections. In recent years, the development of deep learning in medical image reconstruction has made it possible to find suitable image priors and hyperparameters automatically. By unrolling a reconstruction algorithm into a finite number of iterations and parameterizing the prior functions and hyperparameters with deep artificial neural networks, all parameters can be learned end-to-end to reduce the difference between reconstructed images and the training ground truth. Despite its superior performance, the unrolling scheme suffers from huge memory consumption and computational cost during training, making it hard to apply to three-dimensional applications in CT, such as cone-beam CT, helical CT, and tomosynthesis. In this paper, we propose a training-time computationally efficient cascaded neural network for CT image reconstruction, which consists of several sequentially trained cascades of networks for image quality improvement, connected by data fidelity correction steps. Each cascade is trained purely in the image domain, so that image patches can be used for training, which significantly accelerates the training process and reduces memory consumption. The proposed method is fully scalable to 3D data with current hardware. Simulations of sparse-view sampling demonstrated that the proposed method achieves image quality similar to state-of-the-art unrolled networks.
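One cascade can be sketched as an image-domain CNN (trainable on patches in isolation) followed by a data-fidelity correction; the gradient step x ← x − α·Aᵀ(Ax − y) below is one plausible choice of correction, not necessarily the paper's, and A is a toy projector.

```python
import torch

H = W = 16
A = torch.rand(128, H * W) * 0.05            # toy projector

def data_fidelity(x, y, alpha=0.1):
    r = A @ x.flatten() - y                  # sinogram-domain residual
    return x - alpha * (A.T @ r).view(H, W)

cnn = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.Conv2d(16, 1, 3, padding=1))

y = A @ torch.rand(H * W)                    # toy sparse-view measurement
x = torch.rand(H, W)                         # initial FBP-like image (toy)
# one cascade at inference; during training only `cnn` is optimized, purely in
# the image domain, and the corrected output feeds the next, separately
# trained cascade
x = cnn(x[None, None])[0, 0]
x = data_fidelity(x.detach(), y)
```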
KEYWORDS: Tumors, Brain, Image segmentation, Diffusion, Neuroimaging, Modeling, Bayesian inference, Magnetic resonance imaging, In vivo imaging, Medical imaging
An accurate prediction of brain tumor progression is crucial for optimized treatment. Gliomas are primarily treated by combining surgery, external beam radiotherapy, and chemotherapy. Among these, radiotherapy is a non-invasive and effective therapy, and an understanding of tumor growth allows better therapy planning. In particular, estimating the parameters associated with tumor growth, such as the diffusion coefficient and proliferation rate, is crucial to accurately characterize the physiology of tumor growth and to develop predictive models of tumor infiltration and recurrence. Accurate parameter estimation, however, is a challenging task due to inaccurate tumor boundaries and the approximations of the tumor growth model. Here, we introduce a Bayesian framework for a subject-specific tumor growth model that estimates the tumor parameters effectively. This is achieved by an improved elliptical slice sampling method based on an adaptive sample region. Experimental results on clinical data demonstrate that the proposed method provides a higher acceptance rate, while preserving parameter estimation accuracy, compared with state-of-the-art methods such as Metropolis-Hastings and unmodified elliptical slice sampling. Our approach has the potential to individualize therapy, thereby offering optimized treatment.
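For reference, a minimal sketch of standard elliptical slice sampling (the baseline the adaptive-region method improves on) for a Gaussian prior N(0, Σ); the paper's adaptive sample region is not reproduced, and the toy likelihood is arbitrary.

```python
import numpy as np

def ess_step(f, log_lik, chol_sigma, rng):
    nu = chol_sigma @ rng.standard_normal(f.shape)   # prior draw defining the ellipse
    log_y = log_lik(f) + np.log(rng.uniform())       # slice threshold
    theta = rng.uniform(0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        fp = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
        if log_lik(fp) > log_y:
            return fp                                # every proposal is accepted
        if theta < 0: lo = theta
        else: hi = theta
        theta = rng.uniform(lo, hi)                  # shrink bracket, retry

# toy usage: N(0, I) prior times a unit-variance Gaussian likelihood at 1
rng = np.random.default_rng(0)
log_lik = lambda f: -0.5 * np.sum((f - 1.0) ** 2)
f, samples = np.zeros(2), []
for _ in range(1000):
    f = ess_step(f, log_lik, np.eye(2), rng)
    samples.append(f)
print(np.mean(samples, 0))                           # ~[0.5, 0.5], posterior mean
```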
Attenuation correction is essential for the quantitative reliability of positron emission tomography (PET) imaging. In time-of-flight (TOF) PET, the attenuation sinogram can be determined up to a global constant from noiseless emission data thanks to the TOF PET data consistency condition. This provides the theoretical basis for jointly estimating both the activity image and the attenuation sinogram/image directly from TOF PET emission data. Several joint estimation methods, such as maximum likelihood activity and attenuation (MLAA) and maximum likelihood attenuation correction factors (MLACF), have been shown to produce improved reconstructions in the TOF case. However, due to the nonconcavity of the joint log-likelihood and the Poisson noise present in PET data, these iterative methods still require proper initialization and well-designed regularization to prevent convergence to local maxima. To address this issue, we propose a joint estimation of the activity image and attenuation sinogram that uses the TOF PET data consistency condition as an attenuation sinogram filter, and we evaluate the performance of the proposed method using computer simulations.
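To make the joint-estimation family concrete, here is a toy non-TOF MLAA-style alternation: an EM step for activity with attenuation fixed, then a gradient-ascent step on the Poisson log-likelihood for attenuation. The TOF consistency-condition filter that is the paper's contribution is not reproduced, and the tiny random matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((200, 64)) * 0.1      # toy geometric projector (activity)
L = rng.random((200, 64)) * 0.05     # toy intersection lengths (attenuation)
lam_t, mu_t = rng.random(64) + 0.5, rng.random(64) * 0.2
y = rng.poisson(np.exp(-L @ mu_t) * (A @ lam_t)).astype(float)

lam, mu = np.ones(64), np.zeros(64)
for _ in range(50):
    att = np.exp(-L @ mu)                              # attenuation factors
    # EM activity update with attenuation fixed: mean = att * (A lam)
    lam *= (A.T @ (y / np.clip(A @ lam, 1e-8, None))) / (A.T @ att)
    # likelihood-ascent attenuation update: dL/dmu = L^T (ybar - y)
    ybar = np.exp(-L @ mu) * (A @ lam)
    mu = np.clip(mu + 1e-3 * (L.T @ (ybar - y)), 0, None)
```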
X-ray luminescence computed tomography (XLCT) is an emerging hybrid imaging modality that provides functional and anatomical images at the same time. Traditional narrow-beam XLCT achieves both high spatial resolution and high sensitivity; however, by treating the CCD camera as a single-pixel detector, this scheme resembles first-generation CT scanners, resulting in long scanning times and high radiation dose. Cone-beam and fan-beam XLCT mitigate this problem by introducing an optical propagation model, but image quality suffers because the inverse problem is ill-conditioned. Much effort has been devoted to improving XLCT image quality through hardware improvements or new reconstruction techniques. The objective of this work is to further enhance an already reconstructed image by introducing anatomical information through retrospective processing. The deblurring process uses a spatially variant point spread function (PSF) model and a joint-entropy-based anatomical prior derived from a CT image acquired on the same XLCT system. A numerical experiment was conducted with a real mouse CT image from the Digimouse phantom as the anatomical prior. The resulting images of bone and lung regions showed sharp edges and good consistency with the CT image. Activity error was reduced by 52.3%, even for nanophosphor lesions as small as 0.8 mm.
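As a simplified illustration of PSF-based post-reconstruction deblurring, the sketch below runs Richardson-Lucy iterations with a spatially *invariant* Gaussian PSF; the paper additionally uses a spatially variant PSF and a joint-entropy anatomical prior, neither of which is reproduced in this toy version.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, sigma, n_iter=50, eps=1e-8):
    x = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / (gaussian_filter(x, sigma) + eps)
        x *= gaussian_filter(ratio, sigma)   # Gaussian PSF is symmetric (K^T = K)
    return x

# toy usage: blur a point-like "lesion" and sharpen it back
img = np.zeros((64, 64)); img[32, 32] = 1.0
x_hat = richardson_lucy(gaussian_filter(img, 2.0), sigma=2.0)
```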
Spectral computed tomography (SCT) generates better image quality than conventional computed tomography (CT) and has overcome several limitations for imaging atherosclerotic plaque. However, the literature evaluating SCT with objective image assessment for the task of discriminating plaques is very limited. We developed a numerical-observer method, used it to assess performance in discriminating vulnerable-plaque features, and compared performance among multi-energy CT (MECT), dual-energy CT (DECT), and conventional CT. Our numerical observer was designed to incorporate all spectral information and comprised two processing stages. First, each energy-window domain was preprocessed by a set of localized channelized Hotelling observers (CHOs); in this step, the spectral image in each energy bin was decorrelated using localized prewhitening and matched filtering with a set of Laguerre–Gaussian channel functions. Second, the intermediate scores from all the CHOs were integrated by a Hotelling observer with an additional prewhitening and matched filter. The overall signal-to-noise ratio (SNR) and the area under the receiver operating characteristic curve (AUC) were obtained, yielding an overall discrimination performance metric. The performance of the new observer was evaluated on the binary classification task of differentiating between alternative plaque characterizations in carotid arteries. A clinically realistic model of signal variability was also included in our simulation of the discrimination tasks; the inclusion of signal variation is key to applying the proposed observer method to spectral CT data. Hence, task-based approaches based on both the signal-known-exactly/background-known-exactly (SKE/BKE) framework and the clinically relevant signal-known-statistically/background-known-exactly (SKS/BKE) framework were applied for analytical computation of figures of merit (FOMs). Simulated data of a carotid-atherosclerosis patient were used to validate our methods: we used an extended cardiac-torso anthropomorphic digital phantom and three simulated plaque types (calcified plaque, fatty-mixed plaque, and iodine-mixed blood). Images were reconstructed using a standard filtered backprojection (FBP) algorithm for all acquisition methods and used for two discrimination tasks: (1) calcified plaque versus fatty-mixed plaque and (2) calcified plaque versus iodine-mixed blood. MECT outperformed DECT and conventional CT for all SKE/BKE and SKS/BKE tasks (all p<0.01). Averaged over signal variability, MECT yielded SNR improvements over the other acquisition methods of 46.8% to 65.3% (all p<0.01) for FBP-Ramp images and 53.2% to 67.7% (all p<0.01) for FBP-Hanning images on both identification tasks. The proposed numerical observer combined with our signal variability framework is promising for assessing the material characterization enabled by the additional energy-dependent attenuation information of SCT, and these methods can be extended to other clinical tasks such as kidney or urinary stone identification.
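A minimal sketch of the first stage, a channelized Hotelling observer with Laguerre-Gauss channels applied to one energy bin; the channel width, channel count, and toy white-noise image statistics are assumptions, and the second-stage integration across energy bins is not reproduced.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(n, n_ch=5, a=15.0):
    """Laguerre-Gauss channels u_p(r) = sqrt(2)/a * exp(-pi r^2/a^2) L_p(2 pi r^2/a^2)."""
    yy, xx = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2, indexing="ij")
    g = 2 * np.pi * (xx**2 + yy**2) / a**2
    return np.stack([np.sqrt(2) / a * np.exp(-g / 2) * eval_laguerre(p, g)
                     for p in range(n_ch)]).reshape(n_ch, -1)

def cho_snr(imgs1, imgs2, U):
    """imgs: (N, n*n) flattened class-1 / class-2 images; U: (n_ch, n*n)."""
    v1, v2 = imgs1 @ U.T, imgs2 @ U.T             # channel outputs
    dmean = v1.mean(0) - v2.mean(0)
    S = 0.5 * (np.cov(v1.T) + np.cov(v2.T))       # intra-class channel covariance
    w = np.linalg.solve(S, dmean)                 # Hotelling template (prewhitening)
    return np.sqrt(dmean @ w)                     # observer SNR

# toy usage: Gaussian-blob signal in white noise
n, N = 32, 200
rng = np.random.default_rng(0)
yy, xx = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2, indexing="ij")
sig = np.exp(-(xx**2 + yy**2) / 8).ravel()
absent = rng.standard_normal((N, n * n))
present = rng.standard_normal((N, n * n)) + 0.5 * sig
print(cho_snr(present, absent, lg_channels(n)))
```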