Deep learning-based image denoising and reconstruction methods have shown promising results for low-dose CT. When high-quality reference images are not available for training, Noise2Noise offers a powerful and effective alternative: it trains the neural network on paired data with independent noise realizations. However, paired CT scans with independent noise (e.g., from two scans) are rarely available. In this paper, we propose a method that generates such paired data for deep learning training by simultaneously simulating, from a single CT scan, a low-dose image at an arbitrary dose level and a second image with independent noise. Their independence is investigated both analytically and numerically. In the numerical study, a Shepp-Logan phantom was used in MATLAB to generate ground-truth, normal-dose, and low-dose reference images. Noise images were obtained by subtracting the ground truth from the noisy images, including the normal-dose and low-dose images and the paired images produced by the proposed method. The numerical results match the analytical results closely, showing that the noise in the paired images is uncorrelated; under the additional assumption that the noise follows a bivariate normal distribution, the paired images are also independent. The proposed method can produce a series of paired images at arbitrary dose levels from a single CT scan, providing a powerful new way to enrich the diversity of low-dose data for deep learning.
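As a rough numerical illustration of pairing noise realizations from a single noisy image, the sketch below uses a recorrupt-and-subtract split in numpy; this is a minimal stand-in, not necessarily the authors' exact simulation procedure, and the phantom, noise level, and scaling factor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth (stand-in for the Shepp-Logan phantom used in the paper)
truth = np.zeros((256, 256))
truth[64:192, 64:192] = 100.0

sigma = 10.0                              # assumed std of the normal-dose noise
normal_dose = truth + rng.normal(0.0, sigma, truth.shape)

# Recorrupt with an independent synthetic noise realization z of the same std.
# c > 0 sets the target (lower) dose of the first image; this choice is illustrative.
c = 1.5
z = rng.normal(0.0, sigma, truth.shape)
low_dose_like = normal_dose + c * z       # noise std = sigma * sqrt(1 + c^2)
paired_image  = normal_dose - z / c       # noise std = sigma * sqrt(1 + 1/c^2)

# The two noise images should be (empirically) uncorrelated.
n1 = (low_dose_like - truth).ravel()
n2 = (paired_image - truth).ravel()
print("correlation coefficient:", np.corrcoef(n1, n2)[0, 1])  # close to 0
```

In this split, choosing a larger c pushes the first image toward a lower simulated dose while keeping the two noise components uncorrelated, which mirrors the "arbitrary dose level" property described above.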
The maximum likelihood (ML) principle has been the gold standard for estimating basis line-integrals owing to its optimal statistical properties. However, the estimates are sensitive to noise arising from large attenuation or low dose levels. One may apply filtering to the estimated basis sinograms or use model-based iterative reconstruction; both approaches effectively reduce noise, but degraded spatial resolution is a concern. In this study, we propose a likelihood-based bilateral filter (LBF) for the estimated basis sinograms that reduces noise while preserving spatial resolution. It is a post-processing filter applied to the ML-based basis line-integrals, i.e., estimates with a high noise level but minimal degradation of spatial resolution. The proposed filter weights neighboring pixels by their likelihood rather than by pixel-value differences as in the original bilateral filter. Two-material decomposition (water and bone) results demonstrate that the proposed method achieves an improved noise versus spatial resolution trade-off compared with conventional methods.
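For reference, the sketch below implements the ordinary bilateral filter that the abstract contrasts against; the inline comment marks where the proposed LBF would replace the pixel-value range kernel with a likelihood-based weight, whose exact form is not specified here. Window size and kernel widths are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Plain bilateral filter (range kernel on pixel values)."""
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # spatial kernel
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight on pixel-value differences; the proposed LBF would
            # use a likelihood-based similarity between neighboring estimates here.
            range_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Example: smooth a noisy synthetic basis sinogram (stand-in data)
noisy = 50.0 + 5.0 * np.random.default_rng(1).standard_normal((64, 90))
filtered = bilateral_filter(noisy)
```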
Liver vessel segmentation is important in diagnosing and treating liver diseases. Iodine-based contrast agents are typically used to improve liver vessel segmentation by enhancing the contrast of vascular structures. However, conventional computed tomography (CT) is still limited by low contrast because it uses energy-integrating detectors. Photon counting detector-based computed tomography (PCD-CT) provides high vascular contrast in CT images by exploiting multi-energy information, thereby allowing accurate liver vessel segmentation. In this paper, we propose a deep learning-based liver vessel segmentation method that takes advantage of the multi-energy information from PCD-CT. We develop a 3D UNet that segments vascular structures within the liver from four energy-bin images, which separate the iodine contrast agent. Experimental results on a simulated abdominal phantom dataset demonstrate that the proposed method with PCD-CT outperforms the standard deep learning segmentation method with conventional CT in terms of Dice overlap score and 3D vascular structure visualization.
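A minimal sketch of the kind of network described, assuming PyTorch, a small two-level 3D UNet, four energy-bin input channels, and a single vessel logit output; the actual depth, width, and training details of the paper's network are not given in the abstract.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3D convolutions with ReLU: the usual UNet building block.
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Two-level 3D UNet sketch (illustrative architecture only)."""
    def __init__(self, in_ch=4, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)               # vessel logits

# Four energy-bin volumes stacked as channels: (batch, 4 bins, D, H, W)
model = TinyUNet3D()
logits = model(torch.randn(1, 4, 32, 64, 64))
```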
Smaller pixel sizes of x-ray photon counting detectors (PCDs) have two conflicting effects. On one hand, smaller pixels improve the ability to handle high x-ray count rates (i.e., pulse pileup), because the incident rate onto each PCD pixel decreases with decreasing pixel size. On the other hand, smaller pixels increase the chance of crosstalk and double-counting (or n-tuple-counting in general) between neighboring pixels: while the physical size of the electron charge cloud generated by a photon is independent of the pixel size, the cloud size relative to the pixel size increases as the pixel size decreases. In addition, actual PCD computed tomography systems involve two practical configurations: N×N-pixel binning and anti-scatter grids. When n-tuple-counting occurs and those data are binned/added during post-acquisition processing, the variance of the data becomes larger than its mean. Anti-scatter grids may eliminate or reduce crosstalk and n-tuple-counting by blocking primary x-rays near pixel boundaries or over the full width of a pixel. In this study, we investigated the effects of PCD pixel size, N×N-pixel binning, and pixel masking on soft tissue contrast visibility using the newly developed Photon Counting Toolkit (PcTK version 3.2; https://pctk.jhu.edu).
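A small numpy sketch of why post-acquisition binning of double-counted data yields a variance larger than the mean; the incident rate and double-counting probability below are illustrative assumptions, not PcTK parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 50.0          # assumed mean photons incident per small pixel per reading
p_double = 0.2      # assumed fraction of photons also counted by the neighboring pixel
n_readings = 200_000

# Photons incident on a pair of neighboring small pixels
photons_a = rng.poisson(lam, n_readings)
photons_b = rng.poisson(lam, n_readings)

# Each photon may additionally deposit a count in the neighboring pixel
shared_ab = rng.binomial(photons_a, p_double)
shared_ba = rng.binomial(photons_b, p_double)
counts_a = photons_a + shared_ba      # own counts plus spill-over from B
counts_b = photons_b + shared_ab

# 2x1 "binning": add the two pixel outputs after acquisition
binned = counts_a + counts_b
print("mean:", binned.mean(), "variance:", binned.var())
# Variance exceeds the mean because double-counted photons contribute to both pixels.
```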
We developed and reported an analytical model (version 2.1) of inter-pixel crosstalk in energy-sensitive photon counting detectors (PCDs) in 2016 [1]. Since then, we have identified four problems inherent to the design of model version 2.1. In this study, we developed a new model (version 3.2; "PcTK" for Photon Counting Toolkit) based on a completely different design concept. Comparison with the previous model version 2.1 and with a Monte Carlo (MC) simulation showed that the new version 3.2 successfully addresses the four problems. A workflow script for computed tomography (CT) image quality assessment demonstrated the utility of the model and its potential value to the CT community. The software package, including the workflow script and built with MATLAB 2016a, has been made available to academic researchers free of charge (PcTK; https://pctk.jhu.edu).
Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of future cardiac events. Current imaging modalities have limitations that can degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. An experiment using a synthesized digital phantom showed promising results for motion analysis.
This study concerns how to model the x-ray transmittance of an object, exp(−∫ μa(r, E) dr), using a small number of energy-dependent bases, which plays an important role in estimating basis line-integrals in photon counting detector (PCD)-based computed tomography (CT). Recently, we found that low-order polynomials can model smooth x-ray transmittance, i.e., for objects without contrast agents, with sufficient accuracy, and we developed a computationally efficient three-step estimator. The algorithm estimates the polynomial coefficients in the first step, estimates the basis line-integrals in the second step, and corrects for bias in the third step. We showed that the three-step estimator was approximately 1,500 times faster than the conventional maximum likelihood (ML) estimator while providing comparable bias and noise. Because the three-step estimator is derived from the modeling of the x-ray transmittance, accurate transmittance modeling is an important issue. For this purpose, we introduce a dictionary learning-based modeling of the x-ray transmittance. We show that the relative modeling error of the dictionary learning-based approach is smaller than that of the low-order polynomials.
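As a toy illustration of the transmittance-modeling step, the sketch below fits polynomials of increasing order to a synthetic smooth transmittance; the attenuation curves, path lengths, and error metric are illustrative assumptions, not the materials, spectra, or criteria used in the study.

```python
import numpy as np

E = np.linspace(40.0, 120.0, 81)          # keV grid (illustrative)

# Made-up smooth attenuation curves (roughly photoelectric 1/E^3 plus a flat
# Compton term), not NIST data; path lengths in cm are also illustrative.
mu_water = 0.2 + 3.0e4 / E**3
mu_bone  = 0.3 + 1.5e5 / E**3
trans = np.exp(-(mu_water * 5.0 + mu_bone * 1.0))   # exp(-integral mu dr)

# How well do low-order polynomials in E model this smooth transmittance?
for order in range(1, 6):
    coef = np.polyfit(E, trans, order)
    model = np.polyval(coef, E)
    rel_err = np.linalg.norm(model - trans) / np.linalg.norm(trans)
    print(f"order {order}: relative modeling error = {rel_err:.2e}")
```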
Photon counting detectors (PCDs) provide spectral information for estimating basis line-integrals; however, the recorded spectrum is distorted by the spectral response effect (SRE). One conventional approach to compensate for the SRE is to incorporate an SRE model into the forward imaging process. For this purpose, we recently developed a three-step algorithm as an approximately 1,500-times-faster alternative to the maximum likelihood (ML) estimator, based on modeling the x-ray transmittance, exp(−∫ μa(r, E) dr), with low-order polynomials. However, it is limited to cases without a K-edge because of the smoothness of low-order polynomials. In this paper, we propose a dictionary learning-based x-ray transmittance model to address this limitation. More specifically, we design a dictionary consisting of several energy-dependent bases to model an unknown x-ray transmittance, training the dictionary on various known x-ray transmittances as training data. We show that the number of bases in the dictionary can be as large as the number of energy bins and that the modeling error is relatively small for a practical number of energy bins. Once the dictionary is trained, the three-step algorithm proceeds as follows: estimating the unknown dictionary coefficients, estimating the basis line-integrals, and then correcting for bias. We validate the proposed method with various simulation studies of K-edge imaging with a gadolinium contrast agent and show that both bias and computational time are substantially reduced compared with those of the ML estimator.
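A hedged sketch of the dictionary idea: a truncated SVD of simulated training transmittances stands in for the authors' dictionary learning (whose details are not reproduced here), and a crude attenuation jump at about 50.2 keV stands in for the gadolinium K-edge; all curves, path lengths, and the error metric are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(40.0, 120.0, 81)                 # keV grid (illustrative)

# Made-up attenuation curves; the gadolinium curve gets a crude step at ~50.2 keV
# to mimic a K-edge (not measured data).
mu_water = 0.2 + 3.0e4 / E**3
mu_bone  = 0.3 + 1.5e5 / E**3
mu_gd    = 0.5 + 8.0e4 / E**3 + 4.0 * (E >= 50.2)

def transmittance(lw, lb, lg):
    # exp(-integral mu dr) for water/bone/gadolinium path lengths in cm
    return np.exp(-(mu_water * lw + mu_bone * lb + mu_gd * lg))

# Training set of known transmittances over random path-length combinations
train = np.stack([transmittance(rng.uniform(2, 25), rng.uniform(0, 3), rng.uniform(0, 1))
                  for _ in range(2000)])

# Energy-dependent bases: mean curve plus leading SVD components
n_bases = 5                                      # e.g., comparable to the number of energy bins
_, _, Vt = np.linalg.svd(train - train.mean(0), full_matrices=False)
D = np.vstack([train.mean(0), Vt[:n_bases - 1]])

# Model an unseen transmittance by least-squares fitting of the dictionary coefficients
test = transmittance(15.0, 1.0, 0.3)
coef, *_ = np.linalg.lstsq(D.T, test, rcond=None)
rel_err = np.linalg.norm(D.T @ coef - test) / np.linalg.norm(test)
print(f"relative modeling error with {n_bases} bases: {rel_err:.2e}")
```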
We have developed a digitally synthesized patient, the "Zach" (Zero millisecond Adjustable Clinical Heart) phantom, which provides access to the ground truth and enables assessment of image-based cardiac functional analysis (CFA) using CT images with clinically realistic settings. A study using the Zach phantom revealed a major problem with image-based CFA: "false dyssynchrony." Even though the true motion of the wall segments is synchronous, it may appear dyssynchronous in the reconstructed cardiac CT images. This is attributed to how cardiac images are reconstructed and how wall locations are updated over cardiac phases. The presence and degree of false dyssynchrony may vary from scan to scan, which could degrade the accuracy and the repeatability (or precision) of image-based CT-CFA exams.
For myocardial perfusion CT exams, beam hardening (BH) artifacts may degrade the accuracy of myocardial perfusion defect detection. Meanwhile, cardiac motion may make the BH process inconsistent, rendering conventional BH correction (BHC) methods ineffective. The aims of this study were to assess the severity of BH and motion artifacts and to propose a projection-based iterative BHC method that has the potential to handle the motion-induced inconsistency better than conventional methods. Four sets of forward projection data were first acquired using both cylindrical phantoms and cardiac images as objects: (1) monochromatic x-rays without motion; (2) polychromatic x-rays without motion; (3) monochromatic x-rays with motion; and (4) polychromatic x-rays with motion. From each dataset, images were reconstructed using filtered back projection; for datasets 2 and 4, one of the following BHC methods was also applied: (A) no BHC; (B) BHC accounting for water only; and (C) BHC accounting for both water and iodine, the iterative method developed in this work. Image bias was quantified by the mean absolute difference (MAD). The MAD of images with BH artifacts alone (dataset 2, without BHC) was comparable to or larger than that of images with motion artifacts alone (dataset 3); in the cardiac image study, BH artifacts accounted for over 80% of the total artifacts. The BHC was effective: with dataset 4, MAD values were 170 HU with no BHC, 54 HU with water-only BHC, and 42 HU with the proposed BHC. Qualitative improvements in image quality were also noticeable in the reconstructed images.
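For clarity, the bias metric used above can be computed as in the minimal sketch below; the optional mask argument and the toy images are illustrative and not taken from the study.

```python
import numpy as np

def mean_absolute_difference(recon_hu, reference_hu, mask=None):
    """MAD in HU between a reconstructed image and its reference.

    `mask` optionally restricts the comparison to a region of interest
    (the specific ROI used in the study is not reproduced here).
    """
    diff = np.abs(np.asarray(recon_hu, float) - np.asarray(reference_hu, float))
    return diff[mask].mean() if mask is not None else diff.mean()

# Toy example with synthetic 2D images (values in HU)
ref = np.zeros((128, 128))
recon = ref + np.random.default_rng(0).normal(0.0, 40.0, ref.shape)
print(f"MAD = {mean_absolute_difference(recon, ref):.1f} HU")
```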
An x-ray photon interacts with a photon counting detector (PCD) and generates one or more electron charge clouds. The clouds (and thus the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near a pixel boundary, producing a count in both pixels. This is called double-counting with charge sharing. The output of an individual PCD pixel is a Poisson-distributed integer count; however, the outputs of adjacent pixels are correlated due to double-counting. The major problems are the lack of a detector noise model for this spatio-energetic crosstalk and the lack of an efficient simulation tool. Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data, but it is not computationally efficient.

In this study, we developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency and incomplete charge collection; (2) photoelectric effect with total absorption; (3) photoelectric effect with fluorescence x-ray emission and re-absorption; (4) photoelectric effect with fluorescence x-ray emission that leaves the PCD entirely; and (5) electronic noise.

The model produced a total detector spectrum similar to previous MC simulation data and can be used to predict spectra and correlations under various settings. The simulated noisy data demonstrated the expected behavior: (a) the data were integers; (b) the mean and covariance matrix were close to the target values; and (c) noisy data generation was very efficient.
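A minimal numpy sketch of how a Poisson random number generator can produce correlated integer counts in two adjacent pixels, using a shared double-counted component; the rates are illustrative assumptions, and the sketch omits the energy-dependent aspects of the full detector model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed mean counts per reading: photons absorbed in pixel A only, in pixel B
# only, and photons split across the A/B boundary (double-counted).
lam_a, lam_b, lam_shared = 80.0, 80.0, 15.0

only_a = rng.poisson(lam_a, n)
only_b = rng.poisson(lam_b, n)
shared = rng.poisson(lam_shared, n)       # each such photon increments both pixels

counts_a = only_a + shared                # integer counts with Poisson marginals
counts_b = only_b + shared

cov = np.cov(counts_a, counts_b)
print("mean A/B:", counts_a.mean(), counts_b.mean())
print("covariance matrix:\n", cov)        # off-diagonal close to lam_shared
```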
In this paper, we review a joint sparse recovery-based reconstruction approach for inverse scattering problems that can solve the nonlinear inverse scattering problem without linearization or iterative Green's function updates. The main idea is to exploit the common support condition of anomalies across multiple illuminations or current injections, after which the unknown potential or field can be estimated using a recursive integral equation relationship. Explicit derivations for electrical impedance tomography and diffuse optical tomography are discussed.
The goal of this paper is to develop novel algorithms for inverse scattering problems such as EEG/MEG, microwave imaging, and diffuse optical tomography. One of the main contributions of this paper is a class of novel non-iterative, exact nonlinear inverse scattering theories for coherent source imaging and moving targets. Specifically, the new algorithms guarantee exact recovery under a very relaxed constraint on the number of sources and receivers, under which conventional methods fail. Such a breakthrough was made possible by the recent theory of compressive MUSIC and its extension using a support correction criterion, in which a partial support is estimated using conventional compressed sensing approaches and the remaining support is estimated using a novel generalized MUSIC criterion. Numerical results using coherent sources in EEG/MEG and dynamic targets confirm that the new algorithms outperform conventional ones.
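To make the MUSIC-style support identification concrete, the sketch below applies the standard joint-sparse (multiple-measurement-vector) MUSIC criterion with a generic random sensing matrix; it is not the compressive MUSIC or generalized MUSIC algorithm of the paper, which targets the harder coherent/rank-deficient case, and all dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, snapshots = 20, 64, 4, 30     # sensors, candidate locations, sources, snapshots

A = rng.standard_normal((m, n))        # generic sensing matrix (stand-in for a lead field)
A /= np.linalg.norm(A, axis=0)

support = rng.choice(n, k, replace=False)            # true source locations
X = np.zeros((n, snapshots))
X[support] = rng.standard_normal((k, snapshots))      # common (joint) support across snapshots
B = A @ X + 0.01 * rng.standard_normal((m, snapshots))

# Signal subspace from the measurements, then the noise-subspace projector.
U, s, _ = np.linalg.svd(B, full_matrices=False)
Us = U[:, :k]
P_noise = np.eye(m) - Us @ Us.T

# MUSIC criterion: true support columns are (nearly) orthogonal to the noise subspace.
spectrum = 1.0 / np.sum((P_noise @ A)**2, axis=0)
estimated = np.argsort(spectrum)[-k:]
print("true support:", sorted(support), "estimated:", sorted(estimated))
```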