Compressed sensing (CS) image reconstruction in CT suffers from two drawbacks: 1) the appearance of staircase artifacts and 2) the loss of image textures and smooth intensity changes. These drawbacks stem from the fact that CS approximates the image by a piecewise-constant function. To overcome them, we have previously proposed a framework to improve image quality in CS using deep learning. In this framework, the FBP reconstructed image and the CS (TV or Nonlocal TV) reconstructed image are input to a CNN with two input channels and a single output channel, and the final reconstructed image is obtained as the output of the CNN. The parameters (weights and biases) of the CNN, together with the regularization parameter of CS, are estimated by minimizing an average least-squares loss function on learning data, i.e. a set of triplets of degraded FBP reconstruction, CS reconstruction, and answer image. In this paper, this framework is extended to 3-D image reconstruction in helical cone-beam CT operated with a low-dose scanning protocol. The extension was done in the following way. First, we prepare N different 2-D denoising CNNs (CNN_1, CNN_2, ..., CNN_N) dependent on the slice position n. Each slice of the short-scan FDK reconstruction without denoising y_i and with 3-D TV (or Nonlocal TV) denoising z_i is input to the CNN_n with the closest slice index n, which yields a corresponding output image for each slice x_i. The final reconstructed image is obtained by stacking every slice x_i (i = 1, 2, ..., I).
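The slice-to-network routing and stacking described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: each trained 2-D denoising CNN is replaced by a hypothetical stand-in (a weighted average of its two input channels), and the placement of the N networks along the slice axis is an assumption, so only the routing and stacking logic is shown.

```python
import numpy as np

# Hypothetical stand-ins for the N trained 2-D denoising CNNs: each "network"
# here is just a weighted average of its two input channels, so the
# slice-to-network routing and stacking can be shown without a DL library.
def make_cnn(weight):
    def cnn(fdk_slice, tv_slice):
        return weight * tv_slice + (1.0 - weight) * fdk_slice
    return cnn

I, N, H, W = 10, 4, 16, 16                  # slices, networks, slice size (toy)
cnns = [make_cnn(w) for w in np.linspace(0.2, 0.8, N)]

rng = np.random.default_rng(0)
y = rng.standard_normal((I, H, W))          # short-scan FDK volume (no denoising)
z = rng.standard_normal((I, H, W))          # 3-D TV (or Nonlocal TV) denoised volume

# Route slice i to the network whose slice index n is closest, as in the text.
net_positions = np.linspace(0, I - 1, N)    # assumed placement of the N networks
out_slices = []
for i in range(I):
    n = int(np.argmin(np.abs(net_positions - i)))
    out_slices.append(cnns[n](y[i], z[i]))

# Final reconstructed volume: stack every output slice x_i (i = 1, ..., I).
x = np.stack(out_slices)
```

Each slice is processed with a two-channel input (undenoised FDK slice, TV-denoised slice) and a single-channel output, mirroring the 2-D framework the abstract describes.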
Recent developments in compressed sensing (CS) and deep learning (DL) have brought significant progress in image reconstruction for sparse-view CT and low-dose CT. However, there still exists strong demand for further improvements in image quality. We propose a new framework for image reconstruction in sparse-view CT and low-dose CT, which significantly outperforms CS and DL in terms of image quality. This advantage originates from combining CS and DL in a successful way as described below, so that each compensates for the other's weakness. The proposed framework is based on the following principle. First, CS image reconstruction using TV (or Nonlocal TV) regularization is performed with M prespecified values of the regularization parameter (β_1, β_2, ..., β_M), which generates M reconstructed images (z_1, z_2, ..., z_M) with varying degrees of TV smoothing. Next, the TV images (z_1, z_2, ..., z_M), together with an FBP reconstruction (no smoothing) y, are input into a CNN with M+1 input channels and a single output channel. The final reconstructed image is obtained as the output of the CNN. To train the network, the CNN parameters (weights and biases) are estimated by minimizing an MSE loss function on learning data, i.e. sets of M+1 input images and the corresponding answer image. In our previous work [11], we proposed a similar framework for the case where the number of input TV images is one. However, we expect that increasing the number of input images as described above will further improve image quality. In this work, we have investigated such an extension. Intuitively, the proposed method combines the good parts of the M+1 input images to synthesize a higher-quality image, and this synthesis is performed by DL. We have performed a simulation study using a dataset of clinical abdominal CT images for 2-D low-dose CT and 2-D sparse-view CT.
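The construction of the M+1-channel CNN input can be sketched as follows. This is a toy illustration under stated assumptions: the CS reconstruction step is replaced by a simple smoothed-TV (ROF-style) denoiser run on a random toy image, and the β values are hypothetical; the point is only to show M TV images with increasing smoothing being stacked with the unsmoothed FBP image y.

```python
import numpy as np

def tv_denoise(y, beta, n_iter=100, step=0.1, eps=1e-3):
    """Smoothed-TV denoising by gradient descent: a simple stand-in for the
    CS/TV reconstruction step. Minimizes
        0.5 * ||z - y||^2 + beta * sum sqrt(|grad z|^2 + eps^2).
    """
    z = y.copy()
    for _ in range(n_iter):
        gx = np.diff(z, axis=1, append=z[:, -1:])   # forward differences
        gy = np.diff(z, axis=0, append=z[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # backward-difference divergence = negative gradient of smoothed TV
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        z -= step * ((z - y) - beta * div)
    return z

rng = np.random.default_rng(0)
y = rng.standard_normal((32, 32))           # noisy FBP reconstruction (toy)
betas = [0.05, 0.2, 0.8]                    # hypothetical beta_1, ..., beta_M
tv_images = [tv_denoise(y, b) for b in betas]

# M+1-channel CNN input: FBP image y plus the M TV images, shape (M+1, H, W).
cnn_input = np.stack([y] + tv_images)
```

Larger β values produce more strongly smoothed channels, so the stacked input spans a range from no smoothing (y) to heavy TV smoothing, which is what lets the network pick the good parts of each.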
The results demonstrate that the proposed combined approach significantly improves image quality compared to using CS or DL alone, both in numerical evaluation (RMSE and SSIM) and in visual evaluation.