As high-resolution displays become more widely available, high-resolution images are increasingly required in Computed Tomography (CT). However, acquiring high-resolution images requires a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These sparse coefficients were then used to generate the high-resolution output.
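As a rough illustration of this reconstruction step, the sketch below assumes that coupled low- and high-resolution dictionaries (here called D_l and D_h) have already been learned from paired training patches; the function name and parameters are illustrative, not taken from the original implementation.

```python
# Minimal sketch of sparse-coding-based patch reconstruction, assuming coupled
# dictionaries D_l (low-resolution) and D_h (high-resolution) were learned
# beforehand from paired LR/HR training patches. Names are illustrative only.
import numpy as np
from sklearn.decomposition import sparse_encode

def scsr_reconstruct_patches(lr_patches, D_l, D_h, alpha=0.1):
    """Recover HR patches from LR patches via a shared sparse code.

    lr_patches : (n_patches, lr_dim) array of vectorized LR patches
    D_l        : (n_atoms, lr_dim) low-resolution dictionary
    D_h        : (n_atoms, hr_dim) high-resolution dictionary
    """
    # Sparse code of each LR patch over the LR dictionary (LASSO-type fit).
    codes = sparse_encode(lr_patches, D_l, algorithm="lasso_lars", alpha=alpha)
    # Applying the same coefficients to the HR dictionary yields HR patches,
    # which would then be tiled and averaged back into the output image.
    return codes @ D_h
```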
For evaluation, 44 CT cases were used as the test dataset. We up-sampled the images by factors of 2 and 4 and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolation, the traditional interpolation schemes. We also compared the image quality obtained with three different training datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. Image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Among the three training datasets, no significant differences were found between the CT, chest radiograph, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality in the enlarged CT images.
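The two reported metrics, PSNR and SSIM, can be computed with standard library routines; the sketch below uses scikit-image as one possible implementation, since the abstract does not specify how the metrics were calculated.

```python
# Illustrative PSNR/SSIM evaluation of an up-sampled image against the
# original high-resolution reference, using scikit-image. This is only a
# sketch of how such a comparison could be scripted.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_upsampling(reference_hr, upsampled):
    """Return (PSNR, SSIM) of `upsampled` relative to `reference_hr`."""
    data_range = float(reference_hr.max() - reference_hr.min())
    psnr = peak_signal_noise_ratio(reference_hr, upsampled, data_range=data_range)
    ssim = structural_similarity(reference_hr, upsampled, data_range=data_range)
    return psnr, ssim
```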
Single-image super-resolution (SR) methods can generate a high-resolution (HR) image from a low-resolution (LR) image by enhancing the image resolution. In medical imaging, HR images have the potential to provide more accurate diagnoses as HR displays come into practical use. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep-learning-based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database and divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained on the training dataset, and the trained network was used to reconstruct an HR image from each LR image.
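For orientation, the original SRCNN of Dong et al. is a three-layer convolutional network (9-1-5 filter sizes with 64 and 32 feature maps) applied to a bicubically up-sampled LR image; the PyTorch sketch below reflects that published architecture, while the training details used for the chest radiographs are not given in the abstract and are therefore omitted.

```python
# Minimal PyTorch sketch of the three-layer SRCNN (Dong et al.): patch
# extraction, non-linear mapping, and reconstruction. The input is a
# bicubically up-sampled LR image of shape (N, channels, H, W).
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.features(x)
```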
We compared the image quality of the SRCNN with that of conventional image interpolation methods: nearest-neighbor, bilinear, and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM). The PSNR and SSIM of the SRCNN scheme were significantly higher than those of the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than the conventional interpolation methods, without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms for enhancing image resolution, and that the SRCNN can yield a substantial improvement in the image quality of magnified chest radiographs.
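The abstracts report the significance of the PSNR/SSIM differences but do not name the statistical test; as one plausible choice, a paired non-parametric comparison over the per-image metric values could be run as sketched below.

```python
# Hedged sketch of a per-image paired comparison of two methods' PSNR (or
# SSIM) values using a two-sided Wilcoxon signed-rank test; the actual test
# used in the studies is not stated in the abstracts.
from scipy.stats import wilcoxon

def compare_methods(metric_method_a, metric_method_b):
    """metric_method_a/b: sequences of per-image PSNR or SSIM values."""
    stat, p_value = wilcoxon(metric_method_a, metric_method_b)
    return p_value
```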
Accurate electronic cleansing (EC) for CT colonography (CTC) enables visualization of the entire colonic surface without residual materials. In this study, we evaluated the accuracy of a novel multi-material electronic cleansing (MUMA-EC) scheme for non-cathartic ultra-low-dose dual-energy CTC (DE-CTC). The MUMA-EC performs a water-iodine material decomposition of the DE-CTC images and calculates virtual monochromatic images at multiple energies, after which a random forest classifier labels the images into regions of lumen air, soft tissue, fecal tagging, and two types of partial-volume boundaries based on image-based features. After the labeling, materials other than soft tissue are subtracted from the CTC images. For a pilot evaluation, 384 volumes of interest (VOIs), which represented sources of subtraction artifacts observed in current EC schemes, were sampled from 32 ultra-low-dose DE-CTC scans. The voxels in the VOIs were labeled manually to serve as a reference standard. The metric of EC accuracy was the mean overlap ratio between the labels of the reference standard and the labels generated by the MUMA-EC, a dual-energy EC (DE-EC), and a single-energy EC (SE-EC) scheme. Statistically significant differences were observed between the performance of the MUMA/DE-EC and the SE-EC methods (p<0.001). Visual assessment confirmed that the MUMA-EC generated fewer subtraction artifacts than did the DE-EC and SE-EC. Our MUMA-EC scheme yielded superior performance over the conventional SE-EC scheme in identifying and minimizing subtraction artifacts in non-cathartic ultra-low-dose DE-CTC images.
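Two pieces of this pipeline lend themselves to a short sketch: the random forest that assigns voxel labels from image-based features, and an overlap ratio between predicted and reference labels. The feature extraction and the dual-energy material decomposition are not reproduced here, the class list is illustrative, and the Jaccard-style overlap below is only an assumed form of the paper's mean overlap ratio.

```python
# Hedged sketch of voxel labeling with a random forest (scikit-learn) and a
# mean per-class overlap ratio between predicted and reference label maps.
# The actual features and the exact overlap definition used in the study may
# differ; the names below are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ["lumen_air", "soft_tissue", "fecal_tagging", "boundary_1", "boundary_2"]

def train_voxel_classifier(features, labels, n_trees=100):
    """features: (n_voxels, n_features); labels: (n_voxels,) integer classes."""
    clf = RandomForestClassifier(n_estimators=n_trees)
    clf.fit(features, labels)
    return clf

def mean_overlap_ratio(pred, reference, n_classes=len(LABELS)):
    """Mean per-class overlap (intersection over union) of two label maps."""
    ratios = []
    for c in range(n_classes):
        p, r = (pred == c), (reference == c)
        union = np.logical_or(p, r).sum()
        if union > 0:
            ratios.append(np.logical_and(p, r).sum() / union)
    return float(np.mean(ratios))
```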