Purpose: Although there are several options for improving the generalizability of learned models, a data instance-based approach is desirable when stable data acquisition conditions cannot be guaranteed. Despite the wide use of data transformation methods to reduce data discrepancies between different data domains, detailed analysis explaining the performance of data transformation methods is lacking.

Approach: This study compares several data transformation methods on the tuberculosis detection task with multi-institutional chest x-ray (CXR) data. Five data transformations were implemented: normalization, standardization with and without lung masking, and multi-frequency-based (MFB) standardization with and without lung masking. A tuberculosis detection network was trained using a reference dataset, and data from six other sites were used for the network performance comparison. To analyze data harmonization performance, we extracted radiomic features and calculated the Mahalanobis distance. We visualized the features with a dimensionality reduction technique. Through similar methods, deep features of the trained networks were also analyzed to examine the models' responses to the data from various sites.

Results: Across the numerical assessments, MFB standardization with lung masking provided the highest network performance for the non-reference datasets. From the radiomic and deep feature analyses, the features of the multi-site CXRs after MFB standardization with lung masking were well homogenized to the reference data, whereas the other methods showed limited performance.

Conclusions: Conventional normalization and standardization showed suboptimal performance in minimizing feature differences among various sites. Our study highlights the strengths of MFB standardization with lung masking in terms of both network performance and feature homogenization.
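The harmonization metric above can be sketched as follows: for each non-reference site, compute the Mahalanobis distance of its radiomic feature vectors to the reference-site feature distribution, so that a smaller mean distance indicates better homogenization. This is a minimal numpy sketch, not the study's actual pipeline; the function name, the regularization term, and the random toy features are assumptions for illustration.

```python
import numpy as np

def mahalanobis_to_reference(ref_feats, site_feats, eps=1e-6):
    """Mean Mahalanobis distance from each site sample to the
    reference feature distribution (its mean and covariance)."""
    mu = ref_feats.mean(axis=0)
    cov = np.cov(ref_feats, rowvar=False)
    cov += eps * np.eye(cov.shape[0])   # regularize a near-singular covariance
    inv_cov = np.linalg.inv(cov)
    diff = site_feats - mu
    # Per-sample squared distance: d_i = diff_i^T * inv_cov * diff_i
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return np.sqrt(d2).mean()

# Toy example: a shifted site should sit farther from the reference.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(200, 5))    # reference-site features
site = rng.normal(0.5, 1.0, size=(100, 5))   # shifted non-reference site
print(mahalanobis_to_reference(ref, site))
```

In this framing, a transformation that harmonizes a site's data well should drive its mean distance toward that of in-distribution reference samples.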
A 2D synthetic radiography image can be computed from the quasi-3D volume image produced by a digital tomosynthesis (DTS) module, obviating the additional radiation exposure of a separate 2D X-ray acquisition. In our earlier work, we developed a prototype DTS system equipped with an array of carbon-nanotube (CNT) X-ray sources. In this work, we develop an algorithm for synthesizing a 2D image from the DTS-reconstructed volume image in the source-array-based DTS system. Since the system uses a 2D array-type source, image artifacts due to out-of-plane structures manifest relatively uniformly in all directions in an image slice, unlike in typical tomosynthesis systems. We developed a smooth-manifold-extraction (SME) based method for 2D image synthesis, a technique that has been used in the field of confocal microscopy. Unlike in microscopy, however, high-density structures exist at varying depths in the human body; the SME algorithm was therefore modified to suit our DTS system.
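The core idea of smooth-manifold extraction can be sketched in a few lines: per pixel, pick the depth of maximum response, smooth that index map so it forms a continuous surface through the volume, then sample the volume along the smoothed surface. This is a simplified illustration of the SME principle, not the paper's modified algorithm; the argmax criterion, box-filter smoothing, and all names here are assumptions.

```python
import numpy as np

def box_smooth(a, k=7):
    """Simple 2D box filter (edge padding) used to enforce smoothness."""
    p = k // 2
    ap = np.pad(a, p, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def extract_smooth_manifold(volume, smooth=7):
    """SME-style 2D synthesis from a DTS volume of shape (depth, rows, cols):
    1) per-pixel depth of maximum response, 2) smooth the depth map into a
    manifold, 3) sample the volume along that manifold."""
    idx = np.argmax(volume, axis=0).astype(float)
    idx = box_smooth(idx, smooth)                       # manifold smoothness
    idx = np.clip(np.round(idx).astype(int), 0, volume.shape[0] - 1)
    rows, cols = np.indices(idx.shape)
    return volume[idx, rows, cols]
```

A real implementation would replace the raw argmax with a local focus or structure measure and handle multiple high-density structures at different depths, which is where the modification described above comes in.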
KEYWORDS: Sensors, Computed tomography, Monte Carlo methods, Image filtering, Data modeling, Data acquisition, Scattering, Quantitative analysis, Optical simulations, Nonlinear filtering
In a cone-beam CT system, the use of a bowtie filter may induce artifacts in the reconstructed images. Through a Monte Carlo simulation study, we confirm that the bowtie filter causes a spatially biased beam-energy difference, thereby creating beam-hardening artifacts. We also note that cupping artifacts may manifest in conjunction with object scatter and additional beam hardening. In this study, we propose a dual-domain network for reducing the bowtie-filter-induced artifacts by addressing their origin. In the projection domain, the network compensates for the filter-induced beam-hardening effects. In the image domain, the network reduces the cupping artifacts that generally appear in cone-beam CT images. In addition, a transfer-learning scheme was adopted in the projection-domain network to reduce the total training cost and to increase utility in practical cases while maintaining the robustness of the dual-domain network. To this end, a projection-domain network pre-trained on simple elliptical-cylinder phantoms was utilized. As a result, the proposed network produces denoised images with enhanced soft-tissue contrast and much-reduced image artifacts. For comparison, a single image-domain U-net was also implemented as an ablation study. The proposed dual-domain network outperforms, in terms of soft-tissue contrast and residual artifacts, the single-domain network, which does not physically consider the cause of the artifacts.
While breast density is known to be one of the critical risk factors for breast cancer, digital breast tomosynthesis (DBT)-based diagnostic performance is known to depend strongly on breast density. As a potential way to increase the diagnostic performance of DBT, we are investigating dual-energy DBT imaging techniques. We estimated the partial path lengths of an x-ray through water, lipid, and protein from the measured dual-energy projection data and the object thickness information. We then reconstructed material-selective DBT images from the material-decomposed projections. The feasibility of the proposed dual-energy DBT scheme was demonstrated using physical phantoms.
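The decomposition step described above amounts to solving a small linear system per detector pixel: two log-attenuation measurements (low and high energy) plus the constraint that the three path lengths sum to the known compressed thickness give three equations in three unknowns. The sketch below assumes this formulation; the attenuation coefficients are hypothetical placeholders, not values from the study, and real spectra would require effective-energy or polychromatic modeling.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) at the low and
# high effective energies; actual values depend on the spectra used.
MU = np.array([
    [0.25, 0.18],   # water   (low, high)
    [0.21, 0.16],   # lipid
    [0.28, 0.20],   # protein
])

def decompose(p_low, p_high, thickness):
    """Solve for water/lipid/protein path lengths t from
        mu_w(E) t_w + mu_l(E) t_l + mu_p(E) t_p = p(E)  for E = low, high
        t_w + t_l + t_p = thickness
    """
    A = np.vstack([MU[:, 0], MU[:, 1], np.ones(3)])
    b = np.array([p_low, p_high, thickness])
    return np.linalg.solve(A, b)

# Forward-project a known composition, then recover it.
t_true = np.array([2.0, 1.5, 0.5])   # cm of water, lipid, protein
p_low, p_high = MU[:, 0] @ t_true, MU[:, 1] @ t_true
print(decompose(p_low, p_high, t_true.sum()))
```

Because the three materials have similar attenuation, the system is ill-conditioned in practice, which is why noise handling matters in measured dual-energy data.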
This work addresses equalization and thickness estimation of the breast periphery in digital breast tomosynthesis (DBT). Breast compression in DBT leads to a relatively uniform thickness in the inner breast but not at the periphery. Proper peripheral enhancement or thickness correction is needed for diagnostic convenience and for accurate volumetric breast density estimation. Such correction methods have been developed, albeit with several shortcomings. We present a thickness correction method based on a supervised learning scheme with a convolutional neural network (CNN), one of the most widely used deep learning structures, to correct the pixel values of the peripheral region. The network was successfully trained and showed robust, satisfactory performance in our numerical phantom study.
KEYWORDS: Image restoration, Reconstruction algorithms, Digital breast tomosynthesis, Digital imaging, Image processing, 3D image processing, Medical imaging, Breast, Computed tomography, Digital x-ray imaging
In digital tomosynthesis, high-density object artifacts such as ripples and undershoots can show up in the reconstructed image in conjunction with the limited-angle problem and may hinder an accurate diagnosis. In this study, we propose an iterative image reconstruction method that reduces such artifacts by using a voting strategy with a data fidelity term that involves derivative data. We confirmed that the voting strategy can help reduce high-density object artifacts in the algebraic iterative reconstruction framework for tomosynthesis and, more importantly, showed that its contribution improves greatly when the derivative data term is jointly used in the cost function. For evaluation, the CIRS breast phantom and a forearm phantom with metal implants were scanned using a prototype digital breast tomosynthesis system and a chest digital tomosynthesis system, respectively.
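One way to read "a data fidelity term that involves derivative data" is a cost that penalizes both the raw residual and its finite difference along the detector. The sketch below shows the gradient of such a joint cost for a generic linear system model; it is an interpretation for illustration only (the paper's exact cost and its voting strategy are not shown here), and all names are assumptions.

```python
import numpy as np

def joint_cost_grad(A, x, b, lam):
    """Gradient of one interpretation of a joint data-fidelity cost
        f(x) = ||Ax - b||^2 + lam * ||D(Ax - b)||^2,
    where D is the forward finite-difference operator along the data axis."""
    r = A @ x - b
    dr = np.diff(r)                     # D r
    # Adjoint of np.diff applied to dr, i.e. D^T (D r):
    dT = np.concatenate([[-dr[0]], dr[:-1] - dr[1:], [dr[-1]]])
    return 2 * A.T @ (r + lam * dT)
```

A gradient-descent or algebraic update built on this gradient weights mismatches in edge content (the derivative term) in addition to raw intensity mismatches, which is the intuition for why it helps near high-density objects.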
Interior tomography, which acquires truncated data of a specific interior region of interest (ROI), is an attractive option for low-dose imaging. However, image reconstruction from such measurements does not yield an accurate solution because of data insufficiency. A host of approaches have been developed to obtain an approximate, useful solution, including various weighting methods, iterative reconstruction methods, and methods that exploit prior knowledge. In this study, we use a deep neural network, which has shown its potential in various fields including medical imaging, to reconstruct interior tomographic images. We assumed an offset-detector geometry, which has wide applications in cone-beam CT (CBCT) imaging for its extended field of view (FOV). We trained a network to synthesize 'ramp-filtered' data within the detector active area so that the corresponding ROI reconstruction would be free of truncation artifacts in the filtered-backprojection (FBP) reconstruction framework. We compared the results with post- and pre-convolution weighting methods and show that the neural network approach outperforms them.
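For context on what the network's target quantity looks like, the ramp (Ram-Lak) filtering step of FBP can be sketched as a frequency-domain multiplication by |f| along each detector row. This is a generic textbook sketch of ramp filtering, not the paper's network or its training target preparation; the function name and padding choice are assumptions.

```python
import numpy as np

def ramp_filter(proj, pixel_pitch=1.0):
    """Apply the ramp (Ram-Lak) filter along detector rows.
    proj: array of shape (n_views, n_det)."""
    n = proj.shape[-1]
    pad = 2 * n                               # zero-pad to limit wraparound
    freqs = np.fft.fftfreq(pad, d=pixel_pitch)
    kernel = np.abs(freqs)                    # |f| ramp response
    P = np.fft.fft(proj, n=pad, axis=-1)
    return np.real(np.fft.ifft(P * kernel, axis=-1))[..., :n]
```

With truncated projections, the filtered values near the detector edge are corrupted because the convolution reaches outside the measured support; synthesizing the correct ramp-filtered data inside the active area, as described above, sidesteps that corruption before backprojection.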