For computed tomography (CT) imaging, it is important that imaging protocols be optimized so that scans are performed at the lowest dose that still yields diagnostic images, minimizing patients' exposure to ionizing radiation. To accomplish this, it is necessary to verify that the image quality of the acquired scan is sufficient for the diagnostic task at hand. Because image quality depends strongly on the characteristics of both the patient and the imager, and both are highly variable, determining the quality threshold from simple parameters such as noise is challenging. In this work, we apply deep learning with a convolutional neural network (CNN) to predict whether CT scans meet the minimal image quality threshold for diagnosis. The dataset consists of 74 cases of high-resolution axial CT scans acquired for the diagnosis of interstitial lung disease, with image quality rated by a radiologist. While the number of cases is relatively small for deep learning tasks, each case consists of more than 200 slices, for a total of 21,257 images. Our approach fine-tunes a pre-trained VGG19 network, which achieves an accuracy of 0.76 (95% CI: 0.748–0.773) and an AUC of 0.78 (SE: 0.01). Although the total number of images is relatively large, the results remain limited by the small number of cases. Despite this limitation, this work demonstrates the potential of deep learning to characterize the diagnostic quality of CT scans.
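To make the fine-tuning approach concrete, a minimal sketch in PyTorch is given below. The abstract specifies only that a pre-trained VGG19 network was fine-tuned for a binary quality label, so the choice of which layers to freeze, the learning rate, and the conversion of CT slices to three-channel 224x224 inputs are illustrative assumptions rather than the authors' actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG19 pre-trained on ImageNet (as described in the abstract).
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Assumption: freeze the convolutional feature extractor and adapt
# only the classifier; the abstract does not specify frozen layers.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final 1000-way ImageNet layer with a two-class head
# (diagnostic vs. non-diagnostic quality).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of CT slices that have been
    resized to 224x224 and replicated to three channels (assumed
    preprocessing to match VGG19's expected input)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on individual slices, as sketched here, is what makes 21,257 images available despite only 74 cases; slices from the same case are correlated, however, which is why the results remain limited by the small number of cases.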
We evaluated the potential of deep learning for assessing breast cancer risk using convolutional neural networks (CNNs) fine-tuned on full-field digital mammographic (FFDM) images. This study included 456 clinical FFDM cases from two high-risk datasets, BRCA1/2 gene-mutation carriers (53 cases) and unilateral cancer patients (75 cases), and a low-risk dataset serving as the control group (328 cases). All FFDM images (12-bit quantization, 100-micron pixel size) were acquired with a GE Senographe 2000D system and were retrospectively collected under an IRB-approved, HIPAA-compliant protocol. Regions of interest of 256x256 pixels were selected from the central breast region behind the nipple in the craniocaudal projection. A VGG19 network pre-trained on the ImageNet dataset was used to classify the images as belonging to high-risk or low-risk subjects; only its last fully-connected layer was fine-tuned on the FFDM images for the risk-assessment task. Performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) in the task of distinguishing between high-risk and low-risk subjects. AUC values of 0.84 (SE = 0.05) and 0.72 (SE = 0.06) were obtained in distinguishing BRCA1/2 gene-mutation carriers from low-risk women and unilateral cancer patients from low-risk women, respectively. Deep learning with CNNs appears able to extract, directly from FFDMs, parenchymal characteristics relevant to distinguishing between cancer-risk populations, and therefore has potential to aid clinicians in assessing mammographic parenchymal patterns for cancer risk assessment.
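The sketch below, in the same PyTorch style, illustrates the specific setup this abstract describes: all pre-trained weights are frozen and only the last fully-connected layer is retrained, with performance summarized by AUC. The ROI preprocessing (resizing the 256x256 ROI to 224x224 and replicating it to three channels) and the sigmoid scoring are assumptions for illustration, not details from the source.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Freeze all pre-trained weights; retrain only the last
# fully-connected layer, as the abstract describes.
for param in model.parameters():
    param.requires_grad = False
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 1)

def prepare_roi(roi_256):
    """Map a 256x256 single-channel FFDM ROI to VGG19's input format.
    The resize and channel replication are assumed preprocessing."""
    x = torch.as_tensor(roi_256, dtype=torch.float32)[None, None]
    x = nn.functional.interpolate(x, size=(224, 224), mode='bilinear',
                                  align_corners=False)
    return x.repeat(1, 3, 1, 1)

@torch.no_grad()
def evaluate_auc(rois, labels):
    """AUC for distinguishing high-risk from low-risk subjects,
    with labels 1 = high-risk and 0 = low-risk."""
    model.eval()
    scores = [torch.sigmoid(model(prepare_roi(r))).item() for r in rois]
    return roc_auc_score(labels, scores)
```

Restricting training to the final layer, as here, treats the frozen ImageNet features as a fixed parenchymal-texture extractor, which is a reasonable choice given only 456 cases.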