Applying quantitative image markers generated by computer-aided detection (CAD) schemes has demonstrated significant advantages over subjective qualitative assessment in supporting translational clinical research. However, although many advanced CAD schemes have been developed, the heterogeneity of medical images makes it difficult to achieve high scientific rigor with "black-box" CAD schemes trained on small datasets. To support and facilitate the research of physician investigators who use quantitative imaging markers, we investigated and tested an interactive approach by developing CAD schemes with interactive functions and visual-aid tools. Unlike fully automated CAD schemes, these interactive CAD (ICAD) tools allow users to visually inspect image segmentation results and provide instructions to correct segmentation errors when needed. Based on the users' instructions, the CAD scheme automatically corrects segmentation errors, recomputes image features, and generates machine learning-based prediction scores. To date, we have installed three ICAD tools in clinical image reading facilities. They help oncologists acquire image markers to predict progression-free survival of ovarian cancer patients undergoing angiogenesis chemotherapies, and help neurologists compute image markers and prediction scores to assess the prognosis of patients diagnosed with aneurysmal subarachnoid hemorrhage or acute ischemic stroke. Using these ICAD tools, clinical researchers have conducted several translational studies analyzing diverse cohorts, which have resulted in seven peer-reviewed papers published in clinical journals over the last three years. Additionally, feedback from physician researchers indicates increased confidence in using the new quantitative image markers and helps medical imaging researchers further improve and optimize the ICAD tools.
Computer-aided detection and/or diagnosis schemes typically include machine learning classifiers trained using either handcrafted features or automated features generated by deep learning models. The objective of this study is to investigate a new method for effectively selecting optimal feature vectors from an extremely large automated feature pool and to assess the feasibility of improving classifier performance by fusing handcrafted and automated feature sets. We assembled a retrospective dataset of 1,535 mammograms, in which 740 and 795 images depict malignant and benign lesions, respectively. For each image, a region of interest (ROI) centered on the lesion is extracted. First, 40 handcrafted features are computed. Two automated feature sets are then extracted from a VGG16 network pretrained on the ImageNet dataset. The first automated feature set is extracted from pseudo-color images created by stacking the original image, a bilateral-filtered image, and a histogram-equalized image; the second is extracted from images created by replicating the original image across three channels. Two fused feature sets are then created by fusing the handcrafted feature set with each automated feature set, respectively. Five linear support vector machines (SVMs) are trained using a 10-fold cross-validation method. The classification accuracy and AUC of the SVMs trained using the fused feature sets are significantly better than those of the SVMs trained using handcrafted or automated features alone (p < 0.05). Study results demonstrate that handcrafted and automated features contain complementary information, so fusing them creates classifiers with improved performance in classifying breast lesions as malignant or benign.
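As a rough illustration of this pipeline, the following sketch (not the authors' code) shows one way to build the pseudo-color input, extract VGG16 deep features, and fuse them with handcrafted features for a linear SVM; the OpenCV/Keras/scikit-learn calls, filter parameters, and variable names such as rois and handcrafted_feats are assumptions for illustration only.

# Minimal sketch, assuming grayscale ROI arrays and precomputed handcrafted features.
import numpy as np
import cv2
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def pseudo_color(roi):
    """Stack original, bilateral-filtered, and histogram-equalized ROI as 3 channels."""
    roi8 = cv2.normalize(roi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    bilateral = cv2.bilateralFilter(roi8, d=9, sigmaColor=75, sigmaSpace=75)
    equalized = cv2.equalizeHist(roi8)
    return np.stack([roi8, bilateral, equalized], axis=-1)

# Pretrained VGG16 without its classification head; global average pooling
# yields one automated feature vector per ROI.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(rois):
    batch = np.array([cv2.resize(pseudo_color(r), (224, 224)) for r in rois], dtype=np.float32)
    return backbone.predict(preprocess_input(batch))

# Fuse handcrafted and automated features, then evaluate a linear SVM with 10-fold CV.
# rois, handcrafted_feats, and labels are placeholders for the study data.
# X = np.hstack([handcrafted_feats, deep_features(rois)])
# aucs = cross_val_score(LinearSVC(max_iter=10000), X, labels, cv=10, scoring="roc_auc")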
Computer-aided detection and/or diagnosis (CAD) schemes typically include machine learning classifiers trained using handcrafted features. The objective of this study is to investigate the feasibility of identifying and applying a new quantitative imaging marker to predict the survival of gastric cancer patients. A retrospective dataset including CT images of 403 patients is assembled, of whom 162 survived more than 5 years. A CAD scheme is applied to segment gastric tumors depicted on multiple CT image slices. After gray-level normalization of each segmented tumor region to reduce image value fluctuation, the publicly available Pyradiomics feature extraction library is used to compute 103 features. To identify an optimal approach for predicting patient survival, we investigate two logistic regression model (LRM)-generated imaging markers: the first fuses image features computed from a single CT slice, and the second fuses the weighted average of image features computed from multiple CT slices. Both LRMs are trained and tested using a leave-one-case-out cross-validation method. Using the LRM-generated prediction scores, receiver operating characteristic (ROC) curves are computed, and the area under the ROC curve (AUC) is used as the index to evaluate performance in predicting patient survival. Study results show case-based AUC values of 0.70 and 0.72 for the image markers fused from single-slice and multi-slice features, respectively. This study demonstrates that (1) radiomics features computed from CT images carry valuable discriminatory information to predict the survival of gastric cancer patients and (2) fusion of quasi-3D image features yields higher prediction accuracy than using simple 2D image features.
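For orientation, the following sketch (again, an assumption-laden illustration rather than the study's implementation) shows how Pyradiomics features can be computed per tumor region and how an LRM marker can be scored with leave-one-case-out cross-validation and AUC; the file paths, the default extractor settings, and the variables X and y are placeholders.

# Minimal sketch, assuming per-case image/mask files and binary 5-year survival labels.
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

# Default extractor; the study's 103-feature configuration would be set via a parameter file.
extractor = featureextractor.RadiomicsFeatureExtractor()

def tumor_features(image_path, mask_path):
    """Return a numeric radiomics feature vector for one normalized tumor region."""
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items() if k.startswith("original_")], dtype=float)

# X: per-case feature matrix (e.g., weighted average over CT slices); y: survival labels.
# scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
#                            cv=LeaveOneOut(), method="predict_proba")[:, 1]
# auc = roc_auc_score(y, scores)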