In recent years, the number of images to be read has increased with the higher resolution of diagnostic imaging devices, increasing the burden on physicians. To address this problem, improvements to the performance of computer-aided diagnosis (CAD) systems have been studied. In this study, we developed an AI-based system for discriminating benign from malignant breast tumors using transfer learning, a deep learning method, and analyzed which factors are necessary to improve the system's diagnostic accuracy. Classification of benign and malignant tumors using diagnostic images achieved an accuracy of 90%, equivalent to discrimination by physicians, but accuracy on medical checkup images was lower at 85%; image comparison revealed that this was due to noise and low contrast. We conclude that addressing these issues is necessary for the construction of a more accurate CAD system for medical checkup images.
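The transfer-learning setup described above can be illustrated with a minimal sketch. This is not the authors' system: synthetic, well-separated feature clusters stand in for the output of a frozen pretrained CNN backbone, and only a small logistic-regression classification head is trained on top of them, which is the core of the transfer-learning pattern.

```python
# Minimal sketch of the transfer-learning pattern: a frozen pretrained
# backbone produces features, and only a small classification head is
# trained. Here synthetic clusters stand in for the backbone's output;
# all sizes and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen-backbone features for benign (label 0) and
# malignant (label 1) patches: two well-separated clusters.
n, d = 200, 16
X = np.vstack([rng.normal(-1.0, 0.5, (n, d)),   # benign
               rng.normal(+1.0, 0.5, (n, d))])  # malignant
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic-regression head trained by gradient descent (backbone frozen).
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

In a real system, `X` would come from a pretrained convolutional backbone and the head would typically be a small dense network trained with a deep learning framework; the frozen-backbone-plus-trainable-head structure is the same.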
In recent years, convolutional neural networks (CNNs) have found increasingly active application in computer-aided diagnosis (CAD) research. Typically, general-purpose, high-performance detectors are designed using machine learning, trained on comprehensive sets of case images with wide variation. In this study, we show that, when configuring CNN training data, dividing the data into multiple subsets and adjusting their ratios, instead of providing the data uniformly, has the potential for more effective learning, and we propose a learning method in which CNN training on these subsets is repeated incrementally. Subsets of breast cancer mass training data were created based on mass size and intensity. Using multiple data sets prepared for evaluating the trained CNN, optimal ratios were examined and, based on this, performance was evaluated on actual unknown data. Next, the ratios of the evaluation-data subsets with numerous detection errors were raised and the CNN retrained. This process was repeated as long as increases in the area under the curve (AUC) were observed, enabling the design of a high-performance CNN. When unknown data were applied to this CNN, it exhibited a higher AUC than a CNN trained simply on the comprehensive data, demonstrating the effectiveness of the proposed learning method.
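The iterative procedure above (raise the ratio of the subset with the most detection errors, retrain, and stop once the AUC no longer improves) can be sketched as the following loop. Everything here is an illustrative stand-in, not the authors' implementation: `train_and_evaluate` and `per_subset_errors` are mock functions, and `DIFFICULTY` is a hypothetical hidden per-subset difficulty that the mock AUC rewards matching.

```python
# Sketch of the incremental subset-ratio relearning loop. The mock AUC
# and error functions below stand in for actually training a CNN on data
# mixed by the given ratios and evaluating it; all values are assumptions.
import numpy as np

DIFFICULTY = np.array([0.5, 0.2, 0.3])   # hypothetical per-subset difficulty

def train_and_evaluate(ratios):
    """Mock train/evaluate step: the AUC improves as the normalized
    subset ratios approach the hidden difficulty profile."""
    r = ratios / ratios.sum()
    return 1.0 - 0.5 * np.abs(r - DIFFICULTY).sum()

def per_subset_errors(ratios):
    """Mock per-subset detection-error rates: under-represented hard
    subsets produce more errors."""
    r = ratios / ratios.sum()
    return np.maximum(DIFFICULTY - r, 0.0)

ratios = np.ones(3)                # start from uniform subset ratios
auc = train_and_evaluate(ratios)
history = [auc]
while True:
    worst = int(np.argmax(per_subset_errors(ratios)))
    trial = ratios.copy()
    trial[worst] += 0.5            # raise the ratio of the worst subset
    new_auc = train_and_evaluate(trial)
    if new_auc <= auc:             # stop once the AUC no longer improves
        break
    ratios, auc = trial, new_auc
    history.append(auc)

print(f"final ratios: {ratios / ratios.sum()}, AUC: {auc:.3f}")
```

In a real pipeline, each loop iteration would involve retraining the CNN on data resampled according to `ratios` and measuring the AUC on held-out evaluation subsets; the control flow of the loop is the point of the sketch.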
To compensate for the shortage of case images needed in the development of computer-aided diagnosis (CAD) systems, work is underway to create artificial case images by embedding tumors and other lesions into lesion-free images. The authors have previously demonstrated the effectiveness of creating artificial case images for hepatic and breast tumors and using them in CAD development. Thus far, however, when training data comprising 50% or more artificial cases has been used in CAD development, the resulting discrimination performance on test data has tended to be somewhat inferior to that obtained when the training data comprises only actual cases.
With the objectives of applying artificial case images to a greater range of sites and of developing a high-performance discriminator using exclusively artificial cases, this study verified their effectiveness for a new target: breast cancer calcifications. Because the characteristics of calcification shadows differ substantially from those of the hepatic and breast tumor shadows studied thus far, a new artificial image creation technique was developed, and artificial cases created with it were applied to CAD development. As a result, a discriminator trained with 100% artificial cases achieved detection performance equal to that of a discriminator trained entirely with actual cases.
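The general idea of embedding a synthetic lesion into a lesion-free patch can be sketched as below. This is not the authors' calcification-specific creation technique, which is not described here; the additive Gaussian-spot model and every parameter are illustrative assumptions only.

```python
# Sketch of embedding an artificial lesion into a lesion-free patch.
# A small bright Gaussian spot is a crude stand-in for a
# microcalcification shadow; all parameters are illustrative.
import numpy as np

def embed_spot(image, center, radius, intensity):
    """Additively embed a bright Gaussian spot at `center` and clip the
    result back to the [0, 1] intensity range."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    spot = intensity * np.exp(-dist2 / (2.0 * radius ** 2))
    return np.clip(image + spot, 0.0, 1.0)

rng = np.random.default_rng(1)
patch = rng.uniform(0.2, 0.4, (64, 64))        # lesion-free background
artificial = embed_spot(patch, (32, 32), 2.0, 0.5)
print(artificial[32, 32] > patch[32, 32])      # brighter at the center
```

A realistic technique would additionally match the lesion's texture, edge profile, and surrounding tissue statistics to actual cases; the sketch only shows the embed-into-lesion-free-image step that the abstract describes.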