Purpose: The survival rate of breast cancer for women in low- and middle-income countries is poor compared with that in high-income countries. Point-of-care ultrasound (POCUS) combined with deep learning could be a suitable solution for enabling early detection of breast cancer. We aim to improve a classification network dedicated to classifying POCUS images by comparing different techniques for increasing the amount of training data.

Approach: Two data sets of breast tissue images were collected, one captured with POCUS and the other with standard ultrasound (US). The data sets were expanded using different techniques, including augmentation, histogram matching, histogram equalization, and cycle-consistent adversarial networks (CycleGANs). A classification network was trained on different combinations of the original and expanded data sets. Several types of augmentation were investigated, and two different CycleGAN approaches were implemented.

Results: Almost all methods for expanding the data sets significantly improved the classification results compared with training the classification network solely on POCUS images. When the classification network was trained on POCUS and CycleGAN-generated POCUS images, it achieved an area under the receiver operating characteristic curve of 95.3% (95% confidence interval 93.4% to 97.0%).

Conclusions: Applying augmentation during training proved important and increased the performance of the classification network. Adding more data also increased performance, with standard US images and CycleGAN-generated POCUS images giving similar results.
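Histogram matching, one of the data-expansion techniques listed above, maps the intensity distribution of one image (e.g., a standard US image) onto that of a reference image (e.g., a POCUS image). The abstract does not specify the implementation; the following is a minimal NumPy sketch of the classic CDF-matching approach:

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source intensities so their histogram matches the reference.

    Classic CDF matching: build the empirical CDFs of both images and
    interpolate each source quantile onto the reference intensity values.
    """
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source quantile, look up the reference intensity at the
    # same quantile, then scatter the result back into the image shape.
    matched_values = np.interp(src_cdf, ref_cdf, ref_values)
    return matched_values[src_idx].reshape(source.shape)

# Illustrative usage on synthetic "US" and "POCUS" intensity images
# (random data standing in for real scans):
rng = np.random.default_rng(0)
us = rng.normal(120, 30, (64, 64)).clip(0, 255)
pocus = rng.normal(80, 20, (64, 64)).clip(0, 255)
matched = match_histograms(us, pocus)
```

After matching, the transformed US image shares the POCUS intensity statistics while keeping its spatial content, which is what makes it usable as additional training data for a POCUS classifier. A library equivalent is `skimage.exposure.match_histograms`.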
Early detection of breast cancer is important to reduce morbidity and mortality. Access to breast imaging is limited in low- and middle-income countries compared with high-income countries, which contributes to advanced-stage presentation of breast cancer with poor survival. A pocket-sized portable ultrasound device, also known as point-of-care ultrasound (POCUS), aided by decision support using deep learning-based algorithms for lesion classification, could be a cost-effective way to enable access to breast imaging in low-resource settings. A previous study that used convolutional neural networks (CNNs) to classify breast cancer in conventional ultrasound (US) images showed promising results. The aim of the present study is to classify POCUS breast images. A POCUS data set containing 1100 breast images was collected. To increase the size of the data set, a cycle-consistent adversarial network (CycleGAN) was trained on US images to generate synthetic POCUS images. A CNN was implemented, trained, validated, and tested on POCUS images. To improve performance, the CNN was trained with different combinations of data consisting of POCUS images, US images, CycleGAN-generated POCUS images, and spatial augmentation. The best result was achieved by a CNN trained on a combination of POCUS images, CycleGAN-generated POCUS images, and augmentation, which achieved a 95% confidence interval for the AUC of 93.5% to 96.6%.