The purpose of this study was to test the generalizability of our sample-efficient framework for detecting biopsy-proven breast lesions on digital breast tomosynthesis (DBT). We developed a sample-efficient breast lesion detection framework using a limited set of biopsied DBT lesions. Instead of relying on a large in-house lesion dataset that only a few groups can access, we used non-biopsied false-positive findings to augment the limited training set. We applied our framework to open-source single-stage and multi-stage convolutional neural network based object detectors to show its generalizability. We then combined the different detector models with an ensemble approach to further improve detection performance. On a challenge validation set, we achieved detection performance (a mean sensitivity of 0.84 per DBT volume and a sensitivity of 0.80 at 2 false positives per image) close to that of a top-ranking algorithm in the DBT lesion detection challenge, which augmented its training set with a large in-house mammogram dataset.
When image data change after routine imaging machine updates, the performance of previously trained deep learning (DL) algorithms can degrade. To mitigate this potential performance degradation, we introduced an image-domain transfer approach using a conditional generative adversarial network (CGAN) that transforms images acquired after the update to match the earlier images. Some studies have proposed domain adaptation (DA) methods that move DL algorithms from the old system to the new system. In contrast, we investigated the suitability of a DA method that transfers images from the current system to the domain of the previous system, so that an existing DL algorithm can be reused on the current system. We validated our domain transfer approach using 1,000 DBT patch volumes (500 lesion; 500 normal) with two distinct image qualities under a virtual clinical trial framework. We curated two DBT image sets of breast patches with distinct image qualities using different image reconstruction settings (simulating two different systems, e.g., previous and current). We then divided the data into training, validation, and testing sets with a ratio of 0.8:0.1:0.1. Using the training set, we developed two CGANs (normal and lesion) for image-domain transformation from the current to the previous system. We fine-tuned a DenseNet121 network as a reference classifier for distinguishing lesion from normal DBT patch volumes using the training set from the previous system. We evaluated our domain-transfer method by testing the reference model on test sets of three qualities: a) previous (SP), b) current (SC), and c) domain-transferred images (DTC2P). The performance of the reference model, an AUC of 1.0 on the previous images, degraded to an AUC of 0.88 on the current images (SC vs. SP: p < 0.005), but was restored to an AUC of 0.97 on the domain-transferred images (SC vs. DTC2P: p < 0.005).
This result demonstrates that our domain transfer approach effectively restores the reference model’s original performance on images of the current quality.
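The three-way evaluation above reduces to scoring the reference classifier on each test set and comparing AUCs. A minimal sketch, using the Mann-Whitney (rank-sum) AUC estimator on toy scores; the numbers here are illustrative placeholders, not the study's classifier outputs:

```python
def auc(pos, neg):
    """Mann-Whitney (rank-sum) estimate of the ROC AUC."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy (lesion, normal) score pairs standing in for classifier outputs
# on each test-set quality described in the abstract.
scores = {
    "SP":    ([0.95, 0.90, 0.92], [0.05, 0.10, 0.08]),  # previous quality
    "SC":    ([0.70, 0.40, 0.80], [0.30, 0.60, 0.20]),  # current quality
    "DTC2P": ([0.90, 0.85, 0.60], [0.15, 0.30, 0.10]),  # domain-transferred
}
aucs = {name: auc(p, n) for name, (p, n) in scores.items()}
```

In the study itself, the AUC differences were additionally tested for statistical significance (the reported p < 0.005 comparisons), which this sketch does not cover.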
The mammary parenchyma is a complex arrangement of tissues that can greatly vary among individuals, potentially masking cancers in breast screening images. In this work, we propose a Simplex-based method to simulate anatomical patterns and textures seen in digital breast tomosynthesis. Our approach involves selecting appropriate Simplex noise parameters to represent distinct categories of breast parenchyma with variable volumetric breast density (%VBD). We use volumetric coarse masks (70 × 60 × 50 mm³) to outline patches of both dense and adipose tissues. These masks serve as a foundation for volumetric and multi-scale Simplex-based noise distributions. The Simplex-based noise distributions are normalized and thresholded using gradient level sets selected to binarize specific Simplex frequencies. The Simplex frequencies are summed and binarized using post-hoc thresholds, resulting in patches of tissue tailored to represent anatomic-like structures seen in digital breast tomosynthesis (DBT) images. We simulate DBT projections and reconstructions of the patches of breast tissue following the acquisition geometry and exposure settings of a clinical tomosynthesis system. We calculate the power spectra and estimate the power-law exponent (β) using a sample of DBT reconstructions (n=500, equally stratified by four density classes). Our findings reveal an absolute β value of 3.0, indicative of the improvements achieved in both the performance and realism of the breast tissue simulation. In summary, our proposed Simplex-based method enhances realism and texture variations, ensuring the presence of anatomical and quantum noise at levels consistent with the image quality expected in breast screening exams.
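The core texture step described above is a sum of multi-scale noise fields that is normalized and then binarized with a post-hoc threshold. A minimal sketch of that pipeline, with smoothed value noise standing in for Simplex noise (NumPy has no Simplex generator) and placeholder octave counts and threshold values rather than the paper's parameters:

```python
import numpy as np

def multi_octave_noise(shape, octaves=4, seed=None):
    """Sum coarse-to-fine random fields, normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    total = np.zeros(shape)
    for o in range(octaves):
        step = 2 ** (octaves - 1 - o)                  # coarse -> fine scales
        coarse_shape = [-(-s // step) for s in shape]  # ceil division
        up = rng.standard_normal(coarse_shape)
        for ax in range(len(shape)):                   # nearest-neighbour upsample
            up = np.repeat(up, step, axis=ax)
        up = up[tuple(slice(0, s) for s in shape)]     # crop to target shape
        total += up * (0.5 ** o)                       # damp finer octaves
    return (total - total.min()) / (total.max() - total.min())

# Small 3D patch; in the paper the masks cover a 70 x 60 x 50 mm^3 volume.
noise = multi_octave_noise((32, 32, 16), seed=0)
tissue_mask = noise > 0.55  # illustrative post-hoc binarization threshold
```

Replacing the value-noise generator with a true Simplex implementation and tuning the frequencies and thresholds per density class would recover the method as described.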
The purpose of this study was to develop a loss function that drives a given CNN to achieve high sensitivity (recall) for identifying women at high risk of having breast cancer. The cross-entropy (CE) loss function is widely used to optimize CNNs for natural scene classification because of its stability. However, CE loss treats each class equally and may therefore be unsuitable for training a CNN toward high sensitivity. We therefore hypothesized that a loss function based on the Fβ-measure, the weighted harmonic mean of precision and recall, can improve the sensitivity of the resulting CNN model by giving more weight to the recall term. To do so, we combined CE loss with the Fβ-measure to implement a task-oriented loss function for achieving high sensitivity. In this preliminary work, we used a screening mammogram dataset of 2,000 scans (1,000 recalled lesion; 1,000 normal). We extracted recalled lesion patches using radiologists’ annotations and normal patches from the center of the breast. We fine-tuned the DenseNet121 network on the image patch dataset with a data split ratio of 0.8:0.1:0.1 for training, validation, and testing. We conducted ROC analysis to evaluate the performance of our proposed model. On the test set, the model trained with the task-oriented loss function achieved an AUC of 0.90, compared with an AUC of 0.88 for CE loss alone. In the high-specificity region, the ROC curve of the proposed loss function reached higher sensitivity (53% at a 98% specificity level) than CE loss alone (41% at the same specificity level).
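A combined CE + Fβ objective of the kind described can be sketched for the binary case by mixing the mean cross-entropy with a differentiable (soft) Fβ term, where β > 1 weights recall over precision. The mixing weight `lam` and β = 2 below are illustrative assumptions, not the study's settings:

```python
import math

def ce_fbeta_loss(y_true, y_prob, beta=2.0, lam=0.5, eps=1e-7):
    """y_true: 0/1 labels; y_prob: predicted probabilities of the lesion class."""
    n = len(y_true)
    # Mean binary cross-entropy.
    ce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
              for t, p in zip(y_true, y_prob)) / n
    # Soft TP/FP/FN counts from probabilities (differentiable surrogate).
    tp = sum(t * p for t, p in zip(y_true, y_prob))
    fp = sum((1 - t) * p for t, p in zip(y_true, y_prob))
    fn = sum(t * (1 - p) for t, p in zip(y_true, y_prob))
    b2 = beta * beta  # beta > 1 penalizes false negatives more (favors recall)
    fbeta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    # Minimize CE and (1 - Fbeta) jointly.
    return lam * ce + (1 - lam) * (1 - fbeta)
```

In a training framework the same expression would be written over tensors so gradients flow through the soft counts; this scalar version only illustrates the composition.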
KEYWORDS: Digital breast tomosynthesis, Detection and tracking algorithms, Performance modeling, Breast, Image segmentation, Algorithm development, Data modeling, Cancer
We report an improved algorithm for detecting biopsy-proven breast lesions on digital breast tomosynthesis (DBT) when the positive samples in the training set are limited. Instead of using a large-scale in-house dataset, our original algorithm used false-positive findings (FPs) from non-biopsied (actionable) images to tackle the limited number of trainable samples. In this study, we further improved the algorithm by fusing multiple weak lesion detection models with an ensemble approach. We used cross-validation (CV) to develop multiple lesion detection models. We first constructed baseline detection algorithms by varying the depth (medium and large) of the convolutional layers in the YOLOv5 algorithm using biopsied samples. We then detected actionable FPs in non-biopsied images using the medium baseline model, and fine-tuned the baseline algorithms using the identified actionable FPs together with the biopsied samples. For lesion detection, we processed the DBT volume slice by slice, then combined the estimated lesions of each slice along the depth of the DBT volume using volumetric morphological closing. Using 5-fold CV, we developed different multi-depth detection models for each depth level. Finally, we built an ensemble algorithm by combining the CV models of different depth levels. Our new algorithm achieved a mean sensitivity of 0.84 per DBT volume on the independent validation set from the DBTex challenge, close to one of the top-performing algorithms that used large in-house data. These results show that our ensemble approach over different CV models is useful for improving the performance of lesion detection algorithms.
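One way to fuse detections from several CV models, sketched below, is to pool all boxes, cluster them by IoU overlap, and average the coordinates and scores of each cluster. This is a generic box-fusion sketch under assumed thresholds; the study's actual ensemble rules may differ:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse(detections, iou_thr=0.5):
    """detections: list of (box, score) pooled from all ensemble members."""
    detections = sorted(detections, key=lambda d: -d[1])  # high score first
    clusters = []
    for box, score in detections:
        for c in clusters:
            if iou(box, c[0][0]) >= iou_thr:  # overlaps cluster seed
                c.append((box, score))
                break
        else:
            clusters.append([(box, score)])
    fused = []
    for c in clusters:
        boxes = [b for b, _ in c]
        scores = [s for _, s in c]
        avg_box = tuple(sum(v) / len(v) for v in zip(*boxes))
        fused.append((avg_box, sum(scores) / len(scores)))
    return fused
```

For example, two overlapping boxes from the medium- and large-depth models collapse into a single averaged detection, while an isolated box survives on its own.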
Detecting surgical tools for an intraoperative surgical navigation system is essential for better coordination among the surgical team in the operating room. Orthopaedic surgery (OS) differs from laparoscopic surgery: its large variety of surgical instruments and techniques makes its procedures complicated. Compared with usual object detection in natural images, OS video images are confounded by inhomogeneous illumination, so it is hard to directly apply existing methods developed for other domains. Additionally, acquiring OS videos is difficult because surgery videos must be recorded in a restricted surgical environment. We therefore propose a deep learning (DL) approach for surgical tool detection in OS videos that integrates knowledge from diverse representative surgery and non-surgery images of tools into the model using transfer learning (TL) and data augmentation. The proposed method was evaluated on five surgical tools using knee surgery images with 10-fold cross-validation. The proposed model (mAP 62.46%) outperforms the conventional model (mAP 60%).