This study investigates the effectiveness of artificial intelligence (AI)-based models in detecting and quantifying Breast Arterial Calcification (BAC) from mammograms, a potential indicator of cardiovascular disease. Two distinct subsets from the OPTIMAM database were used: an enriched dataset of 1683 images previously confirmed by expert readers to contain lesions with non-BAC calcifications, and a ‘normal’ dataset of 1401 representative screening mammography exams, selected from those that were negative on both the included and prior exams. Manual annotation of the calcification data by four readers established ground truth. Two novel BAC detection and quantification models were tested: a baseline model and an enhanced model. The models exhibited promising results, particularly the low false positive rate of the enhanced model at 0.6%, but also highlighted the need for improvements to achieve a balance between sensitivity (51.0%) and specificity (99.4%). Notably, 62% of the findings missed by the enhanced model were classified as single-wall BAC, which is usually scored as minimal owing to its lower association with cardiovascular disease. Future work is required to establish the association of the model performance with clinical outcomes. The study also examined the relationship between BAC prevalence and certain patient characteristics, such as age and Volpara® Density Grade (VDG), in the ‘normal’ screening dataset. Significant correlations were found between BAC volume and patient age, and between BAC prevalence and VDG, which aligns with the existing literature. The findings emphasize the potential of AI to improve the consistency of BAC detection with objective quantitative measures, as well as the developed model’s ability to predict the prevalence of BAC in relation to age.
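The sensitivity, specificity, and false positive rate reported above are related in a fixed way (the false positive rate is simply 100% minus the specificity). A minimal sketch, using illustrative confusion-matrix counts chosen only so the rates match the abstract (the study's actual counts are not given), makes the relationship explicit:

```python
def screening_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, false positive rate) as percentages.

    tp/fn are counts on BAC-positive cases; tn/fp on BAC-negative cases.
    """
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    fpr = 100.0 * fp / (fp + tn)  # always equals 100 - specificity
    return sensitivity, specificity, fpr

# Hypothetical counts that reproduce the abstract's enhanced-model rates.
sens, spec, fpr = screening_metrics(tp=510, fn=490, tn=994, fp=6)
print(round(sens, 1), round(spec, 1), round(fpr, 1))  # 51.0 99.4 0.6
```

The trade-off the abstract notes is visible here: raising sensitivity (moving cases from `fn` to `tp`) is independent of the negatives, but any threshold change that does so in practice typically also moves cases from `tn` to `fp`, lowering specificity.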
Purpose: To introduce a novel technique for pretraining deep neural networks on mammographic images, where the network learns to predict multiple metadata attributes and simultaneously to match images from the same patient and study, and further to demonstrate how this network can be used to produce explainable predictions. Methods: We trained a neural network on a dataset of 85,558 raw mammographic images and seven types of metadata, using a combination of supervised and self-supervised learning techniques. We evaluated the performance of our model on a dataset of 4,678 raw mammographic images using classification accuracy and correlation. We also designed an ablation study to demonstrate how the model can produce explainable predictions. Results: The model learned to predict all but one of the seven metadata fields, with classification accuracy ranging from 78–99% on the validation dataset. The model was able to predict which images were from the same patient with over 93% accuracy on a balanced dataset. Using a simple X-ray system classifier built on top of the first model, the representations learned on the initial X-ray system classification task showed by far the largest effect size on ablation, illustrating a method for producing explainable predictions. Conclusions: It is possible to train a neural network to predict several kinds of mammogram metadata simultaneously. The representations learned by the model for these tasks can be summed to produce an image representation that captures features unique to a patient and study. With such a model, ablation offers a promising method to enhance the explainability of deep learning predictions.
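The ablation idea described above can be sketched concretely: if task-specific representations are summed into one image embedding, then zeroing out one task's component and measuring the change in a downstream classifier's score gives that task's effect size. The sketch below is a toy illustration of this mechanism only; the task names, dimensions, and linear classifier are assumptions for demonstration, not the paper's architecture.

```python
import random

random.seed(0)
DIM = 8
# Illustrative task names; the actual seven metadata fields are not enumerated here.
TASKS = ["x_ray_system", "laterality", "view", "patient_match"]

# Stand-ins for per-task encoder outputs: one DIM-dimensional vector per task.
task_reps = {t: [random.gauss(0, 1) for _ in range(DIM)] for t in TASKS}

def summed_embedding(reps, ablate=None):
    """Sum the task representations, optionally zeroing one task (ablation)."""
    out = [0.0] * DIM
    for task, vec in reps.items():
        if task != ablate:
            out = [a + b for a, b in zip(out, vec)]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A downstream linear classifier score (e.g. an X-ray system classifier).
w = [random.gauss(0, 1) for _ in range(DIM)]
baseline = dot(w, summed_embedding(task_reps))

# Effect size of each task = |change in score| when that task is ablated.
effects = {t: abs(baseline - dot(w, summed_embedding(task_reps, ablate=t)))
           for t in TASKS}
print(max(effects, key=effects.get))
```

Because the embedding is a plain sum, ablating task `t` changes the linear score by exactly the classifier's response to that task's component, which is what makes the per-task effect sizes interpretable.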