This study addresses the morphological classification of the left atrial appendage. Given the diversity and unequal distribution of left atrial appendage categories, we propose a deep learning network based on an attention mechanism. Because the left atrial appendage is a major source of cardioembolic stroke and thromboembolism, its occlusion is an important therapeutic approach, and accurate classification of left atrial appendage morphology is therefore crucial for the success of the surgery. By incorporating the attention mechanism, the model can concentrate on informative image features, improving classification performance and mitigating class-imbalance concerns. Experimental results show that this method outperforms existing techniques in classifying left atrial appendage morphology, achieving an accuracy of 0.6584. It outperforms various classification networks and handles imbalanced data categories well. The findings have clinical significance for preoperative planning and postoperative recovery in left atrial appendage occlusion surgery.
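The abstract does not specify the network architecture. As a rough illustration of the general idea only, the following PyTorch sketch combines a small CNN backbone, a channel-attention block, and a class-weighted cross-entropy loss for imbalanced categories; all layer sizes, the class count, and the class weights are hypothetical, not the authors' settings.

    # Minimal sketch: attention-augmented classifier with class-weighted loss.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style channel attention (illustrative choice)."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):                      # x: (B, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> channel weights
            return x * w[:, :, None, None]         # reweight feature channels

    class LAAClassifier(nn.Module):
        def __init__(self, num_classes=4):         # hypothetical number of morphology classes
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                ChannelAttention(64))
            self.head = nn.Linear(64, num_classes)

        def forward(self, x):
            f = self.features(x).mean(dim=(2, 3))  # global pooling after attention
            return self.head(f)

    # Class-weighted cross entropy is one common way to handle imbalance.
    class_weights = torch.tensor([1.0, 2.5, 1.2, 3.0])   # hypothetical class frequencies
    criterion = nn.CrossEntropyLoss(weight=class_weights)
    model = LAAClassifier()
    logits = model(torch.randn(2, 1, 128, 128))
    loss = criterion(logits, torch.tensor([0, 2]))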
KEYWORDS: Breast, 3D modeling, Breast cancer, Image segmentation, Tumor growth modeling, Ultrasonography, Mammography, Visualization, 3D image processing, Tumors, Ultrasound real time imaging
Breast cancer is the most common form of invasive cancer in women. In recent years, it has become standard practice to evaluate breast masses using ultrasound (US) imaging. Compared with other medical imaging modalities such as MRI, US can accurately distinguish between malignant and benign breast masses when used by skilled radiologists. Human domain knowledge is difficult to incorporate into the diagnosis of breast tumours because it differs greatly from person to person in terms of shape, border, curve, intensity, and other commonly used medical priors. A deep learning model that incorporates visual saliency can now be used to segment breast tumours in ultrasound images. Radiologists use the term "visual saliency" to refer to areas of an image that are more likely to be noticed. The proposed method learns features that prioritise spatial regions with high saliency levels. Validation results show that models with attention layers identify tumours more accurately than those without them. The salient attention model has the potential to improve the accuracy and robustness of medical image analysis by allowing deep learning architectures to incorporate task-specific knowledge. Our new model also achieves better AUC-ROC, Dice score, precision, recall, and IoU scores.
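As an illustration of the general idea only (not the authors' exact salient-attention formulation), the following PyTorch sketch shows a spatial attention layer that learns a per-pixel saliency map and re-weights encoder features with it; the feature shapes are hypothetical.

    # Spatial attention that up-weights salient regions in a segmentation encoder.
    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """Learns a per-pixel saliency map and multiplies it into the features."""
        def __init__(self, channels):
            super().__init__()
            self.saliency = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

        def forward(self, x):                 # x: (B, C, H, W)
            s = self.saliency(x)              # (B, 1, H, W) saliency in [0, 1]
            return x * s, s                   # attended features + map for inspection

    feats = torch.randn(1, 64, 56, 56)        # hypothetical encoder features
    attended, saliency_map = SpatialAttention(64)(feats)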
Heart segmentation is challenging due to the poor image contrast of the heart in CT images. Since manual segmentation of the heart is tedious and time-consuming, we propose an attention-based Convolutional Neural Network (CNN) for heart segmentation. First, one-hot preprocessing is performed on the multi-tissue CT images. A U-Net with attention gates is then applied to obtain the heart region. We compared our method with several CNN methods in terms of the Dice coefficient. Results show that our method outperforms the other segmentation methods.
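The abstract does not detail the gating design; the sketch below assumes an additive attention gate in the style of Attention U-Net applied to a skip connection, with illustrative channel sizes.

    # Additive attention gate applied to a U-Net skip connection (assumed design).
    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        def __init__(self, g_ch, x_ch, inter_ch):
            super().__init__()
            self.W_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)   # gating signal branch
            self.W_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)   # skip-connection branch
            self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())
            self.relu = nn.ReLU()

        def forward(self, g, x):
            # g: coarse decoder features, x: skip features at the same resolution
            alpha = self.psi(self.relu(self.W_g(g) + self.W_x(x)))  # (B, 1, H, W)
            return x * alpha                   # suppress features outside the heart region

    g = torch.randn(1, 128, 32, 32)            # hypothetical decoder features
    x = torch.randn(1, 64, 32, 32)             # hypothetical skip features
    gated_skip = AttentionGate(128, 64, 32)(g, x)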
We propose a novel method for false positive reduction of pulmonary nodules using three-channel samples with different average thicknesses. A three-channel sample contains a patch centered on the candidate point as well as two patches at the k-th slice above and below the candidate point. Three-channel samples include rich spatial contextual information about pulmonary nodules and can be trained with low computational and storage requirements. Convolutional neural networks (CNNs) are constructed and optimized as the feature extractor and classifier of candidates in our study. A fusion method is proposed to combine multiple prediction results for each candidate. Our method reports high sensitivities of 84.8% and 91.4% at 4 and 8 false positives per scan, respectively, on 888 CT scans released by the LUNA16 Challenge. The experimental results show that our method significantly reduces false positives in pulmonary nodule detection.
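As a rough illustration of how such samples and the prediction fusion could be assembled (the patch size, the value of k, and the averaging rule are assumptions, not the paper's settings), consider the following NumPy sketch.

    # Build a three-channel candidate sample and fuse several predictions.
    import numpy as np

    def three_channel_sample(volume, z, y, x, k=2, size=32):
        """Stack the candidate slice with the k-th slices above and below it."""
        half = size // 2
        patches = []
        for dz in (-k, 0, k):
            zi = int(np.clip(z + dz, 0, volume.shape[0] - 1))
            patches.append(volume[zi, y - half:y + half, x - half:x + half])
        return np.stack(patches, axis=0)          # shape: (3, size, size)

    def fuse_predictions(probs):
        """Average several model predictions for the same candidate."""
        return float(np.mean(probs))

    volume = np.random.rand(100, 256, 256)        # hypothetical CT volume
    sample = three_channel_sample(volume, z=50, y=128, x=128)
    score = fuse_predictions([0.91, 0.87, 0.95])  # hypothetical CNN outputs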
In this paper, we propose a semi-automatic pulmonary nodule segmentation algorithm that operates within a region of interest for each nodule. It consists of two parts: unsupervised training of an auto-encoder and supervised training of a segmentation network. Using the auto-encoder's unsupervised learning, we obtain a feature extractor consisting of its encoder part. By adding new neural network layers after the feature extractor and training them in a supervised manner, we obtain the final segmentation network. Compared with the traditional maximum two-dimensional entropy threshold segmentation algorithm, the Dice coefficient of our algorithm is 1%-9% higher in 36 region-of-interest segmentation experiments.
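The two-stage procedure could look roughly like the following PyTorch sketch: an auto-encoder is trained to reconstruct ROI patches, then its encoder is reused with new layers for supervised segmentation. Layer sizes, losses, and patch dimensions are illustrative assumptions.

    # Stage 1: unsupervised auto-encoder; Stage 2: reuse its encoder for segmentation.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
    decoder = nn.Sequential(                      # used only for reconstruction
        nn.Upsample(scale_factor=2), nn.Conv2d(32, 1, 3, padding=1))
    autoencoder = nn.Sequential(encoder, decoder)

    # Stage 1 (unsupervised): train the auto-encoder to reconstruct ROI patches.
    roi = torch.randn(4, 1, 64, 64)
    recon_loss = nn.MSELoss()(autoencoder(roi), roi)

    # Stage 2 (supervised): keep the trained encoder and add a segmentation head.
    seg_head = nn.Sequential(
        nn.Upsample(scale_factor=2), nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    seg_net = nn.Sequential(encoder, seg_head)
    masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
    seg_loss = nn.BCELoss()(seg_net(roi), masks)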
This work presents a method based on a modified extreme learning machine (ELM) with deep convolutional features to detect lung nodules automatically. Convolutional neural networks (CNNs) are employed to extract features of lung nodules for classification. The ELM then detects lung nodules by combining normalization and vote selection. Compared with traditional methods, our method achieves higher performance and can serve as an effective tool for computer-aided diagnosis of lung nodules.
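As a generic illustration of the ELM stage only (the CNN feature extractor, feature dimension, and hidden-layer size are assumed, and the authors' modifications are not reproduced), a standard ELM fixes random hidden weights and solves the output weights in closed form with a pseudo-inverse:

    # Standard ELM trained on CNN feature vectors.
    import numpy as np

    def train_elm(features, labels, hidden=200, seed=0):
        # features: (N, D) CNN descriptors, labels: (N,) in {0, 1}
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(features.shape[1], hidden))     # fixed random input weights
        b = rng.normal(size=hidden)
        H = np.tanh(features @ W + b)                         # hidden-layer activations
        targets = np.eye(2)[labels]                           # one-hot targets
        beta = np.linalg.pinv(H) @ targets                    # closed-form output weights
        return W, b, beta

    def predict_elm(features, W, b, beta):
        return np.argmax(np.tanh(features @ W + b) @ beta, axis=1)

    feats = np.random.rand(50, 128)                           # hypothetical CNN features
    labels = np.random.randint(0, 2, 50)
    W, b, beta = train_elm(feats, labels)
    preds = predict_elm(feats, W, b, beta)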
Our purpose is to develop a vertebra detection scheme for automated scan planning, which would assist radiological technologists in their routine work of imaging vertebrae. Because vertebral orientations vary, and Haar-like features only represent the subject along vertical, horizontal, or diagonal directions, we rotated the CT scout image seven times so that the vertebrae are roughly horizontal in at least one of the rotated images. We then employed the AdaBoost learning algorithm with Haar-like features to construct a strong classifier for vertebra detection, and merged overlapping detection results according to the number of times they were detected. Finally, most of the false positives were removed by using the contextual relationship between detections. The detection scheme was evaluated on a database of 76 CT scout images. Our detection scheme reported 1.65 false positives per image at a sensitivity of 94.3% for initial detection of vertebral candidates; after the further false positive reduction steps, performance improved to 0.95 false positives per image at a sensitivity of 98.6%. The proposed scheme achieved high performance for the detection of vertebrae with different orientations.
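The rotate-then-vote idea could be sketched roughly as follows; the detector here is a stand-in returning a binary detection map, and the paper's AdaBoost classifier with Haar-like features, as well as its exact merging rule, are not re-implemented in this sketch.

    # Accumulate detections over rotated copies of a scout image and keep
    # candidates that are detected repeatedly across orientations.
    import numpy as np
    from scipy.ndimage import rotate

    def rotate_and_detect(image, detector, n_rotations=8, min_votes=2):
        """Seven rotations plus the original image, with vote-based merging."""
        votes = np.zeros_like(image, dtype=float)
        for i in range(n_rotations):
            angle = i * 180.0 / n_rotations            # 0, 22.5, ..., 157.5 degrees
            rotated = rotate(image, angle, reshape=False)
            mask = detector(rotated)                   # binary map of detected regions
            # Rotate the detection map back so votes align with the original image.
            votes += rotate(mask.astype(float), -angle, reshape=False, order=0)
        return votes >= min_votes                      # keep repeatedly detected regions

    # Hypothetical usage with a dummy detector that marks nothing.
    image = np.random.rand(256, 256)
    final_mask = rotate_and_detect(image, detector=lambda img: np.zeros_like(img, dtype=bool))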