KEYWORDS: Image segmentation, Skin, Education and training, Transformers, Deep learning, Data modeling, Performance modeling, Medical imaging, Skin cancer, RGB color model
Accurately segmenting skin lesions from dermoscopy images is crucial for improving the quantitative analysis of skin cancer. However, segmenting melanoma automatically is difficult due to the significant variation in melanoma appearance and the unclear boundaries of lesion areas. While Convolutional Neural Networks (CNNs) have made impressive progress in this area, most existing solutions struggle to effectively capture global dependencies because of their limited receptive fields. Recently, transformers have emerged as a promising tool for modeling global context through a powerful global and local attention mechanism. In this paper, we investigated the effectiveness of various deep learning approaches, both CNN-based and transformer-based, for the segmentation of skin lesions in dermoscopy images. We also studied and compared the performance of transfer learning algorithms built on well-established encoders such as Swin Transformer, Mix-Transformer, Vision Transformer, ResNet, VGG-16, and DenseNet. Our proposed approach involves training a neural network on polar transformations of the original dataset, with the polar origin set to the object’s center point. This simplifies the segmentation and localization tasks and reduces dimensionality, making it easier for the network to converge. The ISIC 2018 dataset, containing 2,594 dermoscopy images with their ground truth segmentation masks, was used to evaluate our approach on the skin lesion segmentation task. This dataset was randomly split into 70%, 10%, and 20% subsets for training, validation, and testing, respectively. The experimental results showed that using polar transformations as a pre-processing step generally improved the performance of both the CNN-based and transformer-based models across the dataset.
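The polar pre-processing step described above can be sketched roughly as follows. This is a minimal pure-NumPy illustration with nearest-neighbor sampling; the function name `to_polar` and the grid sizes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def to_polar(img, center, n_r=64, n_theta=128):
    """Resample a 2-D image onto a polar grid whose origin is `center`
    (e.g., the lesion's center point). Radii span from 0 to the distance
    of the farthest image corner; sampling is nearest-neighbor.
    NOTE: illustrative sketch only, not the paper's actual pipeline."""
    h, w = img.shape
    cy, cx = center
    corners = [(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)]
    r_max = max(np.hypot(cy - y, cx - x) for y, x in corners)
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    # Convert polar coordinates back to Cartesian pixel indices.
    ys = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_r, n_theta)
```

In this representation a roughly circular lesion centered at the polar origin becomes a horizontal band, which is the sense in which the transform simplifies localization for the network.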
Plasmodium malaria is a parasitic protozoan that causes malaria in humans. Computer-aided detection of Plasmodium is a research area attracting great interest. In this paper, we study the performance of various machine learning and deep learning approaches for the detection of Plasmodium in cell images from digital microscopy. We make use of a publicly available dataset composed of 27,558 cell images with equal instances of parasitized (contains Plasmodium) and uninfected (no Plasmodium) cells. We randomly split the dataset into groups of 80% and 20% for training and testing purposes, respectively. We apply color constancy and spatially resample all images to a particular size depending on the classification architecture implemented. We propose a fast Convolutional Neural Network (CNN) architecture for the classification of cell images. We also study and compare the performance of transfer learning algorithms developed based on well-established network architectures such as AlexNet, ResNet, VGG-16 and DenseNet. In addition, we study the performance of the bag-of-features model with a Support Vector Machine for classification. The overall probability of a cell image containing Plasmodium is determined as the average of the probabilities provided by all the CNN architectures implemented in this paper. Our proposed algorithm provided an overall accuracy of 96.7% on the testing dataset and an area under the Receiver Operating Characteristic (ROC) curve of 0.994 for 2,756 parasitized cell images. This type of automated classification of cell images would enhance the workflow of microscopists and provide a valuable second opinion.
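The final decision rule described above, averaging the per-architecture probabilities and thresholding, can be sketched as follows. This is a hedged illustration: the function names and the 0.5 threshold are assumptions, since the abstract only states that probabilities are averaged:

```python
import numpy as np

def average_ensemble(prob_arrays):
    """Unweighted mean of per-model Plasmodium probabilities.
    `prob_arrays` is a list with one probability vector per CNN
    architecture, each of shape (n_images,). Sketch only; the paper
    does not specify weighting details."""
    stacked = np.stack(prob_arrays)  # (n_models, n_images)
    return stacked.mean(axis=0)      # (n_images,)

def classify(prob_arrays, threshold=0.5):
    """Label 1 = parasitized, 0 = uninfected. The 0.5 threshold is an
    illustrative assumption, not taken from the abstract."""
    return (average_ensemble(prob_arrays) >= threshold).astype(int)
```

For example, if two models score an image at 0.9 and 0.7, the ensemble probability is 0.8 and the image is labeled parasitized.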
Conference Committee Involvement (3)
Applications of Machine Learning 2025
3 August 2025 | San Diego, California, United States
Applications of Machine Learning 2024
20 August 2024 | San Diego, California, United States
Applications of Machine Learning 2023
23 August 2023 | San Diego, California, United States