Computed tomography (CT) is commonly used for the characterization and tracking of abdominal muscle mass in surgical patients, both for pre-surgical outcome prediction and post-surgical monitoring of response to therapy. To accurately track changes in abdominal muscle mass, radiologists must manually segment CT slices for each patient, a time-consuming task with potential for variability. In this work, we combined a fully convolutional neural network (CNN) with extensive preprocessing to improve segmentation quality. We used a CNN-based approach to remove patients’ arms and fat from each slice and then applied a series of registrations against a diverse set of abdominal muscle segmentations to identify a best-fit mask. Using this best-fit mask, we removed many structures of the abdominal cavity, such as the liver, kidneys, and intestines. This preprocessing alone achieved a mean Dice similarity coefficient (DSC) of 0.53 on our validation set and 0.50 on our test set, using only traditional computer vision techniques and no artificial intelligence. The preprocessed images were then fed into a CNN similar to one previously presented, in a hybrid computer vision-artificial intelligence approach that achieved a mean DSC of 0.94 on testing data. The combined preprocessing and deep learning-based method accurately segments and quantifies abdominal muscle mass on CT images.
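The Dice similarity coefficient used to evaluate the segmentations above has a standard definition: twice the overlap of the predicted and reference masks divided by their total size. A minimal sketch in Python (the function name and the convention for two empty masks are our own choices, not from the original work):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred & truth| / (|pred| + |truth|)
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: define as perfect agreement (a common convention).
        return 1.0
    return 2.0 * intersection / total

# Identical masks give a DSC of 1.0; disjoint masks give 0.0.
mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))          # 1.0
print(dice_coefficient(mask, 1 - mask))      # 0.0
```

A DSC of 0.94, as reported for the hybrid approach, therefore indicates near-complete overlap between automatic and manual muscle segmentations.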
The World Health Organization recommends visual inspection with acetic acid (VIA) and/or Lugol’s iodine (VILI) for cervical cancer screening in low-resource settings. Human interpretation of diagnostic indicators for visual inspection is qualitative, subjective, and has high inter-observer discordance, which can lead both to adverse outcomes for the patient and to unnecessary follow-ups. In this work, we present a simple method for automatic feature extraction and classification of Lugol’s iodine cervigrams acquired with a low-cost, miniature, digital colposcope. We introduce algorithms to preprocess expert physician-labelled cervigrams and to extract simple but powerful color-based features. The features are used to train a support vector machine model to classify cervigrams based on the expert physician labels. The selected framework achieved a sensitivity, specificity, and accuracy of 89.2%, 66.7%, and 80.6%, respectively, against the majority diagnosis of the expert physicians in discriminating cervical intraepithelial neoplasia (CIN+) from normal tissue. The proposed classifier also achieved an area under the curve of 84% when trained with the majority diagnosis of the expert physicians. The results suggest that simple color-based features may enable unbiased automation of VILI cervigram interpretation, opening the door to a full system of low-cost data acquisition complemented by automatic interpretation.
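The pipeline described above, color-based feature extraction followed by a support vector machine, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific features (per-channel mean and standard deviation), the RBF kernel, the synthetic color distributions, and the use of scikit-learn are all our assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_features(image):
    """Simple color-based features: per-channel mean and std of an
    (H, W, 3) RGB image. Illustrative only; the paper's features may differ."""
    img = np.asarray(image, dtype=float)
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

# Synthetic stand-in data (hypothetical): Lugol's iodine stains normal,
# glycogen-rich tissue dark brown, while CIN+ lesions remain yellow/pale,
# so the two classes are modeled with different mean colors.
rng = np.random.default_rng(0)
normal = [rng.normal([80, 50, 30], 10, (32, 32, 3)) for _ in range(20)]
cin_pos = [rng.normal([200, 180, 60], 10, (32, 32, 3)) for _ in range(20)]

X = np.array([color_features(im) for im in normal + cin_pos])
y = np.array([0] * 20 + [1] * 20)  # 0 = normal, 1 = CIN+

# Standardize features, then fit an SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```

With well-separated color distributions like these, the trained model separates the two classes on the training data; on real cervigrams, performance would be assessed against expert labels, as in the reported sensitivity and specificity figures.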