Automatic cell quantification in microscopy images can accelerate biomedical research. Significant progress has been made in the 3D segmentation of neurons in fluorescence microscopy; however, it remains challenging in bright-field microscopy due to the low signal-to-noise ratio and signals from out-of-focus neurons. Automatic neuron counting in bright-field z-stacks is often performed on Extended Depth of Field (EDF) images or on a single thick focal-plane image, but resolving overlapping cells located at different z-depths remains difficult. The overlap can be resolved by counting every neuron in its best-focus z-plane, exploiting their separation along the z-axis. Unbiased stereology is the state of the art for total cell number estimation, and applying its unbiased counting rule requires a segmentation boundary for each cell; hence, we perform counting via segmentation. We propose to efficiently use a 2D U-Net for inter-image feature learning in a Multiple Input Multiple Output (MIMO) system that segments each neuron in its optimal focal plane by posing the binary segmentation task as a multi-class multi-label task. We demonstrate the accuracy and efficiency of the MIMO approach on a bright-field microscopy z-stack dataset prepared locally by an expert. The proposed MIMO approach is also validated on a dataset from the Cell Tracking Challenge, achieving results comparable to a competing method equipped with memory units. Our z-stack dataset is available at https://tinyurl.com/wncfxn9m.
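The key idea above, that neurons overlapping in x-y can still be separated by assigning each to its best-focus z-plane, can be illustrated with a toy sketch (all data and helper names here are illustrative assumptions, not the authors' implementation): an EDF-style fused binary mask merges two overlapping neurons into one blob, while a per-plane multi-label target with one channel per focal plane keeps them apart.

```python
import numpy as np

# Two synthetic "neurons" that overlap in x-y but are best-focused in
# different z-planes (illustrative data, not from the paper's dataset).
H = W = 8
plane0 = np.zeros((H, W), np.uint8); plane0[2:5, 2:5] = 1  # neuron at z = 0
plane1 = np.zeros((H, W), np.uint8); plane1[3:6, 3:6] = 1  # neuron at z = 1

fused = np.maximum(plane0, plane1)        # EDF-style collapse of the stack
multilabel = np.stack([plane0, plane1])   # per-plane target, shape (K, H, W)

def count_blobs(mask):
    """Count 4-connected foreground components with a simple flood fill."""
    seen = np.zeros_like(mask, bool)
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and not seen[y, x]):
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return n
```

Here `count_blobs(fused)` finds a single merged region, while counting per channel of `multilabel` recovers both neurons, which is the motivation for giving the MIMO network one output channel per focal plane.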
Microglial cell proliferation in neural tissue (neuroinflammation) occurs during infections, neurological disease, neurotoxicity, and other conditions. In basic science and clinical studies, quantification of microglial proliferation requires extensive manual counting (cell clicking) by trained experts (∼2 hours per case). Previous efforts to automate this process have focused on stereology-based estimation of global cell number using deep learning (DL)-based segmentation of immunostained microglial cells at high magnification. To further improve throughput efficiency, we propose a novel approach using snapshot ensembles of convolutional neural networks (CNN) trained on local images, i.e., low (20x) magnification, to predict high or low microglial proliferation at the global level. An expert uses stereology to quantify the global microglia cell number at high magnification, applies a label of high or low proliferation at the animal (mouse) level, then assigns this global label to each 20x image as ground truth for training a CNN to predict global proliferation. To test accuracy, cross-validation was performed with six mouse brains from each class for training and one from each class for testing. The ensemble predictions were averaged, and the test brain was assigned a label based on the predicted class of the majority of images from that brain. The ensemble accurately classified proliferation in 11 of 14 brains (∼80%) in less than a minute per case, without cell-level segmentation or manual stereology at high magnification. This approach shows, for the first time, that training a DL model with local images can efficiently predict microglial cell proliferation at the global level. The dataset used in this work is publicly available at: tinyurl.com/20xData-USF-SRC.
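The brain-level decision rule described above (average the snapshot-ensemble predictions per image, then take a majority vote over a brain's images) can be sketched as follows; the function names and probabilities are illustrative assumptions, not the authors' code.

```python
from collections import Counter

def ensemble_average(member_probs):
    """member_probs: one list per snapshot-ensemble member, each holding
    P(high proliferation) for every 20x image of a brain. Returns the
    per-image probabilities averaged over the ensemble members."""
    n = len(member_probs)
    return [sum(ps) / n for ps in zip(*member_probs)]

def brain_label(image_probs, threshold=0.5):
    """Assign a brain-level label by majority vote over its images.
    Note: with an even image count, Counter resolves ties to the class
    seen first; a real pipeline would define an explicit tie-break."""
    votes = ['high' if p >= threshold else 'low' for p in image_probs]
    return Counter(votes).most_common(1)[0][0]
```

For example, two ensemble members scoring three images as `[0.9, 0.2, 0.6]` and `[0.7, 0.4, 0.8]` average to roughly `[0.8, 0.3, 0.7]`, and the majority of images vote "high".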
Accurate pressure ulcer measurement is critical in assessing the effectiveness of treatment. However, the traditional measuring process is subjective: each health care provider may measure the same wound differently, especially with respect to its depth, and even the same provider may obtain inconsistent measurements when measuring the same wound multiple times. The measuring process also requires frequent contact with the wound, which increases the risk of contamination or infection and can be uncomfortable for the patient. This manuscript describes a new automatic pressure ulcer monitoring system (PrUMS), which uses a tablet connected to a 3D scanner, to provide an objective, consistent, noncontact measurement method. We combine color segmentation on 2D images with 3D surface gradients to automatically segment the wound region for advanced wound measurements. To demonstrate the system, two pressure ulcers on a mannequin are measured with PrUMS; ground truth is provided by a clinically trained wound care nurse. The PrUMS 2D measurements (length and width) are within 1 mm average error with 2 mm standard deviation; the depth measurement has an average error of 2 mm with a standard deviation of 2 mm. PrUMS is also tested on a small pilot dataset of 8 patients: the average errors are 3 mm, 3 mm, and 4 mm in length, width, and depth, respectively.
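A rough sketch of the segmentation-plus-measurement idea follows. Assumptions: a per-pixel "redness" score stands in for the paper's 2D color segmentation, and a simple below-skin-plane test on the scanned height map substitutes for the 3D surface-gradient term; none of this is the PrUMS source, and all names and thresholds are hypothetical.

```python
import numpy as np

def wound_mask(redness, height, color_thresh=0.6):
    """Combine the 2D color cue with the 3D surface: keep pixels that look
    like wound tissue AND lie below a crude skin-plane estimate (here the
    median surface height, a simplification of the gradient-based cue)."""
    skin_level = np.median(height)
    return (redness > color_thresh) & (height < skin_level)

def measure_wound(mask, height, mm_per_px=1.0):
    """Read length/width from the mask's bounding extent and depth as the
    drop from the skin plane to the deepest point inside the wound."""
    ys, xs = np.nonzero(mask)
    length = (ys.max() - ys.min() + 1) * mm_per_px
    width = (xs.max() - xs.min() + 1) * mm_per_px
    depth = float(np.median(height) - height[mask].min())
    return length, width, depth
```

On a synthetic height map with a 4 x 4 pixel, 2 mm deep depression colored as wound tissue, this yields a 4 mm by 4 mm by 2 mm reading (at 1 mm per pixel), mirroring the length/width/depth outputs PrUMS reports.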