We present a fast virtual-staining framework for defocused autofluorescence images of unlabeled tissue, matching the performance of standard virtual-staining models that use in-focus label-free images. For this, we introduced a virtual-autofocusing network to digitally refocus the defocused images; a successive neural network then transformed these refocused images into virtually stained H&E images. Using coarsely-focused autofluorescence images with 4-fold fewer focus points and 2-fold lower focusing precision, we achieved virtual-staining performance equivalent to that of standard H&E virtual-staining networks that use finely-focused images, decreasing the total image acquisition time by ~32% and the autofocusing time by ~89% for each whole-slide image.
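As a rough illustration of this two-stage idea, the following minimal PyTorch sketch chains a refocusing network into a staining network at inference time; the ConvBlock backbone, channel counts, and tile size are hypothetical placeholders, not the authors' released architecture:

    import torch
    import torch.nn as nn

    class ConvBlock(nn.Module):
        """Toy stand-in for the GAN-trained generator backbones (assumed)."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, out_ch, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)

    # Stage 1 (virtual autofocusing): defocused autofluorescence -> in-focus estimate.
    # Stage 2 (virtual staining): refocused autofluorescence -> 3-channel H&E-like image.
    autofocus_net = ConvBlock(in_ch=2, out_ch=2)   # two autofluorescence channels (assumed)
    staining_net = ConvBlock(in_ch=2, out_ch=3)    # RGB virtual H&E

    defocused = torch.rand(1, 2, 256, 256)         # coarsely focused input tile
    with torch.no_grad():
        refocused = autofocus_net(defocused)       # digital refocusing
        virtual_he = staining_net(refocused)       # virtual H&E staining
    print(virtual_he.shape)                        # torch.Size([1, 3, 256, 256])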
We demonstrate a reconfigurable diffractive deep neural network (termed R-D2NN) in which a single physical model performs a large set of distinct permutation operations between its input and output fields-of-view by rotating different layers within the diffractive network. We numerically demonstrated the efficacy of R-D2NN by accurately approximating 256 distinct permutation matrices using 4 rotatable diffractive layers, and experimentally validated the concept using terahertz radiation and 3D-printed diffractive layers, achieving high concordance with our numerical simulations. The reconfigurable design of R-D2NN provides scalability with high computing speed and efficient use of materials within a single fabricated model.
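To make the rotation-based reconfigurability concrete, here is a minimal NumPy sketch of a diffractive network whose phase layers can each sit in one of four 90-degree rotation states; the random layer phases, geometry, and terahertz parameters are illustrative assumptions, not the trained R-D2NN model:

    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Free-space propagation of a complex field by distance z (angular spectrum method)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    # Hypothetical phase layers; each can be placed in one of 4 rotation states
    # (0/90/180/270 degrees), so K layers give 4**K configurations
    # (4 layers -> 256 permutation states, as in the paper).
    rng = np.random.default_rng(0)
    layers = [np.exp(1j * 2 * np.pi * rng.random((128, 128))) for _ in range(4)]
    rotation_state = (0, 1, 3, 2)  # one of the 256 configurations

    field = np.zeros((128, 128), complex)
    field[40, 60] = 1.0  # point source at the input field-of-view
    for phase_layer, r in zip(layers, rotation_state):
        field = angular_spectrum(field, wavelength=0.75e-3, dx=0.4e-3, z=4e-3)
        field = field * np.rot90(phase_layer, k=r)   # rotated diffractive layer
    output_intensity = np.abs(angular_spectrum(field, 0.75e-3, 0.4e-3, 4e-3))**2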
We present a rapid, stain-free, and automated viral plaque assay utilizing deep learning and time-lapse holographic imaging, which significantly reduces the time needed for plaque-forming unit (PFU) detection and entirely bypasses chemical staining and manual counting. Demonstrated with vesicular stomatitis virus (VSV), our system identified the first PFU events as early as 5 hours into incubation and detected >90% of PFUs with 100% specificity in <20 hours, saving >24 hours compared to traditional viral plaque assays that take ≥48 hours. Furthermore, our method adapted seamlessly to new virus types through transfer learning.
Traditional histochemical staining of autopsy tissue samples often suffers from staining artifacts due to autolysis caused by delayed fixation of cadaver tissue. Here, we introduce an autopsy virtual-staining technique that digitally converts autofluorescence images of unlabeled autopsy tissue sections into their hematoxylin and eosin (H&E)-stained counterparts through a trained neural network. We demonstrated that this technique effectively mitigates autolysis-induced artifacts inherent to histochemical staining, such as weak nuclear contrast and color fading in the cytoplasm and extracellular matrix. As a rapid, reagent-efficient, and high-quality histological staining approach, the presented technique holds great potential for widespread application.
We introduce a deep learning-based approach that uses pyramid sampling for automated classification of HER2 status in immunohistochemically (IHC) stained breast cancer tissue images. Pyramid sampling analyzes features across multiple scales of the IHC-stained tissue, managing the computational load effectively while addressing HER2 expression heterogeneity by capturing both detailed cellular features and the broader tissue architecture. Applied to 523 core images, the model achieved a classification accuracy of 85.47%, demonstrating robustness to staining variability and tissue heterogeneity, which could improve the accuracy and timeliness of breast cancer treatment planning.
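A minimal sketch of what pyramid sampling can look like in code; the window sizes, number of levels, and nearest-neighbor downsampling here are illustrative assumptions rather than the paper's exact construction:

    import numpy as np

    def pyramid_sample(image, center, base=128, levels=3):
        """Crop concentric windows at multiple scales around `center` and
        downsample each to base x base, capturing cellular detail (fine
        levels) and tissue architecture (coarse levels) in one stack."""
        cy, cx = center
        crops = []
        for lvl in range(levels):
            half = (base << lvl) // 2          # window side doubles per level
            crop = image[cy - half:cy + half, cx - half:cx + half]
            step = 1 << lvl                    # nearest-neighbor downsampling
            crops.append(crop[::step, ::step])
        return np.stack(crops)                 # (levels, base, base)

    img = np.random.rand(2048, 2048)           # stand-in for an IHC core image
    stack = pyramid_sample(img, center=(1024, 1024))
    print(stack.shape)                         # (3, 128, 128)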
We demonstrate a simple yet highly effective uncertainty quantification method for neural networks solving inverse imaging problems. We built forward-backward cycles using the physical forward model and the trained network, derived the relationship between cycle consistency and the robustness, uncertainty, and bias of the network's inference, and obtained uncertainty estimators through regression analysis. An XGBoost classifier built on these uncertainty estimators was trained for out-of-distribution detection using artificial noise-injected images, and it successfully generalized to unseen real-world distribution shifts. Our method was validated on out-of-distribution detection in image deblurring and image super-resolution tasks, outperforming other deep neural network-based models.
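The following simplified NumPy/XGBoost sketch conveys the cycle-consistency idea: a stand-in forward model (a moving-average blur) and a placeholder network are cycled, per-cycle residuals serve as crude uncertainty features, and an XGBoost classifier is fit on noise-injected images. The paper's estimators come from regression analysis of the cycle dynamics, which this sketch does not reproduce:

    import numpy as np
    from xgboost import XGBClassifier

    def blur(x, k=5):
        """Stand-in physical forward model A: row-wise moving-average blur."""
        kernel = np.ones(k) / k
        return np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, x)

    def network(y):
        """Placeholder for the trained inverse network f (here: identity)."""
        return y

    def cycle_features(y, n_cycles=3):
        """Run forward-backward cycles y -> f(y) -> A(f(y)) -> ... and
        collect the residual against y at each cycle (simplified features)."""
        feats, cur = [], y
        for _ in range(n_cycles):
            cur = blur(network(cur))
            feats.append(np.mean((cur - y) ** 2))
        return feats

    rng = np.random.default_rng(1)
    clean = [blur(rng.random((64, 64))) for _ in range(40)]      # in-distribution
    noisy = [y + rng.normal(0, 0.3, y.shape) for y in clean]     # noise-injected OOD
    X = np.array([cycle_features(y) for y in clean + noisy])
    labels = np.array([0] * 40 + [1] * 40)
    clf = XGBClassifier(n_estimators=50).fit(X, labels)          # OOD detector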
We report label-free, in vivo virtual histology of skin using reflectance confocal microscopy (RCM). We trained a deep neural network to transform in vivo RCM images of unstained skin into virtually stained, H&E-like microscopic images with nuclear contrast. This framework successfully generalized to diverse skin conditions, e.g., normal skin, basal cell carcinoma, and melanocytic nevi, as well as to distinct skin layers, including the epidermis, dermal-epidermal junction, and superficial dermis. This label-free in vivo skin virtual histology framework can be transformative for faster and more accurate diagnosis of malignant skin neoplasms, with the potential to significantly reduce unnecessary skin biopsies.
We present a virtual staining framework that can rapidly stain defocused autofluorescence images of label-free tissue, matching the performance of standard virtual staining models that use in-focus unlabeled images. We trained and blindly tested this deep learning-based framework using human lung tissue. Using coarsely-focused autofluorescence images acquired with 4× fewer focus points and 2× lower focusing precision, we achieved performance equivalent to standard virtual staining that uses finely-focused autofluorescence input images, with a ~32% decrease in the total image acquisition time needed for virtual staining of a label-free whole-slide image and an ~89% decrease in the autofocusing time.
We present a deep learning-based framework that virtually transforms images of H&E-stained tissue into other stain types using cascaded deep neural networks. This method, termed C-DNN, was trained in a cascaded manner: label-free autofluorescence images were fed to the first generator as input and transformed into H&E-stained images; these virtually stained H&E images were then transformed into Periodic acid-Schiff (PAS)-stained images by the second generator. We trained and tested C-DNN on kidney needle-core biopsy tissue, and its output images showed better color accuracy and higher contrast across various histological features compared to other stain-transfer models.
We present a stain-free, rapid, and automated viral plaque assay using deep learning and holography, which requires significantly less sample incubation time than traditional plaque assays. A portable and cost-effective lens-free imaging prototype was built to record the spatio-temporal features of plaque-forming units (PFUs) during their growth, without the need for staining. Our system detected the first cell-lysing events as early as 5 hours into incubation and achieved a >90% PFU detection rate with 100% specificity in <20 hours, saving >24 hours compared to traditional viral plaque assays that take ≥48 hours.
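As a toy illustration of the spatio-temporal detection idea (not the system's trained deep network), the following NumPy sketch flags regions of a time-lapse amplitude stack whose signal change persists and accumulates over frames, a crude proxy for growing lysed areas; the thresholds and sizes are arbitrary assumptions:

    import numpy as np

    def detect_plaque_candidates(stack, diff_thresh=0.15, min_area=50):
        """Flag pixels whose frame-to-frame change exceeds a threshold in a
        time-lapse stack (T, H, W), then track the cumulative flagged area."""
        changed = np.abs(np.diff(stack, axis=0)) > diff_thresh  # (T-1, H, W)
        cumulative = np.cumsum(changed, axis=0) > 0             # once changed, stays flagged
        areas = cumulative.sum(axis=(1, 2))                     # flagged area per frame
        return areas[-1] > min_area, areas

    rng = np.random.default_rng(2)
    stack = rng.normal(1.0, 0.02, (10, 256, 256))  # stand-in hologram amplitudes
    stack[5:, 100:120, 100:120] -= 0.5             # simulated lysis appearing at frame 5
    flag, areas = detect_plaque_candidates(stack)
    print(flag, areas[-1])                         # True, ~400 flagged pixels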
We present a computational mobile imaging device that captures holograms of aerosols flowing through a virtual impactor, a flow-based device that separates and concentrates airborne particles. A differential detection scheme localizes the flowing particles in air, and their auto-focused holograms are classified by a trained neural network without any labels or stains. To test this cost-effective mobile device, we aerosolized different types of pollen (Bermuda, Elm, Oak, Pine, Sycamore, and Wheat) and achieved a blind-testing classification accuracy of 92.91%. This mobile system can be used as a long-term air-quality monitor to automatically count and sense particulate matter and various allergens.
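A minimal sketch of the differential detection principle: the static background cancels in a frame difference, so only holograms of flowing particles survive thresholding. The noise model, threshold, and particle signature below are illustrative assumptions:

    import numpy as np

    def localize_flowing_particles(frame_prev, frame_curr, thresh=6.0):
        """Return pixel coordinates where the frame difference exceeds a
        robust noise threshold, i.e., candidate flowing particles."""
        diff = frame_curr.astype(float) - frame_prev.astype(float)
        score = np.abs(diff)
        sigma = np.median(score) / 0.6745 + 1e-9   # robust noise estimate (MAD)
        ys, xs = np.nonzero(score > thresh * sigma)
        return list(zip(ys.tolist(), xs.tolist()))

    rng = np.random.default_rng(3)
    background = rng.normal(100, 1.0, (512, 512))
    frame_a = background + rng.normal(0, 0.5, background.shape)
    frame_b = background + rng.normal(0, 0.5, background.shape)
    frame_b[200:204, 300:304] += 25                # hologram of a particle in flow
    hits = localize_flowing_particles(frame_a, frame_b)
    print(len(hits))                               # ~16 pixels around the particle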
Reflectance confocal microscopy (RCM) can provide in vivo images of the skin with cellular-level resolution; however, RCM images are grayscale, lack nuclear features, and correlate poorly with histology. We present a deep learning-based virtual staining method that performs non-invasive virtual histology of the skin from in vivo, label-free RCM images. This virtual histology framework produced successful inference for various skin conditions, such as basal cell carcinoma, and covered distinct skin layers, including the epidermis and the dermal-epidermal junction. This method can pave the way for faster and more accurate diagnosis of malignant skin neoplasms while reducing unnecessary biopsies.
Immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) is routinely performed on breast cancer cases to guide immunotherapies and help predict the prognosis of breast tumors. We present a label-free virtual HER2 staining method enabled by deep learning as an alternative digital staining approach. Our blinded, quantitative analysis by three board-certified breast pathologists revealed that assessing HER2 scores from virtually stained HER2 whole-slide images (WSIs) is as accurate as using standard IHC-stained WSIs. This virtual HER2 staining can be extended to other IHC biomarkers to significantly improve disease diagnostics and prognostics.
We present a supervised learning approach to train a deep neural network that transforms images of H&E-stained tissue sections into special stains (e.g., PAS, Jones silver stain, and Masson's Trichrome). In a diagnostic study using tissue sections from 58 subjects covering a variety of non-neoplastic kidney diseases, pathologists who performed their diagnoses using the three virtually created special stains in addition to H&E achieved a statistically significant diagnostic improvement over using H&E alone. This virtual staining technique can improve preliminary diagnoses while saving time and reducing costs.
KEYWORDS: Optical coherence tomography, Image restoration, Neural networks, 3D image reconstruction, Image quality, 3D image processing, Stereoscopy, Spectral resolution, Signal to noise ratio, Imaging systems
We report neural network-based rapid reconstruction of swept-source OCT (SS-OCT) images using undersampled spectral data. We trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using >3-fold undersampled spectral data per A-line, the trained neural network blindly removes the spatial aliasing artifacts caused by spectral undersampling, closely matching the images reconstructed from the full spectral data. This method can be integrated with various swept-source or spectral-domain OCT systems to improve 3D imaging speed without sacrificing the resolution or signal-to-noise ratio of the reconstructed images.
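A one-dimensional NumPy illustration of why spectral undersampling creates the aliasing that the network must remove: decimating the spectral interferogram of an A-line shrinks the unambiguous depth range, folding deep reflectors back into the image (the reflector depth and sampling numbers are arbitrary):

    import numpy as np

    # An SS-OCT A-line is the Fourier transform of the spectral interferogram;
    # fewer spectral samples -> smaller unambiguous depth range -> deep
    # reflectors alias (fold back). Single-reflector toy example:
    n_k = 2048                                   # full spectral samples per A-line
    k = np.arange(n_k)
    depth_bin = 900                              # reflector depth (FFT bin)
    spectrum = np.cos(2 * np.pi * depth_bin * k / n_k)   # interference fringe

    full_aline = np.abs(np.fft.rfft(spectrum))
    under_aline = np.abs(np.fft.rfft(spectrum[::4]))     # 4-fold undersampled (>3x)

    print(np.argmax(full_aline))    # 900 -> true depth
    print(np.argmax(under_aline))   # 124 -> aliased into the reduced depth range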
We virtually generate multiple histological stains using a single deep neural network that takes as input autofluorescence images of the unlabeled tissue alongside a user-defined digital staining matrix. Through this digital staining matrix, the user indicates which stain to apply to each pixel or region of interest, enabling virtual blending of multiple stains according to a desired microstructure map. We demonstrated this technique by applying combinations of different stains (H&E, Masson's Trichrome, and Jones silver stain) to blindly tested, unlabeled tissue sections. This technology bypasses the histochemical staining process and enables newly generated stains and stain combinations to be used for the inspection of label-free tissue microstructure.
We present a method to generate multiple virtual stains on an image of label-free tissue using a single deep neural network guided by a user-defined microstructure map. The input to this network comes from two sources: (i) autofluorescence microscopy images of the unlabeled tissue, and (ii) a user-defined digital staining matrix. This digital staining matrix indicates which stain is to be virtually generated at each pixel and can be used to create a micro-structured stain map or to virtually blend stains together. We experimentally validated this approach through blind testing on label-free kidney tissue sections, successfully generating combinations of H&E, Masson's Trichrome, and Jones silver stains.
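A minimal PyTorch sketch of the conditioning idea: a per-pixel one-hot digital staining matrix is concatenated to the autofluorescence channels so that a single network synthesizes the user-selected stain at each location; the generator, channel counts, and stain codes are hypothetical placeholders:

    import torch
    import torch.nn as nn

    N_STAINS = 3                                    # H&E, Masson's Trichrome, Jones silver
    net = nn.Sequential(                            # stand-in generator (assumed)
        nn.Conv2d(2 + N_STAINS, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),             # RGB virtually stained output
    )

    autofluo = torch.rand(1, 2, 256, 256)           # two autofluorescence channels
    stain_map = torch.zeros(1, N_STAINS, 256, 256)  # user-defined micro-structure map
    stain_map[:, 0, :, :128] = 1                    # left half: stain 0 (H&E)
    stain_map[:, 1, :, 128:] = 1                    # right half: stain 1 (Trichrome)

    with torch.no_grad():
        blended = net(torch.cat([autofluo, stain_map], dim=1))
    print(blended.shape)                            # torch.Size([1, 3, 256, 256])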