We present a cross-modality super-resolution microscopy method based on the generative adversarial network (GAN) framework. Using a trained convolutional neural network, our method takes a low-resolution image acquired with one microscopy modality and super-resolves it to match the resolution of an image of the same sample captured with another, higher-resolution microscopy modality. This cross-modality super-resolution method is purely data-driven, i.e., it does not rely on any knowledge of the image formation model or the point spread function. First, we demonstrated the success of our method by super-resolving wide-field fluorescence microscopy images captured with a low-numerical-aperture objective (NA = 0.4) to match the resolution of images captured with a higher-NA objective (NA = 0.75). Next, we applied our method to confocal microscopy to super-resolve closely spaced nanoparticles and Histone 3 sites within HeLa cell nuclei, matching the resolution of stimulated emission depletion (STED) microscopy images of the same samples. Our method was also validated by super-resolving diffraction-limited total internal reflection fluorescence (TIRF) microscopy images to match the resolution of TIRF-SIM (structured illumination microscopy) images of the same samples, which revealed endocytic protein dynamics in SUM159 cells and amnioserosa tissues of a Drosophila embryo. The super-resolved object features in the network output show strong agreement with the ground-truth SIM reconstructions, which were synthesized using 9 diffraction-limited TIRF images, each acquired with structured illumination. Beyond resolution enhancement, our method also offers an extended depth-of-field and improved signal-to-noise ratio (SNR) in the network-inferred images compared with the corresponding ground-truth images.
KEYWORDS: Microscopy, Super resolution, Luminescence, Image processing, Image resolution, Confocal microscopy, Neural networks, Diffraction, Signal to noise ratio
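As a rough illustration of the GAN-based training described above, the following PyTorch sketch trains a small generator on registered pairs of low- and high-resolution images. The network sizes, the MSE-plus-adversarial loss, and the weight `adv_weight` are illustrative assumptions for a minimal example, not the architecture or loss function used in the work summarized here.

```python
# Minimal sketch of GAN-based image-to-image super-resolution training,
# assuming registered (low-res, high-res) fluorescence image pairs.
# All architectural choices and loss weights below are illustrative placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small fully convolutional generator: low-res image -> super-resolved image."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch-style discriminator scoring whether an image looks like a real high-res image."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(features, features * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(features * 2, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, lowres, highres, adv_weight=0.01):
    """One GAN training step on a batch of registered (low-res, high-res) image pairs."""
    bce = nn.BCEWithLogitsLoss()
    mse = nn.MSELoss()

    # Discriminator update: real high-res images vs. generated images.
    fake = gen(lowres).detach()
    d_real, d_fake = disc(highres), disc(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: pixel-wise fidelity term plus adversarial term.
    fake = gen(lowres)
    d_fake = disc(fake)
    loss_g = mse(fake, highres) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```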
We present a deep learning-based framework for super-resolution image transformations across multiple fluorescence microscopy modalities. By training a neural network within a generative adversarial network (GAN) framework, a single low-resolution image is transformed into a high-resolution image that surpasses the diffraction limit. The deep network's output also exhibits an improved signal-to-noise ratio and an extended depth-of-field. The framework is solely data-driven: it does not rely on any physical model of the image formation process and instead learns a statistical transformation from the training image datasets. The inference process is non-iterative and does not require sweeping over parameters to achieve optimal results, in contrast to state-of-the-art deconvolution methods. The success of this framework is demonstrated by super-resolving wide-field images captured with low-numerical-aperture objective lenses to match the resolution of images captured with high-numerical-aperture objectives. In another example, we demonstrate the transformation of confocal microscopy images into images that match the performance of stimulated emission depletion (STED) microscopy, by super-resolving the distributions of Histone 3 sites within cell nuclei. We also applied this framework to total internal reflection fluorescence (TIRF) microscopy and super-resolved TIRF images to match the resolution of TIRF-based structured illumination microscopy (TIRF-SIM). Our super-resolved TIRF images and movies reveal endocytic protein dynamics in SUM159 cells and amnioserosa tissues of a Drosophila embryo, closely matching TIRF-SIM images and movies of the same samples. Our experimental results demonstrate that the presented data-driven super-resolution approach generalizes to new types of images and super-resolves objects that were not present during training.
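To illustrate the non-iterative inference emphasized above, the short sketch below runs a single feed-forward pass through a trained generator. The TorchScript checkpoint file name "generator_ts.pt" and the random stand-in input are hypothetical placeholders for a saved trained model and a diffraction-limited input frame.

```python
# Minimal sketch of single-pass inference with a previously trained generator,
# assuming the model was exported as a TorchScript module ("generator_ts.pt" is
# a hypothetical file name used here for illustration).
import torch

gen = torch.jit.load("generator_ts.pt").eval()

with torch.no_grad():
    lowres = torch.rand(1, 1, 256, 256)   # stand-in for a diffraction-limited input frame
    superres = gen(lowres)                # single feed-forward pass, no iterations or parameter sweeps
```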
Digitally decoding the phase and amplitude images of a sample from its hologram requires autofocusing and phase-recovery steps, which are in general computationally demanding. Here, we demonstrate fast and robust autofocusing and phase recovery performed simultaneously by a deep convolutional neural network (CNN). This CNN is trained with pairs of randomly defocused back-propagated holograms and their corresponding in-focus, phase-recovered images (used as ground truth). After training, the CNN takes a single back-propagated hologram and outputs an extended depth-of-field (DOF) complex-valued image in which all the objects or points of interest within the sample volume are autofocused and phase-recovered in parallel. Compared with iterative image reconstruction or a CNN trained using only in-focus images, this approach achieves a >25-fold increase in image DOF and eliminates the need to autofocus individual points within the sample volume, reducing the complexity of holographic image reconstruction from O(nm) to O(1), where n is the number of individual object points within the sample volume and m is the size of the autofocusing search space. We demonstrated the success of this approach by imaging various samples, including aerosols and human breast tissue sections. Our results highlight some of the unique capabilities of data-driven, deep learning-based image reconstruction methods.
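For context on the "back-propagated hologram" inputs mentioned above, the sketch below implements the standard angular-spectrum method for free-space propagation, a common way to numerically refocus an in-line hologram to a candidate depth before any learning-based processing. The wavelength, pixel size, and propagation distance are placeholder values, and the random array is a stand-in for a recorded hologram; none of these are parameters from the work summarized here.

```python
# Minimal sketch of angular-spectrum free-space back-propagation of a hologram.
# Wavelength, pixel size, and z are illustrative placeholder values.
import numpy as np

def angular_spectrum_propagate(field, z, wavelength=532e-9, pixel=1.12e-6):
    """Propagate a complex field by distance z (negative z back-propagates)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function of free space; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: back-propagate a measured hologram (amplitude only) by 1 mm.
hologram = np.random.rand(512, 512)                 # stand-in for a recorded hologram
backprop = angular_spectrum_propagate(np.sqrt(hologram), z=-1e-3)
```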
Mobile-phone-based microscopy often relies on 3D-printed opto-mechanical designs and inexpensive optical components that are not optimized for microscopic imaging of specimens. For example, the illumination source is often a battery-powered LED, which can create spectral distortions in the acquired image. Mechanical misalignment of the optical components and the sample holder, together with inexpensive lenses, leads to spatial distortions at the microscale. Furthermore, mobile phones are equipped with CMOS image sensors with a pixel size of ~1-2 µm, resulting in an inferior signal-to-noise ratio compared with benchtop microscopes, which are typically equipped with much larger pixels, e.g., ~5-10 µm.
Here, we demonstrate a supervised learning framework, based on a deep convolutional neural network, for substantially enhancing smartphone microscope images by eliminating spectral aberrations, increasing the signal-to-noise ratio, and improving the spatial resolution of the acquired images. Once trained, the deep neural network is fixed and rapidly outputs an image matching the quality of a benchtop microscope image, in a feed-forward, non-iterative manner, without the need for any modeling of the aberrations in the mobile imaging system. This framework is demonstrated using pathology slides of thin tissue sections and blood smears, validating its performance even on highly compressed images, which is especially suitable for telemedicine applications with restricted bandwidth and storage requirements. This deep learning-powered approach can be broadly applicable to various mobile microscopy systems used for point-of-care medicine and global health applications.
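A minimal sketch of the kind of supervised training described above is given below, assuming registered pairs of smartphone and benchtop microscope image patches. The simple three-layer RGB network and the plain L1 loss are illustrative assumptions for the sketch, not the architecture or loss reported in the work summarized here.

```python
# Minimal sketch of supervised training on registered smartphone/benchtop image
# pairs; network and loss are illustrative placeholders.
import torch
import torch.nn as nn

enhancer = nn.Sequential(                         # smartphone image -> enhanced image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
l1 = nn.L1Loss()

def train_step(phone_batch, benchtop_batch):
    """One supervised update: push the network output toward the benchtop image."""
    output = enhancer(phone_batch)
    loss = l1(output, benchtop_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data (batch of 4 RGB patches, 128x128 pixels).
loss = train_step(torch.rand(4, 3, 128, 128), torch.rand(4, 3, 128, 128))
```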