We introduce the enhanced Fourier Imager Network (eFIN), an end-to-end deep neural network that synergistically integrates physics-based propagation models with data-driven learning for highly generalizable hologram reconstruction. eFIN overcomes a key limitation of existing methods by performing seamless autofocusing across a large axial range without requiring a priori knowledge of sample-to-sensor distances. Moreover, eFIN incorporates a physics-informed sub-network that accurately infers unknown axial distances through an innovative loss function. eFIN can also achieve a three-fold pixel super-resolution, increasing the space-bandwidth product by nine-fold and enabling substantial acceleration of image acquisition and processing workflows with a negligible performance penalty.
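The physics-based propagation model that such hologram reconstruction networks build on is typically free-space propagation via the angular spectrum method. A minimal sketch of that propagation step is shown below; the function name and exact formulation are illustrative and may differ from the eFIN implementation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, z):
    """Propagate a complex optical field by an axial distance z using the
    angular spectrum method, a standard free-space model in digital
    holography. Evanescent frequency components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in the free-space transfer function.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating forward by z and then backward by -z recovers the original field for propagating (non-evanescent) components, which is the invertibility property that autofocusing over an axial range relies on.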
KEYWORDS: Holography, Physics, Machine learning, Education and training, Deep learning, Data modeling, Biological samples, Biological imaging, Statistical modeling, Imaging systems
We present GedankenNet, a self-supervised learning framework designed to eliminate reliance on experimental training data for holographic image reconstruction and phase retrieval. Analogous to thought (Gedanken) experiments in physics, the training of GedankenNet is guided by the consistency of the physical laws governing holography, without any experimental data or prior knowledge regarding the samples. When blindly tested on experimental data from various biological samples, GedankenNet generalized well, outperforming existing supervised models in external generalization. We further showed the robustness of GedankenNet to perturbations in the imaging hardware, including unknown changes in the imaging distance, pixel size and illumination wavelength.
We present subwavelength imaging of amplitude- and phase-encoded objects based on a solid-immersion diffractive processor designed through deep learning. Subwavelength features from the objects are resolved by the collaboration between a jointly-optimized diffractive encoder and decoder pair. We experimentally demonstrated the subwavelength-imaging performance of solid immersion diffractive processors using terahertz radiation and achieved all-optical reconstruction of subwavelength phase features of objects (with linewidths of ~λ/3.4, where λ is the wavelength) by transforming them into magnified intensity images at the output field-of-view. Solid-immersion diffractive processors would provide cost-effective and compact solutions for applications in bioimaging, sensing, and material inspection, among others.
We present an all-optical image denoiser based on spatially-engineered diffractive layers. Following a one-time training process using a computer, this analog processor composed of fabricated passive layers achieves real-time image denoising by processing input images at the speed of light and synthesizing the denoised results within its output field-of-view, completely bypassing digital processing. Remarkably, these designs achieve high output diffraction efficiencies of up to 40%, while maintaining excellent denoising performance. The effectiveness of this diffractive image denoiser was experimentally validated at the terahertz spectrum, successfully removing salt-only noise from intensity images using a 3D-fabricated denoiser that axially spans <250 wavelengths.
We demonstrate a simple yet highly effective uncertainty quantification method for neural networks solving inverse imaging problems. We built forward-backward cycles utilizing the physical forward model and the trained network, derived the relationship of cycle consistency with respect to the robustness, uncertainty and bias of network inference, and obtained uncertainty estimators through regression analysis. An XGBoost classifier based on the uncertainty estimators was trained for out-of-distribution detection using artificial noise-injected images, and it successfully generalized to unseen real-world distribution shifts. Our method was validated on out-of-distribution detection in image deblurring and image super-resolution tasks, outperforming other deep neural network-based models.
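The forward-backward cycle described above can be sketched as follows: the trained network inverts a measurement, the known physical forward model re-synthesizes it, and the per-cycle discrepancy is recorded. This is a hedged illustration only; the published method additionally performs a regression analysis over these cycle errors to derive the uncertainty estimators fed to the XGBoost classifier.

```python
import numpy as np

def cycle_consistency_errors(y, forward, network, n_cycles=5):
    """Run repeated forward-backward cycles on a measurement y.

    `forward` is the known physical forward model (e.g., a blur operator)
    and `network` its trained inverse. Returns the mean-squared cycle
    error after each cycle; large or drifting errors flag inputs the
    network is uncertain about (e.g., out-of-distribution samples).
    """
    errors = []
    y_k = y
    for _ in range(n_cycles):
        x_hat = network(y_k)   # backward pass: network inference
        y_k = forward(x_hat)   # forward pass: physics model
        errors.append(float(np.mean((y_k - y) ** 2)))
    return errors
```

For a network that exactly inverts the forward model, the cycle errors stay at zero; deviations accumulate when the inversion is imperfect on a given input.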
We introduce GedankenNet, a self-supervised learning model for hologram reconstruction. During its training, GedankenNet leveraged a physics-consistency loss informed by the physical forward model of the imaging process, and simulated holograms generated from artificial random images with no correspondence to real-world samples. After this experimental-data-free training based on “Gedanken Experiments”, GedankenNet successfully generalized to experimental holograms on its first exposure to real-world experimental data, reconstructing complex fields of various samples. This self-supervised learning framework based on a physics-consistency loss and Gedanken experiments represents a significant step toward developing generalizable, robust and physics-driven AI models in computational microscopy and imaging.
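The physics-consistency loss at the heart of this training scheme can be sketched as a comparison between the input hologram and the hologram that the physical forward model predicts from the network's reconstruction. The names and the simple intensity forward model below are illustrative assumptions, not the exact GedankenNet implementation.

```python
import numpy as np

def physics_consistency_loss(measured_hologram, reconstructed_field, forward_model):
    """Self-supervision signal: re-simulate a hologram from the network's
    reconstructed complex field via the physical forward model, then
    penalize its mismatch with the measured (or simulated) hologram.
    No ground-truth object image is needed."""
    predicted = forward_model(reconstructed_field)
    return float(np.mean((predicted - measured_hologram) ** 2))

# Illustrative in-line forward model: hologram intensity as |field|^2
# (a real system would also include free-space propagation).
intensity_model = lambda f: np.abs(f) ** 2
```

Because the loss only requires the forward model, training holograms can be simulated from artificial random images, which is what makes the experimental-data-free "Gedanken" training possible.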
We demonstrate a deep learning-based framework, called Fourier Imager Network (FIN), which achieves unparalleled generalization in end-to-end phase-recovery and hologram reconstruction. We used Fourier transform modules in FIN architecture, which process the spatial frequencies of the input images in a global receptive field and bring strong regularization and robustness to the hologram reconstruction task. We validated FIN by training it on human lung tissue samples and blindly testing it on human prostate, salivary gland, and Pap smear samples. FIN exhibits superior internal and external generalization compared with existing hologram reconstruction models, also achieving a ~50-fold acceleration in image inference speed.
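The global receptive field of a Fourier transform module comes from filtering in the spatial-frequency domain: an FFT, a pointwise multiplication by trainable weights, and an inverse FFT, so every output pixel depends on every input pixel. The sketch below shows only this core linear operation; the actual FIN architecture is more elaborate (multiple channels, nonlinearities, spatial-domain branches).

```python
import numpy as np

def fourier_filter_module(x, weights):
    """Filter an image in the spatial-frequency domain.

    `weights` is a complex-valued array of trainable parameters with the
    same shape as x; multiplying in frequency space makes the operation
    global, unlike a small spatial convolution kernel.
    """
    X = np.fft.fft2(x)
    return np.real(np.fft.ifft2(X * weights))
```

With all-ones weights the module reduces to the identity; learned weights implement an arbitrary global linear filter, which supplies the strong regularization noted in the abstract.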
We report a recurrent neural network (RNN)-based cross-modality image inference framework, termed Recurrent-MZ+, that explicitly incorporates two or three 2D fluorescence images, acquired at different axial planes, to rapidly reconstruct fluorescence images at arbitrary axial positions within the sample volume, matching the 3D image of the same sample acquired with a confocal scanning microscope. We demonstrated the efficacy of Recurrent-MZ+ on transgenic C. elegans samples; using three wide-field fluorescence images as input, the sample volume reconstructed by Recurrent-MZ+ mitigates the deformations caused by the anisotropic point-spread function of wide-field microscopy and matches the ground-truth confocal image stack of the sample.