Polarization-sensitive optical coherence tomography (PS-OCT) allows the visualization of biological tissue microstructure by measuring the pathlength difference, amplitude, and polarization of backscattered light. Speckle grains, arising from scattering structures in tissue smaller than the PS-OCT resolution, complicate these visualizations. We developed an angular compounding system that reduces speckle by rotating the sample and collecting tomograms at multiple imaging angles, without modifying the PS-OCT hardware or optical pathways. Tomograms were acquired, aligned with affine transformations, and averaged. This method successfully reduced speckle and improved the visualization of intensity and birefringence images.
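The acquire-align-average pipeline described above can be sketched as follows. This is a minimal 2-D illustration, not the authors' implementation: the function and parameter names are hypothetical, and the affine registration parameters are assumed to have been estimated separately (e.g., by an intensity-based registration routine).

```python
import numpy as np
from scipy import ndimage

def angular_compound(tomograms, transforms):
    """Average tomograms acquired at different sample angles after
    affine registration to a common reference frame.

    tomograms  : list of 2-D intensity images (same shape)
    transforms : list of (matrix, offset) affine parameters mapping each
                 tomogram onto the reference frame (identity for the
                 reference tomogram itself); assumed estimated elsewhere.
    """
    accum = np.zeros_like(tomograms[0], dtype=float)
    for tomo, (matrix, offset) in zip(tomograms, transforms):
        # Resample the tomogram into the reference frame (linear interp.)
        aligned = ndimage.affine_transform(
            tomo.astype(float), matrix, offset=offset, order=1)
        accum += aligned
    # Averaging N decorrelated speckle realizations reduces speckle
    # contrast by roughly sqrt(N).
    return accum / len(tomograms)
```

Because the speckle pattern decorrelates with imaging angle while the underlying structure does not, the average suppresses speckle while preserving the tissue signal.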
OCT speckle carries information on sub-cellular tissue structures, and speckle statistics have been shown to be potential biomarkers for tissue characterization in disease detection and monitoring. Current approaches estimate speckle parameters inside a fixed kernel, which makes them unreliable in heterogeneous tissue and imposes a clear trade-off between accuracy and spatial resolution. These limitations make them unsuitable for automatically detecting the spatially resolved differences in cellular microstructure that occur in diseased tissue. To address this unmet need, we have developed an algorithm based on a probabilistic approach that automatically selects kernels consisting of pixels with a high probability of sharing the same speckle probability density function, and uses them to estimate spatially resolved speckle parameters through likelihood-based estimation. Our proposed method enables new capabilities in producing speckle parametric images, revealing the spatial variability of the speckle distribution throughout OCT volumes and providing information complementary to structural OCT imaging.
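The abstract does not specify which speckle distribution family is used, so as an illustration of the likelihood-based estimation step, the following sketch assumes the common Rayleigh model for fully developed speckle amplitude. The function names are hypothetical; the kernel (the set of pixels assumed to share one PDF) would come from the probabilistic selection step described above.

```python
import numpy as np

def rayleigh_mle(amplitudes):
    """Maximum-likelihood estimate of the Rayleigh scale parameter for a
    kernel of amplitude samples assumed to share one speckle PDF.
    For A ~ Rayleigh(sigma), the MLE is sigma^2 = mean(A^2) / 2.
    """
    a = np.asarray(amplitudes, dtype=float)
    return np.sqrt(np.mean(a ** 2) / 2.0)

def rayleigh_loglik(a, sigma):
    """Per-sample Rayleigh log-likelihood; such a score could be used to
    judge whether a candidate pixel plausibly shares the kernel's PDF."""
    a = np.asarray(a, dtype=float)
    return np.log(a / sigma ** 2) - a ** 2 / (2.0 * sigma ** 2)
```

Evaluating the fitted PDF's log-likelihood at neighboring pixels gives a principled criterion for growing the kernel only over statistically homogeneous regions, avoiding the fixed-kernel trade-off between accuracy and resolution.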
Speckle reduction has been an active topic of interest in the optical coherence tomography (OCT) community, and several techniques have been developed, ranging from hardware-based methods to conventional image processing and deep-learning-based methods. The main goal of speckle reduction is to improve the diagnostic utility of OCT images by enhancing image quality and thereby the visual interpretation of anatomical structures. We previously introduced a probabilistic despeckling method based on non-local means for OCT: Tomographic Non-local-means despeckling (TNode). We demonstrated that this method efficiently suppresses speckle contrast while preserving tissue structures with dimensions approaching the system resolution. Despite its merits, TNode is computationally very expensive: processing a typical retinal OCT volume takes a few hours. A much faster version of TNode with close to real-time performance, while retaining its open-source nature, could find much greater use in the OCT community. Deep-learning despeckling methods have been proposed in OCT, including variants of conditional generative adversarial networks (cGANs) and convolutional neural networks (CNNs). However, most of these methods have used B-scan compounding as the ground truth, which significantly limits how well speckle can be reduced while preserving resolution. In addition, all of these methods have focused on speckle suppression of individual B-scans, and their performance on volumetric tomograms is unclear: three-dimensional manipulations of the processed tomograms (e.g., en face projections) are expected to contain artifacts due to the B-scan-wise processing, disrupting the continuity of tissue structures along the slow-scan axis.
In addition, speckle suppression based on individual B-scans cannot provide the neural network with information on volumetric structures in the training data, and is therefore expected to perform poorly on small structures. Indeed, most deep-learning despeckling works have focused on image-quality metrics demonstrating strong speckle suppression, rather than on preservation of contrast and small tissue structures. To overcome these problems, we propose an entire workflow to enable the widespread use of deep-learning speckle suppression in OCT: the ground truth is generated using volumetric TNode despeckling, and the neural network uses a new cGAN that receives OCT partial volumes as inputs, exploiting three-dimensional structural information for speckle reduction. Because it relies on TNode for generating ground-truth data, this hybrid deep-learning–TNode (DL-TNode) framework will be made available to the OCT community to enable easy training and implementation on a multitude of OCT systems without relying on specially acquired training data.
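TNode itself is a volumetric, probabilistic non-local-means method; as a generic illustration of the non-local-means principle it builds on, here is a minimal 2-D sketch (not the TNode implementation, and far slower than a production version). Each pixel is replaced by a weighted average of pixels in a search window, with weights determined by patch similarity, so averaging happens only across statistically similar speckle realizations.

```python
import numpy as np

def nlm_despeckle(img, patch=3, search=7, h=0.5):
    """Minimal non-local-means sketch on a 2-D image.
    patch  : side length of the similarity patch (odd)
    search : side length of the search window (odd)
    h      : filtering strength (larger -> stronger smoothing)
    """
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0
            # Compare the reference patch against every patch in the
            # search window; similar patches get larger weights.
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / h ** 2)
                    wsum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / wsum
    return out
```

The volumetric, likelihood-weighted generalization of this idea is what makes TNode effective but computationally expensive, and is exactly the cost that motivates distilling it into a fast neural network in the DL-TNode workflow above.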