Purpose: This work aims to automatically identify the fovea on 2-dimensional fundus autofluorescence (FAF) images in patients with age-related macular degeneration (AMD) using definitions derived from 3-dimensional spectral-domain optical coherence tomography (SD-OCT) imaging. Segmenting the fovea, a highly specialized area of the retina, in the vicinity of hypo-autofluorescence in FAF images will aid in the objective evaluation of AMD-related structural disease features with respect to their distance from the fovea. Methods: Semi-automated software was used to create foveal annotations in volumetric SD-OCT images. FAF images acquired at the same SD-OCT visits were registered to the en-face SD-OCT projections to create a pixel-to-pixel correspondence between the registered FAF and SD-OCT images. A U-Net-based segmentation network, trained using OCT-registered FAF images and the corresponding foveal annotations from SD-OCT, was used to automatically segment the fovea in the registered 2D FAF images. Results: The dataset consisted of multimodal images from AMD patients, with 900 (80%) images used for training and 222 (20%) images used for testing. The mean Euclidean distance error on the test set with respect to the OCT-determined ground truth was 103.5±81.4 µm, which improved to 83.4±57.9 µm with augmentation-based training. Foveal identification in FAF images from a test subset with advanced AMD exhibiting geographic atrophy (GA) was compared against the OCT-determined ground truth for three sources: (1) the U-Net algorithm on the GA test subset (111.7±46.7 µm), (2) readers at the Wisconsin reading center (165±77.5 µm), and (3) a retina physician (169.9±109.4 µm). Conclusion: Our work demonstrates the potential of using 2D FAF images to predict foveal location, especially in visually challenging scenarios where a hypo-autofluorescent fovea is surrounded by advanced disease that alters the normal autofluorescence patterns.
The results demonstrate that the developed algorithm has clinically useful performance in segmenting the fovea in FAF images, which will enable critical correlation with visual acuity and provide the basis for referencing standardized measures of features relative to the fovea, such as monitoring and tracking the growth of GA and other retinal-disease-related changes.
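The Euclidean distance error reported above can be computed from predicted and ground-truth foveal coordinates once the image pixel spacing is known. A minimal sketch in numpy; the function names, coordinates, and spacing value are illustrative, not taken from the study:

```python
import numpy as np

def euclidean_distance_error_um(pred_xy, gt_xy, um_per_pixel):
    """Euclidean distance between a predicted and ground-truth foveal
    center, converted from pixels to micrometers."""
    pred = np.asarray(pred_xy, dtype=float)
    gt = np.asarray(gt_xy, dtype=float)
    return float(np.linalg.norm(pred - gt) * um_per_pixel)

def summarize_errors(errors_um):
    """Mean and standard deviation over a test set, matching the
    mean±std style of the reported results."""
    e = np.asarray(errors_um, dtype=float)
    return float(e.mean()), float(e.std())

# Example: a prediction 10 pixels from the ground truth at an assumed
# lateral sampling of 11.6 µm/pixel.
err = euclidean_distance_error_um((250, 250), (256, 258), um_per_pixel=11.6)
```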
Purpose: This work aims to identify areas with sub-retinal pigment epithelium (sub-RPE) accumulations on 2-dimensional (2D) color fundus photographs (CFPs) in patients with age-related macular degeneration (AMD) using the definitions in spectral-domain optical coherence tomography (SD-OCT) imaging. Detecting and quantifying areas of RPE elevation (most notably drusen) in CFPs will aid in the objective evaluation of AMD severity scores as well as patient selection and monitoring in clinical trials. Methods: A retinal-layer segmentation algorithm for SD-OCT was used to automatically identify areas with RPE elevations and build the ground-truth 2D binary maps for training. Each CFP was registered to the en-face projection image of the SD-OCT volume to overlay OCT-defined drusen areas on the CFP. A 2D U-Net segmentation network was trained using bilateral stereo CFP pairs in a Siamese architecture sharing OCT-defined drusen areas as ground truth. Results: The dataset consisted of AMD patients, with 127 training eyes and 23 test eyes. The Dice similarity coefficient for the predictions on CFPs was 0.70±0.13 (mean±std), and overall accuracy was 0.73. 89% of test eyes exhibited a drusen area prediction error <1 mm² compared with reading-center measures. Conclusion: Our work demonstrates the potential of using 2D CFP images to predict areas of sub-RPE elevation as defined in 3D SD-OCT imaging. Qualitative evaluation of the mismatch between the two imaging modalities shows regions with complementary features in a subset of cases, making it challenging to achieve optimal segmentation. However, the results show clinically useful performance in CFPs that can be used to quantify accumulations in the sub-RPE space, which are key pathologic biomarkers of AMD relevant to patient selection and trial outcome measure design.
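The Dice similarity coefficient used to score the drusen-map predictions is a standard overlap measure on binary masks. A self-contained sketch (the empty-mask convention chosen here is one common choice, not necessarily the study's):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary 2D maps:
    2*|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / denom
```

For example, a 4-pixel prediction covering half of an 8-pixel ground-truth region yields a Dice coefficient of 2·4/(4+8) = 2/3.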
The photoreceptor (PR) – retinal pigment epithelium (RPE) – choriocapillaris (CC) complex is an extremely important group of layers in the outer retina. We demonstrate resolution of the CC vascular network across the macula, as well as the methodology to extract and quantify structural metrics from all three layers from averaged AO-OCT volumes. In diseased eyes, small changes in CC structure may portend the initiation of disease and therefore the investigation of CC structural changes may aid early disease diagnosis for many diseases, both prevalent and rare, that begin in the outer retina.
Purpose: This work investigates a semi-supervised approach for automatic detection of hyperreflective foci (HRF) in spectral-domain optical coherence tomography (SD-OCT) imaging. Starting with a limited annotated data set containing HRFs, we aim to build a larger data set and, in turn, a more robust detection model. Methods: A Faster R-CNN object detection model was trained in a semi-supervised manner whereby high-confidence detections from the current iteration are added to the training set in subsequent iterations after manual verification, increasing the size of the training set with each iteration. We expect the model to become more accurate and robust as the number of training iterations increases. We performed experiments in a data set consisting of over 170,000 SD-OCT B-scans. The models were tested in a data set consisting of 30 patients (3,630 B-scans). Results: Across iterations the model performance improved, with the final model yielding precision = 0.56, recall = 0.99, and F1-score = 0.71. As the number of training examples increases, the model detects cases with greater confidence. The high false-positive rate is associated with additional detections capturing instances of elevated reflectivity which, upon review, were found to represent questionable cases rather than definitive HRFs due to confounding factors. Conclusion: We demonstrate that, starting with a small data set of HRFs, we are able to search for other occurrences of HRFs in the data set in a semi-supervised fashion. This method provides an objective, time- and cost-effective alternative to laborious manual inspection of B-scans for HRF occurrences.
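The iterative self-training loop described (high-confidence detections promoted to the training set after manual verification, then retraining) can be sketched generically. Here `train`, `detect`, and `verify` are placeholder callables standing in for the Faster R-CNN training/inference pipeline and the manual review step; none of them are the authors' code:

```python
def self_training_loop(labeled, unlabeled, train, detect, verify,
                       conf_threshold=0.9, n_iterations=3):
    """Generic self-training sketch: each round, detections on the
    unlabeled pool that exceed a confidence threshold are verified and
    promoted to the labeled pool, and the model is retrained on the
    enlarged training set."""
    model = train(labeled)
    for _ in range(n_iterations):
        promoted = []
        for scan in list(unlabeled):
            boxes = [b for b in detect(model, scan)
                     if b["score"] >= conf_threshold]
            if boxes and verify(scan, boxes):  # manual review step
                promoted.append((scan, boxes))
                unlabeled.remove(scan)
        if not promoted:
            break  # no new confident detections; stop early
        labeled = labeled + promoted
        model = train(labeled)
    return model, labeled
```

With real components, `train` would fit the detector, `detect` would return scored bounding boxes per B-scan, and `verify` would encode the manual-verification decision.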
KEYWORDS: Image segmentation, 3D modeling, Retina, Image processing algorithms and systems, Detection and tracking algorithms, Signal to noise ratio, 3D image processing, Image contrast enhancement, Medical image reconstruction, Medical image processing
Purpose: Spectral-domain optical coherence tomography (SD-OCT) is a widely used imaging modality in retina clinics to inspect the integrity of retinal layers in patients with age-related macular degeneration. Spectralis and Cirrus are two of the most widely used SD-OCT vendors. Due to the stark difference in intensities and signal-to-noise ratios between the images captured by the two instruments, a model trained on images from one instrument performs poorly on images from the other. Methods: In this work, we explore the performance of an algorithm trained on images obtained from the Heidelberg Spectralis device when applied to Cirrus images. Utilizing a dataset containing both Heidelberg and Cirrus images, we address the problem of accurately segmenting images in one domain with an algorithm developed in another. In our approach we use an unpaired CycleGAN-based domain adaptation network to transform the Cirrus volumes into the Spectralis domain before applying our trained segmentation network. Results: We show that the intensity distribution shifts toward the Spectralis domain when we domain-adapt Cirrus images to Spectralis images. Our results show that the segmentation model performs significantly better on the domain-translated volumes (total retinal volume error: 0.17±0.27 mm³; RPEDC volume error: 0.047±0.05 mm³) compared with the raw volumes (total retinal volume error: 0.26±0.36 mm³; RPEDC volume error: 0.13±0.15 mm³) from the Cirrus domain, and that such domain adaptation approaches are feasible solutions. Conclusions: Both our qualitative and quantitative results show that a CycleGAN domain adaptation network can be used as an efficient technique to perform unpaired domain adaptation between SD-OCT images generated by different devices. We show that a 3D segmentation model trained on Spectralis volumes performs better on domain-adapted Cirrus volumes than on raw Cirrus volumes.
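The reported shift of the Cirrus intensity distribution toward the Spectralis domain after translation can be quantified with a simple histogram distance. A numpy sketch; the L1 distance and bin settings are illustrative choices, not the metric used in the study:

```python
import numpy as np

def intensity_histogram(volume, bins=64, value_range=(0.0, 1.0)):
    """Normalized intensity histogram of an OCT volume (or any array)."""
    hist, _ = np.histogram(np.asarray(volume).ravel(),
                           bins=bins, range=value_range)
    return hist / hist.sum()

def histogram_l1_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical,
    2 = fully disjoint supports)."""
    return float(np.abs(h1 - h2).sum())
```

A successfully domain-translated volume should produce a histogram closer to a reference target-domain histogram than the raw source volume does, i.e. `histogram_l1_distance(h_target, h_translated) < histogram_l1_distance(h_target, h_raw)`.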
Purpose: Spectral-domain optical coherence tomography (SD-OCT) images are a series of B-scans which capture the volume of the retina and reveal structural information. Diseases of the outer retina cause changes to the retinal layers that are evident on SD-OCT images, revealing disease etiology and risk factors for disease progression. Quantitative thickness information for the retinal layers provides disease-relevant data that reveal important aspects of disease pathogenesis. Manually labeling these layers is extremely laborious, time consuming, and costly. Recently, deep learning algorithms have been used to automate the segmentation process. While retinal volumes are inherently three-dimensional, state-of-the-art segmentation approaches have been limited in their utilization of the three-dimensional nature of the structural information. Methods: In this work, we train a 3D U-Net using 150 retinal volumes and test on 191 retinal volumes from a held-out test set (with AMD severity grades ranging from no disease through the intermediate stages to advanced disease, including the presence of geographic atrophy). The 3D deep features learned by the model capture spatial information simultaneously from all three volumetric dimensions. Since, unlike the ground truth, the output of the 3D U-Net is not single-pixel wide, we perform a column-wise probabilistic maximum operation to obtain single-pixel-wide layers for quantitative evaluation. Results: We compare our results to the publicly available OCT Explorer and deep-learning-based 2D U-Net algorithms and observe a low error, within 3.11 pixels of the ground-truth locations (for some of the most challenging, advanced-stage AMD eyes with AMD severity scores of 9 and 10). Conclusion: Our results show, both qualitatively and quantitatively, a significant advantage of extracting and utilizing 3D features over the traditionally used OCT Explorer or 2D U-Net.
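The column-wise probabilistic maximum described above amounts to taking, in each A-scan column, the depth with the highest predicted boundary probability, collapsing a thick network response to a single-pixel-wide layer. A minimal sketch under that interpretation:

```python
import numpy as np

def columnwise_max_boundary(prob_map):
    """Given a per-pixel probability map for one retinal layer boundary
    in a B-scan, shaped (depth, width), return for each column the row
    index of maximum probability: a single-pixel-wide boundary curve."""
    prob_map = np.asarray(prob_map, dtype=float)
    return np.argmax(prob_map, axis=0)
```

Applied per B-scan across a volume, this yields the single-pixel-wide surfaces used for the quantitative comparisons.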
Adaptive optics (AO) retinal imaging has enabled the visualization of cellular-level changes in the living human eye. However, imaging tissue-level lesions at such high resolution introduces unique challenges. At a fine spatial scale, intralesion features can resemble cells, effectively serving as camouflage and making it difficult to delineate lesion boundaries. The size discrepancy between tissue-level lesions and retinal cells is also highly variable, ranging from several-fold to greater than an order of magnitude. Here, we introduce a hybrid transformer combining a convolutional LinkNet with a fully axial attention transformer network to consider both local and global image features, which excels at identifying tissue-level lesions within a cellular landscape. After training the hybrid transformer on 489 manually annotated AO images, accurate lesion segmentation was achieved on a separate test dataset of 75 AO images. The segmentation accuracy achieved using the hybrid transformer was superior to that of convolutional neural networks alone (U-Net and LinkNet) or transformer-based networks alone (Axial-DeepLab and Medical Transformer) (p<0.05). These experimental results demonstrate that combining convolutional and transformer networks is an efficient way to utilize both local and global image features for lesion segmentation in medical imaging and may be important for computer-aided diagnosis that relies on accurate lesion segmentation.
KEYWORDS: Optical coherence tomography, In vivo imaging, Adaptive optics, Adaptive optics optical coherence tomography, Retinal scanning, Image quality, Human vision and color perception, Clinical trials
The development and application of adaptive optics (AO) in retinal imaging have enabled visualization of a plethora of retinal cells and structures. However, major hurdles exist in translating these achievements to widely available clinical devices for broad clinical application. Here, by configuring a research-grade AO optical coherence tomography (AO-OCT) system to simulate a clinical OCT device, we provide evidence that clinical OCT systems have the potential to resolve individual ganglion cell layer somas, and we determine that a lateral sampling of ~1.5 µm/pixel is required to accurately quantify soma density and size.
Retinal toxicity among long-term users of hydroxychloroquine manifests as loss of the ellipsoid zone (EZ), detectable on SD-OCT imaging. This work reports an automatic deep-learning algorithm to detect and segment EZ loss in SD-OCT. The proposed model predicts an EZ loss map using a dual-network architecture that operates in parallel, combining scan-by-scan detections in the horizontal and vertical directions. The combined model demonstrated the best overall performance, with an F1 score of 0.91 ± 0.07, improving on the individual models. Automatic methods for EZ loss detection could provide a useful tool to facilitate screening of patients for evidence of toxicity.
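Combining the horizontal- and vertical-direction detections into a single EZ loss map and scoring it with F1 can be sketched as follows. The union merge rule here is an illustrative assumption; the paper's exact combination strategy may differ:

```python
import numpy as np

def combine_maps(horizontal_map, vertical_map):
    """Merge EZ-loss detections from the two scan directions
    (simple pixel-wise union)."""
    return np.logical_or(horizontal_map, vertical_map)

def f1_score(pred, gt):
    """Pixel-wise F1 between a binary prediction map and ground truth."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    if tp == 0:
        return 0.0
    return 2.0 * tp / (2.0 * tp + fp + fn)
```

When each direction captures a complementary portion of the true loss region, the union recovers more of the ground truth than either direction alone, consistent with the combined model outperforming the individual ones.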
Spatial alignment of longitudinally acquired retinal images is necessary for the development of image-based metrics identifying structural features associated with disease progression in diseases such as age-related macular degeneration (AMD). This work develops and evaluates a feature-based registration framework for accurate and robust registration of retinal images. Methods: Two feature-based registration approaches were investigated for the alignment of fundus autofluorescence images. The first method used conventional SIFT local feature descriptors to solve for the geometric transformation between two corresponding point sets. The second method used a deep-learning approach with a network architecture mirroring the feature localization and matching process of the conventional method. The methods were validated using clinical images acquired in an ongoing longitudinal study of AMD, consisting of 75 patients (145 eyes) with 4-year follow-up imaging. In the deep-learning method, 113 image pairs were used during training (with ground truth provided by manually verified SIFT feature registration) and 20 image pairs were used for testing (with ground truth provided by manual landmark annotation). Results: The conventional method using SIFT features demonstrated a target registration error (mean ± std) of 0.05 ± 0.04 mm, substantially improving the alignment from initialization (error = 0.34 ± 0.22 mm). The deep-learning method, on the other hand, exhibited an error of 0.10 ± 0.07 mm. While both methods improved upon the initial misalignment, the SIFT method showed the best overall geometric accuracy. However, the deep-learning method exhibited robust performance (error = 0.15 ± 0.09 mm) in the 7% of cases where the SIFT method failed (error = 3.71 ± 6.36 mm). Conclusion: While both methods demonstrated successful performance, the SIFT method exhibited the best overall geometric accuracy, whereas the deep-learning method was superior in terms of robustness.
Achieving accurate and robust registration is essential in large-scale studies investigating the factors underlying progression of retinal diseases such as AMD.
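The target registration error used for evaluation measures the residual distance at held-out landmark points after applying a transform estimated from matched features. A generic least-squares affine sketch in numpy (not the study's SIFT pipeline, which would additionally involve feature detection, descriptor matching, and outlier rejection):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst.
    src, dst: (N, 2) arrays of matched feature coordinates.
    Returns a 3x2 matrix M such that [x, y, 1] @ M ≈ [x', y']."""
    src = np.asarray(src, dtype=float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(A, np.asarray(dst, dtype=float), rcond=None)
    return M

def target_registration_error(M, landmarks_src, landmarks_dst):
    """Mean Euclidean distance between transformed source landmarks and
    their true positions in the destination image."""
    src = np.asarray(landmarks_src, dtype=float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])
    mapped = A @ M
    residual = mapped - np.asarray(landmarks_dst, dtype=float)
    return float(np.linalg.norm(residual, axis=1).mean())
```

In practice the landmarks used for the error are kept separate from the feature correspondences used for the fit, so the error reflects generalization of the transform rather than the fitting residual.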