24 November 2015

Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography
Maribel Hidalgo-Aguirre, Julian Gitelman, Mark R. Lesk, Santiago Costantino
Abstract
Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.

1.

Introduction

Optical coherence tomography (OCT) is a powerful noninvasive imaging technique that has become a standard clinical tool in ophthalmology, providing a direct and accurate visualization of the retina and its layered structure.1 This information helps to detect and monitor a variety of retinal diseases such as age-related macular degeneration, diabetic retinopathy, and retinal detachments. OCT shows particular promise in detecting glaucoma, the second leading cause of blindness worldwide, which is a progressive neuropathy characterized by the thinning of retinal nerve fiber layer and changes in the structural and functional integrity of the optic nerve head (ONH).2

It has been recently demonstrated that tissue around the ONH is deformed during cardiac pulsations,3 and that such movement is different in healthy and glaucoma patients.4 As the vast majority of research projects based on OCT rely on static images made up of several averaged frames, these findings underline the relevance of comprehensive dynamic studies for which the development of novel imaging strategies as well as analysis methods is also required.

An essential part of OCT data processing is segmentation, and one of its most challenging problems is the design of a system that works properly in clinical applications, i.e., the development of robust methods capable of dealing with pathological cases, where the structure of the retina can change drastically.

Several methods have been proposed to segment the vitreal–retinal boundary on macular OCT images;5–7 however, those models cannot simply be applied to the ONH region owing to the significant anatomical differences. Nerve head geometries vary considerably across the patient population, and appearance variation is also introduced by the angle of the scan through the ONH. This could be the reason why, to date, very little has been reported on the automatic analysis of 2-D ONH images.

Nowadays, volumetric scanning centered at the ONH is becoming more common; however, the segmentation method employed in these cases is based on a three-dimensional graph search approach that makes use of regional consecutive B-scans, and the algorithm is trained on the planimetry from color stereo photos to determine the rim and cup regions, which can then be colocalized on the fundus image and finally translated to the OCT scan.8,9 More recently, a study performed with software developed for retinal analysis of Stratus OCT macular images with computer-aided grading methodology, OCTRIMA,10–12 has included the measurement of the cup-to-disc ratio as complementary information extracted from the central image of a volumetric scan.13 Notwithstanding such advances, there are currently no commercial OCT devices that allow image sequences of volumetric scans to be acquired and saved at the speed needed to perform dynamic studies.

Active contour algorithms are a good choice to achieve accurate border delimitation, from which the inner limiting membrane (ILM) profile can be easily obtained.14,15 Although it is possible to define a fixed rectangular initial contour that encloses the whole retina, and the main parameters can be optimized for a set of similar images, the number of iterations may vary depending on the size and shape of the ONH, with a computation time on the order of tens of seconds for a single image,16,17 making this kind of method inappropriate for large image sets.

In order to investigate the pulsatile dynamics of the ONH and surrounding tissue, an image acquisition rate several times higher than the heart rate is needed. OCT images are inherently noisy, and given that in this case only a few consecutive B-scans can be merged for the final image, the result is a collection of images with poor quality, making most simple edge-detection algorithms unsuitable for accurate retinal identification. Furthermore, retinal deformation is on the order of tens of microns, making inaccurate delineations a source of error for adequate assessment of the dynamic behavior of the tissue.

To overcome such limitations, we have developed a simple method for the automatic ILM segmentation in video rate OCT ONH line scan images, based on morphological operations. The resulting ILM profiles are then used to measure neuroretinal tissue displacements due to blood-flow pulsatility.

2.

Methods

The proposed approach consists of two stages: segmentation and analysis. The first seeks to delineate the peripapillary retina and the ONH accurately in low-signal OCT images, while the second uses the segmented profiles to quantify retinal deformation. All routines described in the following have been coded in Matlab (The MathWorks, Inc.) and are available upon request.

2.1.

Data Collection

In order to accurately measure displacements of the ocular fundus during the cardiac cycle in real time, we used an SD-OCT device (Spectralis OCT Plus, Heidelberg Engineering, Germany) whose software has been custom-modified to record image series in real time, allowing the export of raw images in linear scale. OCT line scans of the same region of the ONH were acquired at 20 Hz for 20 s, during which subjects were asked to look at a fixation light. Since the device is equipped with an active eye tracker that halts acquisition to avoid movement artifacts, subjects are allowed to blink if necessary. Each time series has 401 images (496×768 pixels) of high resolution (3.9 μm axial) acquired over a 15 deg field of view (approx. 5 mm) that covers the cup, rim, and peripapillary retina, scanned at 45 deg with respect to the fovea-to-disc axis (Fig. 1). Volunteers covering the glaucoma spectrum, i.e., from glaucoma suspects to advanced glaucoma, were recruited at the Maisonneuve-Rosemont Hospital’s Ophthalmology Clinic. Institutional Review Board and Ethics Committee approval was obtained for the study, and all participants gave their informed consent. The research protocol follows the tenets of the Declaration of Helsinki.

Fig. 1

Typical time-series images. (a) Fundus image showing the scanned line through the optic nerve head (ONH), in this case at 45 deg. (b) Corresponding optical coherence tomography (OCT) image (two averaged frames).


In order to optimize the parameters of our segmentation pipeline, we presented five observers with a set of 25 images randomly selected from the database, including healthy eyes and several stages of glaucoma, with Spectralis quality scores ranging from 10 to 40. The images were segmented while varying the values of the algorithm’s main parameters, described below, and the raters, masked to the parameters and their values, were asked to choose the best segmentation for each “unknown” parameter. The values receiving the most votes were then kept fixed for all images.

2.2.

Preprocessing

As a first step, a 2-D median filter of size 7×7 pixels is applied for rough speckle noise removal; the images are then submitted to a two-step contrast enhancement technique to suppress features that may lead to erroneous boundary detection. First, intensity exponentiation (I²) is applied to eliminate floaters near the cup and attachment points in cases of vitreopapillary detachment. Afterwards, exponentiation (I³) combined with attenuation compensation is used to correct inhomogeneities due to retinal blood vessels18 and to sharpen the edges (Fig. 2). In our particular case, because a median filter and intensity exponentiation (I²) have already been applied to the images, the increase in speckle noise that could be expected with an exponent of 3 according to Ref. 14 is not observed, and this setting gives the best results for edge detection.
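The routines in the paper are written in Matlab; as a rough illustration, the preprocessing chain can be sketched in Python/NumPy. The attenuation-compensation step below, which normalizes each pixel by twice the signal remaining deeper in its A-scan, follows the general scheme of Ref. 18, but the exact implementation details are our assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(img, exp1=2, exp2=3):
    """Speckle removal and two-step contrast enhancement (sketch)."""
    img = median_filter(img.astype(float), size=7)    # rough speckle removal
    img = img ** exp1                                 # first step: I^2 suppresses floaters
    # second step: exponentiation (I^3) with attenuation compensation;
    # each pixel is normalized by the signal remaining at and below it
    e = img ** exp2
    tail = np.cumsum(e[::-1, :], axis=0)[::-1, :]     # sum over z' >= z, per A-scan
    return e / (2.0 * tail + np.finfo(float).eps)
```

By construction the compensated values lie in [0, 0.5], since the running tail sum always contains the pixel itself.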

Fig. 2

Preprocessing example. (a) Original ONH image showing floaters in the vitreous. (b) Same image after median filtering and contrast enhancement steps were applied.


2.3.

Inner Limiting Membrane Segmentation

In order to separate the inner layer of the retina from the vitreous, another median filter (7×7 pixels) is applied. As the image has already been submitted to contrast enhancement, a threshold of 10% of the maximum intensity is enough to discriminate between the retina and the remaining noise near the edges, and is then used to generate the corresponding binary image. Increasing the threshold value results in the loss of vitreal–retinal boundary information in most cases. The next step is to remove rough edges and bridges by morphological closing with a square structuring element 3 pixels wide [Fig. 3(a)]. Labeling of 8-connected objects is then used to remove the remaining spurious pixels in the vitreous, usually corresponding to highly reflective floaters, by keeping only objects with more than 1000 elements.
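The thresholding and cleanup steps above map directly onto standard morphology routines; a minimal Python/SciPy sketch (parameter values taken from the text, function name ours):

```python
import numpy as np
from scipy import ndimage

def binarize_retina(enhanced, thresh_frac=0.10, min_size=1000):
    """Threshold, close, and label a contrast-enhanced B-scan (sketch)."""
    smoothed = ndimage.median_filter(enhanced, size=7)
    binary = smoothed > thresh_frac * smoothed.max()   # 10% of max intensity
    # morphological closing with a 3x3 square structuring element
    closed = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
    # 8-connected labeling; keep only objects with more than min_size pixels
    labels, n = ndimage.label(closed, structure=np.ones((3, 3)))
    sizes = ndimage.sum(closed, labels, index=np.arange(1, n + 1))
    keep_ids = np.flatnonzero(sizes > min_size) + 1
    return np.isin(labels, keep_ids)
```

Small, disconnected bright objects (floaters) are removed by the size criterion, while the large retinal component survives.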

Fig. 3

Example of some segmentation steps. (a) Binary image after morphological closing; as the vitreal floaters are not attached to the retina, they are removed after labeling. (b) Final ILM profile; the edge is detected as the first nonzero pixel in each A-scan and interpolated with smoothing splines.


The ILM is set to the most anterior foreground pixel of each A-scan. To exclude misallocated points at the edges of the cup, the axial distances between adjacent contour points c(x) are calculated [Δzi = c(xi+1) − c(xi)], and we search for distances |Δzi| equal to or greater than 27 μm. When a negative targeted value is followed by a positive one within at most 10 points (transversal direction x), the interval between them is removed and replaced with a spline-interpolating curve. Figure 3(b) shows the final profile.
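The edge pick and spike removal can be sketched as follows in Python/NumPy; the paper replaces flagged intervals with smoothing splines, whereas plain linear interpolation is used here for simplicity (a labeled assumption):

```python
import numpy as np

AXIAL_RES_UM = 3.9  # axial pixel size from the text

def extract_ilm(binary, jump_um=27.0, max_gap=10):
    """ILM as the first foreground pixel per A-scan, with spike removal (sketch)."""
    ilm = np.argmax(binary, axis=0).astype(float)   # first nonzero row per column
    ilm[~binary.any(axis=0)] = np.nan               # columns without retina
    dz = np.diff(ilm) * AXIAL_RES_UM
    neg = np.flatnonzero(dz <= -jump_um)            # upward (anterior) spikes start here
    pos = np.flatnonzero(dz >= jump_um)
    for i in neg:
        later = pos[(pos > i) & (pos <= i + max_gap)]
        if later.size:                              # matching downward jump found
            j = later[0]
            cols = np.arange(i + 1, j + 1)
            ilm[cols] = np.interp(cols, [i, j + 1], [ilm[i], ilm[j + 1]])
    return ilm
```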

2.4.

Deformation Analysis

In order to measure displacements of the neuroretinal tissue due to pulsatility, an analysis algorithm has been developed, in addition to segmentation, to track the position of specific retinal regions over time (frame to frame) and to quantify tissue deformation. For alignment, we used the Matlab intensity-based automatic image registration function with a mean-squared-error metric and a one-plus-one evolutionary optimizer. The registration is limited to rigid transformations (translation and rotation), taking the first image of the series as reference.
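The paper relies on Matlab's intensity-based registration; as a self-contained stand-in for the alignment step, the sketch below estimates a pure translation by phase correlation (rotation is ignored, and the function name is ours):

```python
import numpy as np

def translation_offset(ref, img):
    """Estimate the (row, col) shift that maps img back onto ref,
    via phase correlation (translation-only registration sketch)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into the signed shift range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Applying the returned shift with `np.roll(img, (dy, dx), axis=(0, 1))` re-aligns the frame to the reference, which is the role the rigid registration plays in the pipeline.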

Once the segmentation has been performed, the result is a set of 401 ILM profiles per time series. To automatically discard curves where the ILM delineation has failed, all the profiles are shifted vertically so that their minima coincide [Fig. 4(a)], and the average and standard deviation (S.D.) of the depth (z) values at each A-scan position are calculated. Outliers are then tagged in two steps. First, we identify outlier pixels in each profile, defined as those points located at a distance of 3 S.D. or more from the average value of their A-scan. Next, the S.D. of the number of outlier pixels per curve is determined, and profiles with more outliers than this S.D. are discarded [Fig. 4(b)]. The number of discarded profiles per time series is usually between 0 and 10, and never higher than 15.
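The two-step rejection acts on the whole stack of profiles at once; a Python/NumPy sketch of the logic described above (the array layout is our assumption):

```python
import numpy as np

def discard_bad_profiles(profiles):
    """Two-step outlier rejection across a stack of ILM profiles (sketch).
    profiles: array of shape (n_frames, n_ascans) holding depth values."""
    # shift every profile vertically so that the minima coincide
    shifted = profiles - profiles.min(axis=1, keepdims=True)
    mean = shifted.mean(axis=0)
    sd = shifted.std(axis=0)
    # step 1: outlier pixels lie 3 S.D. or more from the per-A-scan mean
    outlier_px = np.abs(shifted - mean) >= 3 * sd
    n_out = outlier_px.sum(axis=1)
    # step 2: discard profiles with more outlier pixels than the S.D.
    # of that count over all profiles
    keep = n_out <= n_out.std()
    return shifted[keep], keep
```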

Fig. 4

(a) Profiles resulting from segmentation of the whole time series, plotted together after vertical shifting. Some profiles show inaccurate segmentation at the bottom of the cup or the peripapillary retina; hence, a two-step outlier identification is carried out. (b) Final profiles after the outlier elimination process (10 profiles discarded).


The peripapillary retinal deformation can now be determined from the time series by quantifying changes in the distance measured from the prelaminar tissue to a selected region on the retina. Since the major blood vessels cross the optic nerve through the retina on the nasal side, we decided to investigate the temporal side, where vessels are less dense, selecting three 200-μm regions (T, T2, T3) starting at the Bruch’s membrane opening, as shown in Fig. 5(a). As the bottom of the cup of all profiles is already located at z=0, the distance we want to measure is simply z(x), and deformation is then calculated as the S.D. over time of these distances averaged within each region [Fig. 5(b)]. It is important to emphasize that although the deformation value obtained per time series can be on the order of the axial resolution of the device, the actual range of distances measured across the time series is five to ten times larger.
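Under this convention (profiles shifted so the cup bottom sits at z = 0), the deformation metric reduces to a few lines. The sketch below assumes registered and cleaned profiles, with hypothetical A-scan index pairs standing in for the 200-μm regions:

```python
import numpy as np

def deformation(profiles, regions):
    """Per-region deformation: S.D. over time of the region-averaged depth (sketch).
    profiles: (n_frames, n_ascans) ILM depths with the cup bottom at z = 0.
    regions: list of (start, stop) A-scan index pairs, one per region."""
    out = []
    for a, b in regions:
        d = profiles[:, a:b].mean(axis=1)  # one distance value per frame
        out.append(d.std())                # pulsatile deformation of that region
    return np.array(out)
```

For a sinusoidal depth modulation of amplitude A sampled over whole cycles, this returns A/√2, the root-mean-square of the oscillation.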

Fig. 5

Deformation measurements: the green line in (a) sets the reference at the deepest point of the ONH, and the red arrow represents the distance measured to determine retinal deformation. Three distance values are calculated per frame, corresponding to the regions T, T2, and T3. (b) Deformation is defined as the standard deviation of the z(x) values of all profiles in each one of the regions (red rectangles).


As a summary, Fig. 6 shows a flow diagram of the segmentation and analysis algorithms proposed.

Fig. 6

Schematic representation of the ILM segmentation and deformation analysis algorithms to measure peripapillary retinal deformation from video-rate OCT image series of the optic nerve.


3.

Results

3.1.

Assessment of the Segmentation Algorithm

To evaluate the performance of the segmentation algorithm, we compared our method with two other established methods: a modified Canny filter19 (MCF) and a hierarchic approach20 (HA) briefly described below. For each method, the parameters were chosen to give the best results.

MCF: the preprocessing steps include median filtering with a square mask 13 pixels wide and intensity normalization. Then, the first derivative of a Gaussian multiplied by a Gaussian is used to create two convolution masks that estimate the gradients in both directions (σx=0.2, σy=4), and the resulting gradient image is thresholded with T = 0.2(Imax − Imin) + Imin, where Imax and Imin are the maximum and minimum intensity values of the image being processed. Next, linear interpolation is used in a 3×3 neighborhood to find the two pixels that straddle the gradient direction; the central pixel is then defined as an edge point if its gradient magnitude is greater than those of the interpolated points. Finally, the ILM is set as the first nonzero pixel per A-scan.
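For illustration, the gradient-and-threshold core of MCF might look as follows in Python/NumPy; the mask support (19 taps) and applying the threshold to the gradient magnitude are our assumptions, and the non-maximum-suppression and per-A-scan edge-pick steps are omitted:

```python
import numpy as np
from scipy.ndimage import convolve

def mcf_edges(img, sigma_x=0.2, sigma_y=4.0, frac=0.2):
    """Derivative-of-Gaussian gradient estimation and thresholding (sketch)."""
    x = np.arange(-9, 10, dtype=float)
    g = lambda s: np.exp(-x**2 / (2 * s**2))            # Gaussian
    dg = lambda s: -x / s**2 * np.exp(-x**2 / (2 * s**2))  # its first derivative
    # separable masks for the two gradient directions (sigma_x=0.2, sigma_y=4)
    gy = convolve(img.astype(float), np.outer(dg(sigma_y), g(sigma_x)))
    gx = convolve(img.astype(float), np.outer(g(sigma_y), dg(sigma_x)))
    mag = np.hypot(gx, gy)
    t = frac * (mag.max() - mag.min()) + mag.min()      # T = 0.2(Imax - Imin) + Imin
    return mag > t
```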

HA: the preprocessing consists of intensity normalization, median filtering with a 5×5 pixel mask, and image decomposition to a lower resolution using the median of 16-pixel blocks. Then, an auxiliary matrix containing the intensity differences between adjacent depth positions in each A-scan is calculated and used to define two binary images. The first contains only the pixels with the highest difference value per A-scan, while the second is generated by applying a threshold, arbitrarily set at 0.03, to the auxiliary matrix. These two images are used to identify the borders of the retina, and a set of constraints must be fulfilled to correct erroneous recognition. The next steps consist of halving the decomposition area on each iteration to increase accuracy; the layer positions determined in the previous iteration are connected by linear interpolation and individual point matching. As final steps, the layers are rescaled, a 5×5 median filter is applied to remove points misallocated in shadowed areas, and interpolation gives the resulting segmented profile.

These three approaches were applied to 25 ONH OCT images randomly chosen from the data set described in Sec. 2.1, with Spectralis quality scores ranging from 11 to 35. In order to validate our algorithm quantitatively, as there is no gold standard for ONH OCT images to compare with, five specialists were asked to delineate the ILM manually for the same set of images. Manual segmentation was performed using a tablet equipped with a stylus. For each image, the average manually segmented ILM was calculated and used to compare the accuracy of every specialist and automated method. Absolute deviation from the mean trace for every A-scan of all images is shown in Fig. 7 as violin plots.

Fig. 7

Quantitative evaluation of the proposed segmentation algorithm. A set of 25 ONH OCT images was manually segmented by five specialists, and the average ILM was used as gold standard for validation. The accuracy of segmentation was evaluated by the absolute deviation from the gold standard of each A-scan for all images. Results for each one of the automated methods are shown as violin plots. The white dot represents the mean value of each distribution.


Our algorithm shows variability comparable to the specialists’, highlighting its robust performance on low-signal images. MCF, on the other hand, failed to segment the vitreal–retinal boundary owing to the high speckle content of the images. Although HA was more robust to noise than MCF, its border detection was considerably affected by shadow effects that reduce the visibility of some retinal regions. While MCF and HA give good results for high-quality images (Q>25), where significant contrast at the vitreal–retinal boundary is observed, their accuracy on low-signal images is not always satisfactory. Figure 8 shows some examples of the segmented results along with the original images.

Fig. 8

ILM automated segmentation results using different algorithms: a modified Canny filter (MCF), the hierarchic approach (HA), and the proposed method. The second column shows the average manual ILM used as gold standard. The images shown have Spectralis quality scores of 11, 15, 18, and 30 (from top to bottom).


Analysis time, together with satisfactory results, is of crucial importance given that each time series comprises hundreds of images. The average ILM segmentation times for MCF, HA (dividing individual images into blocks of 16×16, 8×8, and 4×4), and our algorithm are 9.4, 2.4, and 0.40 s, respectively, calculated over a set of 100 images (496×768 pixels) analyzed on a PC (3.30 GHz Intel Core i3 processor, 4 GB of RAM).

3.2.

Peripapillary Retinal Deformation

Results from 29 healthy and 12 glaucoma eyes with functional trabeculectomy blebs were obtained using the proposed segmentation and analysis algorithms. In this case, two time series of the same eye were recorded per subject, at 45 deg (inferotemporal region) and 135 deg (superotemporal) with respect to the fovea to disc axis. These particular angles were chosen as they correspond to the areas of major insult in glaucoma. All eyes were transformed to right-eye format.

Recorded time series were excluded from the analysis because of fixation problems, poor image quality (Spectralis quality score < 10 for most of the images), or large blood vessels governing the movement of the neuroretinal tissue. The last exclusion criterion is vital, as otherwise we could be directly measuring the pulsation of a vessel instead of the tissue displacement due to choroidal pulsatility; it also covers time series where the ONH has a blood vessel located at the base of the cup.

For the sake of simplicity, the peripapillary retinal deformation value per eye at each angle corresponds to the average calculated over the three regions defined along the temporal side (T, T2, and T3) of the time series (see Fig. 5). The healthy group shows higher average deformation in both regions, 9.0±3.6 μm inferotemporal and 8.8±6.6 μm superotemporal, compared with 4.8±1.8 μm and 6.9±3.5 μm in the glaucoma bleb group. However, a statistically significant difference was found only at the inferotemporal region (P=0.005), as can be seen in Fig. 9. Group differences were analyzed with the Mann–Whitney test. Mean group deformation values are listed in Table 1.

Fig. 9

Comparison of retinal deformation values between healthy and bleb eyes. A statistically significant difference was found only at the inferotemporal region (P=0.005).


Table 1

Mean peripapillary retinal deformation per group.a

Group    Age (y.o.)      Inferotemporal   Superotemporal
Healthy  71.6 (11.13)    9.0 (3.6) μm     8.8 (6.6) μm
Bleb     73.8 (10.72)    4.8 (1.8) μm     6.9 (3.5) μm

aValues shown are mean (SD).

This preliminary study shows that retinal displacement is different for healthy and treated diseased eyes. A more comprehensive characterization could help us to understand how neuroretinal tissue deformation is related to glaucoma progression.

4.

Discussion

Elevated intraocular pressure (IOP) remains the primary risk factor for glaucoma. By definition, IOP is the normal force per unit area exerted by the intraocular fluids on the tissues that contain them. The mechanical response is a function of the individual eye’s geometry, anatomy, and tissue mechanical properties, factors that together determine an individual’s susceptibility to IOP. It is therefore natural to consider that biomechanics also plays an important role in glaucomatous neuropathy, and the key challenge is then to understand how ocular biomechanics is transduced into tissue damage.

The ONH is a region of special biomechanical interest because it is a discontinuity in the corneo-scleral shell, and such discontinuities typically give rise to stress or strain concentrations in mechanical systems.21 Furthermore, recent research based on finite element computational modeling has led to the conclusion that the highest magnitudes of all modes of strain (extension, compression, and shearing) occur within the neural tissue regions and not within the lamina cribrosa,22 as previously believed.

Although the results section presents a preliminary study intended to show the potential use of our algorithm, it reveals that the mechanical response of the neuroretinal tissue differs between healthy and treated diseased eyes. Two factors that may explain why glaucomatous eyes with functional trabeculectomy blebs show less deformation are that they have significantly lower IOP and a thinner retinal nerve fiber layer than the normal group. Our results showed a statistically significant difference at the inferotemporal side only, which can be explained in part by the fact that the loss of neuroretinal rim starts specifically at this region, the one most affected during the evolution of glaucomatous neuropathy.23 Furthermore, in glaucoma eyes with blebs the ocular pulse amplitude is significantly lower than in healthy eyes;24 the driving force for ONH deformation is thus reduced, which may contribute to the smaller observed deformation. However, a more comprehensive characterization needs to be done before reaching any major conclusion. Finally, since the elasticity of the sclera is one of the most important determinants of ONH stress and strain,25 we have recently developed a noninvasive technique to determine overall ocular rigidity using depth-enhanced high-frequency OCT macular imaging.26 This technique will be included in a further study in which retinal deformation will be characterized at different stages of glaucomatous neuropathy, together with a full workup of the eye under study.

The approach proposed in this paper has the potential to be used as a new biomechanical descriptor of the eye, helping in the understanding of the relationship between biomechanical properties of the neuroretinal tissue and the insult of the optic nerve, not only in glaucoma but, in principle, in many other eye pathologies.

5.

Conclusion

We developed a fully automated algorithm to segment the peripapillary retina and the ONH in video rate OCT images using morphological operations. We have demonstrated that our approach accurately segments the ILM even when images show a substantial amount of speckle noise or shadowing, which is typical of high acquisition rates, proving to be very robust when dealing with low-signal images.

The experimental results showed that the algorithm detects the ILM accurately in healthy eyes as well as in several stages of glaucomatous pathology. While the focus of this work was on the segmentation of the ONH, the proposed algorithm can be directly applied to macular OCT images without any modification. Furthermore, the advantages include easy implementation and low computation time.

Additionally, we have developed an analysis algorithm that uses the segmentation results to investigate the dynamics of the neuroretinal tissue. As a preliminary study, peripapillary retinal deformation of healthy and glaucomatous eyes with functional trabeculectomy blebs has been characterized, with the first cohort showing larger deformation at the inferotemporal region. These findings could lead to a new way to identify patients with glaucoma and characterize their risk of aggressive ONH damage using a standard clinical test.

We strongly believe that the powerful combination of video rate OCT imaging and the proposed segmentation and analysis algorithms is key to the further understanding of ocular biomechanics, and how it is transduced into tissue damage.

Acknowledgments

The authors acknowledge support from the National Council of Science and Technology of Mexico (CONACyT), the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, the FRQS Research in Vision Network, and the Fonds de Recherche en Ophtalmologie de l’Université de Montréal. S.C. holds an FRQS Chercheur Junior 2 award.

References

1. D. Huang et al., “Optical coherence tomography,” Science 254, 1178 (1991). http://dx.doi.org/10.1126/science.1957169

2. H. A. Quigley, “Open-angle glaucoma,” N. Engl. J. Med. 328(15), 1097–1106 (1993). http://dx.doi.org/10.1056/NEJM199304153281507

3. K. Singh et al., “Measurement of ocular fundus pulsation in healthy subjects using a novel Fourier-domain optical coherence tomography,” Invest. Ophthalmol. Vision Sci. 52(12), 8927–8932 (2011). http://dx.doi.org/10.1167/iovs.11-7854

4. K. Singh et al., “Pulsatile movement of the optic nerve head and the peripapillary retina in normal subjects and in glaucoma,” Invest. Ophthalmol. Vision Sci. 53(12), 7819–7824 (2012). http://dx.doi.org/10.1167/iovs.12-9834

5. T. Fabritius et al., “Automated segmentation of the macula by optical coherence tomography,” Opt. Express 17(18), 15659–15669 (2009). http://dx.doi.org/10.1364/OE.17.015659

6. M. A. Mayer et al., “Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients,” Biomed. Opt. Express 1(5), 1358–1381 (2010). http://dx.doi.org/10.1364/BOE.1.001358

7. S. J. Chiu et al., “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010). http://dx.doi.org/10.1364/OE.18.019413

8. M. D. Abramoff et al., “Automated segmentation of the cup and rim from spectral domain OCT of the optic nerve head,” Invest. Ophthalmol. Vision Sci. 50(12), 5778–5784 (2009). http://dx.doi.org/10.1167/iovs.09-3790

9. K. Lee et al., “Segmentation of the optic disc in 3-D OCT scans of the optic nerve head,” IEEE Trans. Med. Imaging 29(1), 159–168 (2010). http://dx.doi.org/10.1109/TMI.2009.2031324

10. D. C. Fernandez, H. M. Salinas and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005). http://dx.doi.org/10.1364/OPEX.13.010200

11. D. C. DeBuc et al., “Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software,” J. Biomed. Opt. 14(6), 064023 (2009). http://dx.doi.org/10.1117/1.3268773

12. D. Cabrera DeBuc et al., “Improving image segmentation performance and quantitative analysis via computer-aided grading methodology for optical coherence tomography retinal image analysis,” J. Biomed. Opt. 15(4), 046015 (2010). http://dx.doi.org/10.1117/1.3470116

13. Y. Wang et al., “Quantitative analysis of the intraretinal layers and optic nerve head using ultra-high resolution optical coherence tomography,” J. Biomed. Opt. 17(6), 066013 (2012). http://dx.doi.org/10.1117/1.JBO.17.6.066013

14. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process. 10(2), 266–277 (2001). http://dx.doi.org/10.1109/83.902291

15. C. Li et al., “Level set evolution without re-initialization: a new variational formulation,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 430–436 (2005).

16. W. Yu et al., “Fast and robust active contours for image segmentation,” in Proc. IEEE Int. Conf. on Image Processing (ICIP) (2010).

17. R. Ksantini, B. Boufama and S. Memar, “A new efficient active contour model without local initialization for salient object detection,” EURASIP J. Image Video Process. 2013, 40 (2013). http://dx.doi.org/10.1186/1687-5281-2013-40

18. M. J. A. Girard et al., “Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head,” Invest. Ophthalmol. Vision Sci. 52(10), 7738–7748 (2011). http://dx.doi.org/10.1167/iovs.10-6925

19. R. Koprowski and Z. Wrobel, “Identification of layers in a tomographic image of an eye based on the Canny edge detection,” in Information Technologies in Biomedicine, Vol. 47, pp. 232–239, Springer-Verlag, Berlin, Heidelberg (2008).

20. R. Koprowski and Z. Wrobel, “Hierarchic approach in the analysis of tomographic eye image,” in Computer Recognition Systems 3, Vol. 57, pp. 463–470, Springer-Verlag, Berlin, Heidelberg (2009).

21. A. J. Bellezza, R. T. Hart and C. F. Burgoyne, “The optic nerve head as a biomechanical structure: initial finite element modeling,” Invest. Ophthalmol. Vision Sci. 41(10), 2991–3000 (2000).

22. I. A. Sigal et al., “Predicted extension, compression and shearing of the optic nerve head tissues,” Exp. Eye Res. 85(3), 312–322 (2007). http://dx.doi.org/10.1016/j.exer.2007.05.005

23. J. B. Jonas, M. C. Fernandez and J. Sturmer, “Pattern of glaucomatous neuroretinal rim loss,” Ophthalmology 100(1), 63–68 (1993). http://dx.doi.org/10.1016/S0161-6420(13)31694-7

24. C. Breusegem et al., “The effect of trabeculectomy on ocular pulse amplitude,” Invest. Ophthalmol. Vision Sci. 51(1), 231–235 (2010). http://dx.doi.org/10.1167/iovs.09-3712

25. I. A. Sigal, J. G. Flanagan and C. R. Ethier, “Factors influencing optic nerve head biomechanics,” Invest. Ophthalmol. Vision Sci. 46(11), 4189–4199 (2005). http://dx.doi.org/10.1167/iovs.05-0541

26. L. Beaton et al., “Non-invasive measurement of choroidal volume change and ocular rigidity through automated segmentation of high-speed OCT imaging,” Biomed. Opt. Express 6(5), 1694–1706 (2015). http://dx.doi.org/10.1364/BOE.6.001694

Biography

Maribel Hidalgo-Aguirre received her BSc and MSc degrees with honors in applied physics from the Benemerita Universidad Autonoma de Puebla, Mexico, in 2007 and 2010, respectively, with research experience in experimental physics in the area of nonlinear optics for applications in fields such as microscopy and telecommunications. She is currently a PhD student at the Institut National de la Recherche Scientifique in Canada, working at Maisonneuve-Rosemont Hospital on image analysis for ophthalmology.

Julian Gitelman completed his MSc at McGill University studying the molecular mechanisms of learning and memory. He is currently a medical student at McGill.

Mark Richard Lesk: Biography is not available.

Santiago Costantino received his PhD in ultrafast lasers from the Physics Department of the University of Buenos Aires in 2003. He moved to Canada for his postdoctoral training in microscopy and neuroscience at McGill University. He established his biophotonics lab at the Maisonneuve-Rosemont Hospital, University of Montreal, in 2007. He is now an associate professor and his current research spans microengineering, image analysis and the development of optical tools for vision health.

© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2015/$25.00
Maribel Hidalgo-Aguirre, Julian Gitelman, Mark R. Lesk, and Santiago Costantino "Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography," Journal of Biomedical Optics 20(11), 116008 (24 November 2015). https://doi.org/10.1117/1.JBO.20.11.116008
Published: 24 November 2015
KEYWORDS
Image segmentation

Optical coherence tomography

Optic nerve

Head

Tissues

Retina

Algorithm development
