# Comparison of flash lidar detector options

**Paul F. McManamon**

Exciting Technology LLC, Dayton, Ohio, United States

**Paul Banks**

TetraVue, San Marcos, California, United States

**Jeffrey Beck**

DRS Network & Imaging Systems, LLC, Dallas, Texas, United States

**Dale G. Fried**

3DEO, Inc., Dover, Massachusetts, United States

**Andrew S. Huntington**

Voxtel Inc., Beaverton, Oregon, United States

**Edward A. Watson**

Vista Applied Optics, LLC, Dayton, Ohio, United States

*Opt. Eng*. 56(3), 031223 (Mar 07, 2017). doi:10.1117/1.OE.56.3.031223

#### Open Access

**Abstract.**
Three lidar receiver technologies are compared on the basis of the total laser energy required to perform a set of imaging tasks. The tasks are combinations of two collection types (3-D mapping from near and far), two scene types (foliated and unobscured), and three types of data products (geometry only, geometry plus 3-bit intensity, and geometry plus 6-bit intensity). The receiver technologies are based on Geiger mode avalanche photodiodes (GMAPD), linear mode avalanche photodiodes (LMAPD), and optical time-of-flight lidar, which combines rapid polarization rotation of the image with dual low-bandwidth cameras to generate a 3-D image. We choose scenarios to highlight the strengths and weaknesses of the various lidars. We consider HgCdTe and InGaAs variations of LMAPD cameras. The InGaAs GMAPD and the HgCdTe LMAPD cameras required the least energy to 3-D map both scenarios for bare earth, with the GMAPD taking slightly less energy. We comment on the strengths and weaknesses of each receiver technology. Six bits of intensity gray levels requires substantial energy in all camera modalities.

A flash imaging lidar is a laser-based 3-D imaging system in which a large area is illuminated by each laser pulse and a focal plane array (FPA) is used to simultaneously detect light from thousands of adjacent directions. Mapping and 2-D/3-D imaging are examples of applications for such systems. To make these systems as robust as possible, and to reduce the amount of laser power required, receivers in flash lidar systems typically employ some form of gain. One approach is to provide gain in the incident optical signal (photon gain, one example being fiber amplifiers). Another approach, which is a major subject for this paper, is charge gain inside the detector after photon detection has occurred.

Charge gain processes inside detectors exploit the ability to accelerate charged particles in an applied electric field to amplify the number of charge carriers through energetic collisions. One example is photoemissive detectors in which a primary electron generated by the incident absorbed photon is liberated from the detector photocathode, accelerated through an evacuated space by an applied electric field, and then impacted on a target material, generating additional secondary charge carriers from the primary carrier’s kinetic energy. A second type of detector charge gain process is impact ionization inside an avalanche photodiode (APD) in which the primary photoelectrons do not leave the detector material but undergo ionizing collisions within the semiconductor crystal in a high-electric field region of a reverse-biased diode junction.

We analyze two classes of APDs as lidar detectors: linear mode APDs (LMAPDs) and Geiger mode APDs (GMAPDs). LMAPDs are operated below their breakdown voltage, generating current pulses that are on average proportional to the strength of the optical signal pulse. LMAPDs normally operate continuously and are used with high-gain current or charge amplifiers that develop an output voltage waveform proportional to the LMAPD’s photocurrent waveform. By contrast, GMAPDs are armed by biasing them above their breakdown voltage, rendering them sensitive to single primary charge carriers. Absorption of one or several photons triggers avalanche breakdown of the GMAPD junction, generating a strong current pulse that is easily sensed, the amplitude of which is limited by a quenching circuit. Immediately following breakdown, the GMAPD’s quenching circuit momentarily reduces the applied reverse bias below the GMAPD’s breakdown voltage, terminating the avalanche process and allowing trapped carriers to clear the junction before rearming the GMAPD. If the GMAPD is rearmed too soon, afterpulsing will occur, resulting in false signals. Generally speaking, GMAPDs are sensitive to weaker signals than most LMAPDs, but LMAPDs can directly measure signal return amplitude and can resolve optical pulses separated by as little as a nanosecond, depending on laser pulse width and the APD’s linear gain. Certain high-gain LMAPDs, chiefly electron-avalanche HgCdTe APDs, provide enough linear gain to detect single photons without entering avalanche breakdown.

The GMAPDs considered here, and one of the two types of LMAPD, are manufactured with InGaAs light-absorption layers responsive in the short-wavelength infrared (SWIR) and are typically thermoelectrically (TE)-cooled. Single-photon detection efficiency (SPDE) of 25%, dead time of $1\,\mu\text{s}$ following breakdown, and dark count rate (DCR) of about 6 kHz at 225 K are typical of the 25-$\mu$m-diameter GMAPD pixels for which calculations are made; although not sensitive at 1550 nm, $128\times32$-format arrays of 18-$\mu$m GMAPD pixels have been reported. These arrays operate with 32.5% SPDE and 5 kHz DCR at 253 K due to the use of a wider-bandgap InGaAsP absorption layer optimized for 1064-nm signal detection.^{1} Interframe timing jitter of the 1064-nm-sensitive $128\times32$-format GMAPD array was reported to be about 500 ps, which may have been dominated by clock signal distribution issues in its readout integrated circuit (ROIC) rather than the fundamental timing performance of the GMAPD pixels themselves; timing jitter for $32\times32$-format arrays of 1550-nm-sensitive pixels was reported to be in the 150- to 200-ps range.^{1} The 30-$\mu$m InGaAs LMAPD pixels analyzed typically operate at linear gain $M=20$ with 0.2-nA dark current at 273 K, quantum efficiency (QE) of 80%, and an excess noise factor ($F$) parameterized by ionization coefficient ratio $k=0.2$, resulting in $F=5.56$ at $M=20$. Multistage InGaAs LMAPDs that operate at gains approaching $M=1000$ with excess noise parameterized by $k=0.04$ have been reported, but they are not a mature technology.^{2} Low excess noise LMAPDs made from AlInAsSb^{3} and InAs^{4} have also been reported, but, among the high-gain LMAPDs, electron-avalanche HgCdTe LMAPDs are the most mature. HgCdTe LMAPDs can be manufactured to respond efficiently from the ultraviolet (UV) to the mid-wavelength infrared (MWIR) and can sustain high linear gains of 1000 or more while maintaining an excess noise factor $F$ near 1.
The 64-$\mu$m HgCdTe LMAPD pixels for which calculations are made can operate at linear gains over $M=1000$ but are analyzed at $M=200$, for which the dark current at 100 K is 0.64 pA, $QE=65\%$, and $F=1.3$. The two disadvantages of HgCdTe LMAPDs are the need to cool the detector to near 100 K and the cost.

We also consider low-bandwidth (BW) detectors, which are often used for passive sensors. There are, however, 2-D gated lidar detector arrays, such as the Intevac camera. There are also 3-D imagers that use a Pockels cell to obtain the timing needed to measure range with time-insensitive 2-D imaging arrays, sometimes called optical time-of-flight (OToF) lidars. Last, there are spatial-heterodyne (more broadly, digital holography) uses of these cameras in active imaging. In this paper, the only 2-D cameras we consider are those used in conjunction with the OToF 3-D imagers.

This paper quantitatively compares these detector modalities, using the metric of total energy required to 3-D map two scenarios, with various assumptions for each scenario. To our knowledge, this is the first quantitative comparison between these detector modalities. The most comprehensive comparison prior to this work was part of the 2014 National Academy of Sciences report, *Laser Radar: Progress and Opportunities in Active Electro-Optical Sensing*, chaired by McManamon et al.^{5} Prior to that, there were two comparison papers.^{6,7}

To compare lidar receiver technologies, we define a set of imaging tasks accomplished using direct detection systems. The primary figure of merit is the amount of laser illumination energy needed to accomplish the imaging task. Each imaging task is defined using one of two possible collection geometries (near or far), one of two possible scene types (partially obscured or not obscured), and one of three possible data product types (geometry only, geometry plus 3-bit target reflectance, and geometry plus 6-bit target reflectance). The detectors differ in the total time and number of laser shots required to perform the imaging tasks, with some requiring accumulation of repeat observations over multiple laser shots. These metrics are relevant to imaging dynamic scenes that change spatial configuration over time, but such a comparison is beyond the scope of the present analysis.

To compare camera types, we define two direct detection scenarios. The near scenario has a large detector angular subtense (DAS); the far scenario does not. The large-DAS case has more of an issue with solar background photons on a clear, sunny day. We can define how much energy is required to 3-D map with a bare earth return and no grayscale, how much it takes to 3-D map with returns from three ranges in a given pixel, and how much energy it takes to 3-D map with grayscale (3 or 6 bits). Table 1 specifies the two collection scenarios used in this paper. We envision an aircraft flying at height $R$ above the ground, looking straight down. The receiver aperture can be made smaller if warranted by design trade considerations, but it must not exceed the maximum aperture diameter. The stated range precision must be achieved with a probability of at least 90%.

| Parameter | Near | Far | Units |
|---|---|---|---|
| Range (and altitude) | 200 | 1000 | m |
| DAS | 2.5 | 0.1 | mrad |
| GSD | 0.5 | 0.1 | m |
| Max aperture diameter | 25 | 100 | mm |
| Range precision | 25 | 5 | cm |
| Image size | $128\times128$ | $1024\times1024$ | pixels |
| Image size on ground | 64 | 102.4 | m |
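The scenario geometry in Table 1 is internally consistent: the ground sample distance (GSD) is the range times the DAS, and the ground footprint is the pixel count times the GSD. A minimal Python sketch, with the table values hard-coded as inputs:

```python
def gsd_m(range_m, das_mrad):
    """Ground sample distance: range times detector angular subtense."""
    return range_m * das_mrad * 1e-3

def footprint_m(n_pixels, gsd):
    """Ground footprint of the full array along one axis."""
    return n_pixels * gsd

# Near scenario: 200 m range, 2.5 mrad DAS, 128-pixel array -> 0.5 m GSD, 64 m footprint
near_gsd = gsd_m(200, 2.5)
near_fp = footprint_m(128, near_gsd)

# Far scenario: 1000 m range, 0.1 mrad DAS, 1024-pixel array -> 0.1 m GSD, 102.4 m footprint
far_gsd = gsd_m(1000, 0.1)
far_fp = footprint_m(1024, far_gsd)
```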

Operational lidars image objects that are unobscured as well as objects that are partially obscured by foliage or other surfaces between the sensor and the object being imaged. Because objects under forest canopy are imaged by only those rays that have a clear line-of-sight (LoS) from the sensor, we use the term “foliage poke through” instead of “foliage penetration.” Light incident upon leaves and branches is absorbed or scattered but does not penetrate. When the holes through the canopy are small compared to the projection of a sensor pixel on the canopy, received light for a given pixel can come from multiple ranges. We adopt a simple model that ignores diffraction effects, the relative motion of the aircraft between pulse transmission and detection, and partial blockage by nonparallel light. We note that if the detector pixel FoV is very small, the characteristic sizes of the holes through a real forest canopy might be larger than the projected size of the pixel at the ground. In that case, each pixel sees a single unobscured layer in the canopy or the ground instead of the multiple layers described here. This condition has implications for OToF and GMAPD lidars. Usually, reflectivity from the foliage canopy will be higher than from the ground or manmade targets. For this paper, we assume that the top two surfaces in a pixel have reflectivity $\rho_c=3\rho_g$, where the ground reflectivity is assumed to be $\rho_g=0.10$, but we assume that the cross-section from each range in the pixel is the same. That means each of the two closer reflections covers less pixel area. In a mixed pixel, then, each range has a cross-section, $\sigma$, of

In direct detection systems, two types of information are typically recovered. One is the range from the sensor to the target on a pixel-by-pixel basis, often called a 3-D point cloud image. Here, the range information is gathered (often through some form of timing circuitry) as a function of position on the receiver focal plane; hence, the contrast information provided from pixel to pixel is a variation in range. The other type of information that can be gathered is reflectance, inferred from the received irradiance, often called grayscale. The contrast from pixel to pixel in this case is derived by quantifying the energy deposited on each pixel, which is related to the reflectivity of the surface illuminated by the laser. We are interested in determining the number of photodetections required for each of three types of data products: geometry only (i.e., just a point cloud), geometry plus reflectivity measured with a resolution of $N_{\text{bits}}=3$ bits, and geometry plus reflectivity measured with a resolution of $N_{\text{bits}}=6$ bits. Once we have the number of photodetections for each sensing modality, we can use that information and a standard link budget approach to calculate the total energy required for each modality.

Our approach for active grayscale measurement using laser illuminator photons is as follows. We divide the distribution into a defined number of reflectivity levels (gray levels). We assume that object reflectivities range between a minimum of $\rho_{\min}=0.05$ and a maximum of $\rho_{\max}=0.15$, as illustrated in Fig. 1. The lidar system is then required to discern a reflectivity bin size of $\epsilon=(\rho_{\max}-\rho_{\min})/(2^{N_{\text{bits}}}\rho_{\max})=0.0104$ for the 6-bit case or 0.0833 for the 3-bit case (i.e., the reflectivity intervals are eight times wider). We must be able to distinguish between one gray level and another, even in the presence of noise in the lidar receiver. Our lidar measurements are done with enough SNR so that there is a 90% probability of assigning the target reflectivity to the correct bin, $P_c=0.9$. All measurement modalities are subject to shot noise arising from the fact that the quantization of the received light obeys Poisson statistics. Other sources of instrument noise and distortion will add to this minimum noise level. An example of the reflectance bins and the effects of shot noise is shown in Fig. 2 for the simple case of $N_{\text{bits}}=3$ and $P_c=80\%$. The eight grayscale bins are indicated by the vertical black dashed lines. The colors represent eight different mean numbers of events, from 242.6 (dark blue) to 727.9 (dark red). These different mean numbers of returns could represent the number of received photons from targets of different reflectivity. The solid lines indicate the cumulative distribution function of the results of 2000 random trials. The dashed colored lines indicate the Poisson distribution function for each mean number of events. The shot noise is widest at the highest reflectivity, so it is this limit that sets the minimum required number of received photons.
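The shot-noise behavior described for Fig. 2 can be sketched analytically. Assuming, for illustration, uniformly wide count bins spanning the lowest mean (242.6) and highest mean (727.9) from the figure, and approximating each Poisson distribution as a normal with $\sigma=\sqrt{\text{mean}}$, the probability of a count landing in the correct bin is:

```python
import math

def p_correct_bin(mean_counts, bin_halfwidth):
    """Probability a Poisson-distributed count lands within +/- bin_halfwidth
    of its mean, using the normal approximation (sigma = sqrt(mean))."""
    sigma = math.sqrt(mean_counts)
    z = bin_halfwidth / sigma
    return math.erf(z / math.sqrt(2))

# Eight bins spanning means 242.6 (darkest) to 727.9 (brightest), as in Fig. 2
halfwidth = (727.9 - 242.6) / 8 / 2   # ~30.3 counts per half-bin
p_low = p_correct_bin(242.6, halfwidth)   # low-reflectivity bins: easier
p_high = p_correct_bin(727.9, halfwidth)  # brightest bin: widest shot noise
```

Because the shot noise is widest in the brightest bin, `p_high` is the smallest of the correct-bin probabilities, consistent with the text's statement that the highest reflectivity sets the minimum required photon count.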

While, for this paper, we have only picked conditions that might exemplify an advantage to one detection mode or another, it is interesting to see the effect that different levels of grayscale have on imagery. This can be seen in Fig. 3 for grayscale ranging from 1 to 6 bits.

For LMAPDs, mixed pixels in range will not create measurement issues so long as the detector has enough dynamic range and can record reflections from multiple ranges. For GMAPDs, there is a need to keep the probability of avalanche low on the initial returns, or later range returns will be blocked by the dead time of the GMAPD after an avalanche. For the OToF approach, a mixed pixel provides an average range value, not multiple range values. The OToF approach is, however, likely to have much larger format arrays, so it may have a number of smaller DAS detectors making up one required DAS for our scenarios. Smaller DAS pixels making up one of our larger pixels may have an unobstructed view through the canopy. This same effect could be prevalent when a GMAPD camera uses smaller DAS pixels to mitigate background effects, although the calculations done later in the paper for GMAPD assume mixed pixels rather than single range small pixels.

Many system assumptions are common to all of the scenarios analyzed, as shown in Table 2. We assume a visibility of 23 km, which removes most of the atmospheric attenuation because, at $1.55\,\mu\text{m}$, this results in a $\beta$ of 0.00011. We assume an average 10% Lambertian reflectivity, a bright sunny day, and a spectral band-pass filter as narrow as 1 nm. The operating wavelength is 1550 nm.

| Parameter | Bare earth | Foliage poke through | Units |
|---|---|---|---|
| Visibility | 23 | 23 | km |
| Depth of range gate | 50 | 100 | m |
| Number of range returns | 1 | 3 | — |
| Reflectance | 10 | 30/30/10 | % |
| Wavelength | 1550 | 1550 | nm |

The model described by McManamon et al.^{8} was used as a basis for our treatment of the solar background. In this paper, we assume a variable width filter that is adjusted based on the sensor field-of-view (FoV). The filter width can be as low as 1 nm, but, as the acceptance angle becomes larger, we need to increase the angular acceptance width of the filter.

Commercially available narrow-band filters can be placed in the receiver optical path to block unwanted background light. We assume that the narrowest achievable BW for a reasonable cost is $\sigma_{\min}=1\,\text{nm}$ for collimated light at normal incidence (for example, Alluxa offers a filter width of 0.7 nm for collimated light at 1064 nm). As the sensor FoV is increased and rays at larger angles from the optical axis must be accepted, the range of effective wave vectors widens; the wider filter BW passes more scene radiance, introducing more noise. The shift of the resonance wavelength with angle can be modeled as a Fabry–Perot resonator, as given in Eq. (2) and Fig. 4:

Figure 4 indicates the required filter BW for a typical material effective index $n_{\text{eff}}=2$. The widest sensor FoV occurs when a single array images the entire area; the angular distance to the corner of the array is
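Modeling the filter as a Fabry–Perot resonator, the passband center blue-shifts with incidence angle roughly as $\lambda(\theta)=\lambda_0\sqrt{1-\sin^2\theta/n_{\text{eff}}^2}$, so the filter must be at least as wide as the shift at the edge of the acceptance cone. A sketch under that reading of Eq. (2), taking the required BW as the larger of the 1-nm floor and the edge-of-cone shift:

```python
import math

LAMBDA0_NM = 1550.0   # operating wavelength
N_EFF = 2.0           # typical filter effective index (from the text)
BW_MIN_NM = 1.0       # narrowest affordable filter

def filter_bw_nm(cone_angle_rad):
    """Required filter bandwidth for a given acceptance cone half-angle.
    Fabry-Perot blue shift: lambda(theta) = lambda0*sqrt(1 - sin^2(theta)/n_eff^2)."""
    s = math.sin(cone_angle_rad)
    shift = LAMBDA0_NM * (1.0 - math.sqrt(1.0 - (s / N_EFF) ** 2))
    return max(BW_MIN_NM, shift)

# Reproduces Table 4: a 0.165-rad cone needs ~5.2 nm; a 0.226-rad cone needs ~9.8 nm
```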

Our comparison of detector technologies requires that the lidar can be operated in full sunlight. Table 3 gives the number of photons from the sun captured in each DAS per nanosecond, using a 1.0-nm wide filter. Background photon rates for wider spectral filters are obtained by linear scaling from this table. Table 4 then provides the number of background photons from the sun for specific cases of interest in this analysis.

**Background photons from the sun**

| Wavelength ($\mu$m) | Radiance per sq m of sun surface (W) | Radiance from total sun area (W) | Irradiance on 1 sq m at earth (W) | Irradiance per nm at earth (W) | Photons per s per nm per sq m at earth | Photons per ns per nm per sq m at earth | Receiver diameter (mm) | DAS (rad) | Captured photons per ns in 1 nm |
|---|---|---|---|---|---|---|---|---|---|
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 100.0 | 0.0001 | 0.0024 |
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 25.0 | 0.0025 | 0.0931 |
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 10.0 | 0.0025 | 0.0149 |
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 5.0 | 0.0025 | 0.0037 |
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 2.5 | 0.0025 | 0.00093 |
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 25.0 | 0.0005 | 0.0037 |
| 1.55 | 3.65E+12 | 2.22E+31 | 1.748E+08 | 0.175 | 1.36E+18 | 1.36E+09 | 25.0 | 0.00025 | 0.00093 |

| $N_x$ | $N_y$ | Aperture diameter (mm) | Pixel FoV (mrad) | Acceptance cone angle (rad) | Filter BW (nm) | Range (m) | Background rate (photons/$\mu$s) |
|---|---|---|---|---|---|---|---|
| 128 | 128 | 100 | 0.10 | 0.009 | 1.00 | 1000 | 2.80 |
| 32 | 128 | 100 | 0.10 | 0.007 | 1.00 | 1000 | 2.80 |
| 128 | 128 | 25 | 2.50 | 0.226 | 9.78 | 200 | 1108.48 |
| 32 | 128 | 25 | 2.50 | 0.165 | 5.23 | 200 | 592.75 |
| 32 | 128 | 25 | 1.25 | 0.082 | 1.32 | 200 | 37.25 |
| 32 | 128 | 25 | 0.63 | 0.041 | 1.00 | 200 | 7.08 |
| 32 | 128 | 25 | 0.31 | 0.021 | 1.00 | 200 | 1.77 |
| 32 | 128 | 25 | 0.22 | 0.015 | 1.00 | 200 | 0.89 |
| 32 | 128 | 25 | 0.16 | 0.010 | 1.00 | 200 | 0.44 |
| 32 | 128 | 25 | 0.08 | 0.005 | 1.00 | 200 | 0.11 |
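The "captured photons" column of Table 3 can be approximated from the quoted spectral photon irradiance, the 10% Lambertian ground, the aperture area, and the pixel solid angle (DAS squared). The one-way atmospheric transmission of 0.7 below is our assumption, chosen to match the table, not a value stated in the paper:

```python
import math

PHOTONS_PER_NS_NM_M2 = 1.36e9  # solar spectral photon irradiance at 1.55 um (Table 3)
RHO_GROUND = 0.10              # Lambertian ground reflectivity
T_ATM = 0.7                    # assumed one-way atmospheric transmission (our fit)

def captured_photons_per_ns(aperture_d_m, das_rad, filter_bw_nm=1.0):
    """Solar background photons per ns collected by one pixel."""
    # Ground radiance from a Lambertian reflector: irradiance * rho / pi
    radiance = PHOTONS_PER_NS_NM_M2 * T_ATM * RHO_GROUND / math.pi
    area = math.pi * (aperture_d_m / 2.0) ** 2   # receive aperture area, m^2
    solid_angle = das_rad ** 2                   # pixel solid angle, sr
    return radiance * area * solid_angle * filter_bw_nm

# ~0.093 photons/ns for a 25-mm aperture and 2.5-mrad DAS (Table 3, line 2)
```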

###### Link Budget Calculations to Determine the Required Laser Energy, Once the Required Number of Photons per Pixel is Known

For each modality, we use the same link budget equations to determine how much energy per pulse we will need for the scenarios, based on how many photons reach each detector. A 2012 review^{9} article shows

For 23-km visibility, $\beta^{-1}$ is $\approx 23\,\text{km}$ at 1550 nm.^{10} The required received energy, $E_R$, can be specified as the energy in $N$ photons. We assume a wavelength of 1550 nm, for which

We assume a system efficiency through the optical train of $\eta_{\text{sys}}=60\%$. We assume the area illuminated is 1.1 times as large as the angular area covered by our detectors to allow for some illumination inefficiency. This area grows with range. The cross-section is the reflectivity, $\rho_g$, times the area seen by a given detector, which also grows with range:

This is similar to Eq. (1), but with no foliage poke through, so the whole pixel is viewed. The ratio of area illuminated to cross-section is

For each modality, we can then calculate how much energy is required to map the area in each of the scenarios based on that detector’s required value of $N$. The energy calculated by Eq. (10) covers only the pixels spanned by a single detector array. For example, in the case of the GMAPD, we use a $32\times128$ detector array. In our near-range scenario, the $128\times128$-pixel scene would be covered by stepping the GMAPD array’s FoV four times, and the energy computed by Eq. (10) would be multiplied by a factor of 4; in our far-range scenario, we have $1024\times1024$ pixels, so the energy calculated by Eq. (10) would need to be multiplied by a factor of 256. For the near-range GMAPD case, if we use a smaller DAS to alleviate solar background, the number of required steps increases commensurately. Multiple flash images of the same area of the scene may be required to collect geometry and/or grayscale data of the precision required by each scenario, depending on detector type. For example, GMAPD cameras are often designed to have a low probability of detection per pulse, with the image built up by accumulating multiple pulses against the target. The number of laser shots required per array step across the scene also multiplies the result of Eq. (10) when computing the total energy required by a given detector for a given range scenario and data product.
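This bookkeeping is a simple multiplication, sketched below. The array and scene formats are the ones quoted in the text; the per-pulse energy and shots-per-step are placeholders that would come from Eq. (10) and the detector model:

```python
import math

def total_energy_j(energy_per_pulse_j, scene_px, array_px, shots_per_step):
    """Total laser energy = per-pulse energy x number of array steps x shots per step."""
    steps = (math.ceil(scene_px[0] / array_px[0])
             * math.ceil(scene_px[1] / array_px[1]))
    return energy_per_pulse_j * steps * shots_per_step

# Near scenario: 128x128 scene with a 32x128 GMAPD array -> 4 steps
# Far scenario: 1024x1024 scene with a 32x128 array -> 32 * 8 = 256 steps
```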

Lidar systems using arrays of GMAPDs were first proposed by Marino^{11}^{,}^{12} and demonstrated by MIT Lincoln Laboratory.^{13} Development work has continued to advance the technology for Geiger-mode ladar components, systems, data processing, and data exploitation in many research groups. Figure 5 shows a structure for a GMAPD detector.^{14} Our analysis relies on previous work by Fouche,^{15} who analyzed signal requirements in the presence of background noise. Recent modeling by Kim et al.^{16} provides a detailed description of example system behavior. We restrict our analysis to commercially available Geiger-mode cameras. We consider commercial framing cameras with frame rates up to 186 kHz for the $32\times32$ format or up to 110 kHz for the $32\times128$ format. An asynchronous-readout $32\times32$ camera is also now commercially available; it is capable of even higher readout rates, limited only by the dead time between detections. In GMAPDs, the detector is biased above the breakdown voltage, so a photoelectron generated in the absorber region results in a large avalanche, often producing a voltage swing on the order of 1 V. Whether one photon or many photons hit the detector, the same large avalanche occurs. There is a dead time of 400 ns to $1\,\mu\text{s}$ after each triggered event, which can block detection of photons arriving later unless the probability of avalanche is kept low. For the case with foliage poke through, we set the average number of photons per pixel to 0.8 photons returned for the expected range and reflectivity of the target, i.e., a 20% probability of detection per pulse given a PDE of 25%. With GMAPDs, there is crosstalk between detectors, caused when a photon emitted during breakdown of one pixel triggers breakdown in another pixel. The noise due to crosstalk tends to be concentrated in the range region where most of the detections occur. Even there, crosstalk noise is much smaller than noise due to background light for the cases analyzed in this paper. GMAPD flash imaging lidars tend to be designed to run at high frame rates, and many samples are used to capture the necessary number of photodetection events to achieve the signal level requirements. Laser pulse energy is lower, the number of photoelectrons generated per pulse is low, and the probability of a pixel firing is low. This has the technical benefit of keeping peak laser intensity low, since each pulse is weak, while maintaining high average power. This means that when we calculate the energy required to 3-D image a region, the main variable will be the number of pulses, not the energy transmitted per pulse.

There are multiple detection events that can trigger a GMAPD receiver: the detection of a desired target photon, the detection of an undesired foreground clutter photon (such as backscatter from foliage), the detection of undesired background radiation (such as the sun), or the undesired detection of a dark electron. Crosstalk can also trigger a GMAPD. If we send out many laser pulses, we will get coincident returns (returns in the same range bin) for reflections from a target or from fixed foreground objects, but returns from dark current, background, fog, snow, or rain will be distributed in range with very low probability of coincidence.

One of the first things to address for GMAPDs is whether background from the sun will affect either of the two scenarios. We conclude that it will not significantly affect the small-DAS case but will significantly affect the large-DAS case. Solar background is detrimental in two ways, blocking and noise, with blocking the more important for this analysis. If the GMAPD undergoes an avalanche before the signal photons arrive, the detector is “blocked” and is unable to detect the signal until after the dead time. On the other hand, noise can cause the system to erroneously declare a surface to be present. Given a background photon rate per pixel of $\gamma$ taken from Table 4, a PDE of 25%, and a gate width $W$ during which the APD is sensitive, the mean number of photoelectrons generated in the APD by the sun before the signal occurs is

For a gate width $W=100\,\text{m}$, the background must be below $\gamma=1.33\,\text{MHz}$. Clearly, DCRs, which are typically 1 to 10 kHz, can be neglected.

The background photon rate can be limited by introducing attenuation on the receiver, reducing the aperture, or increasing the focal length and therefore reducing the pixel DAS. The GMAPD community prefers to increase the focal length while maintaining the aperture diameter to reduce blocking loss. The disadvantage of decreasing DAS instead of aperture diameter is that we then must scan more locations to develop the FoV required by the scenario. This will probably increase collection time.

In line 2 of Table 5, we take values from line 2 of Table 3. We see in Table 3, line 2, that with a 25-mm-diameter aperture and a DAS of 2.5 mrad, we capture 0.0931 photons/ns. In Table 5, we see that for this case the sun completely blocks our detector, giving a 0% probability of not having an avalanche. This is the baseline case for our near-range, large-DAS scenario. From line 4 of Table 3, we have a gate width $W=50\,\text{m}$ and 0.00373 photons/ns. In that case, we will have a 70% probability of not being blocked if we either reduce our aperture diameter from 25 to 5 mm while keeping the DAS at 2.5 mrad or reduce the DAS to 0.5 mrad while maintaining a 25-mm-diameter receive aperture. In either case, the result is the same in terms of sun blockage. For the gate width $W=100\,\text{m}$, we can either reduce the aperture to 2.5 mm in diameter or the DAS to 0.25 mrad to avoid sun blocking. The smaller-DAS case can use a narrower filter width, which is one reason it results in lower energy than decreasing the receive aperture size, and, of course, it provides higher resolution. Innovative processing also provides a significant advantage for reducing the DAS compared to reducing the receive aperture diameter. In the next section, we discuss coincidence processing, which is used by GMAPDs to achieve the required 90% probability of detection. If we reduce the DAS by a factor of 5 in each dimension, then each of our $0.5\times0.5\,\text{m}$ pixels is made up of 25 $0.1\times0.1\,\text{m}$ pixels. For surfaces that are smoothly varying, we can use these 25 samples to do coincidence processing, requiring up to $25\times$ fewer pulses. This reduces the required energy for mapping the area.

**Probability of avalanche**

| Photons/ns | Range bin width (ns) | Photons per range bin | QE | Probability of avalanche per bin (%) | No. of bins | Range window width (m) | Probability of not having avalanched after all bins (%) |
|---|---|---|---|---|---|---|---|
| 0.00238 | 3.3333 | 0.00793 | 25% | 0.2 | 200 | 100.00 | 63 |
| 0.0931 | 16.6670 | 1.5522 | 25% | 36.0 | 20 | 50.00 | 0 |
| 0.0149 | 16.6667 | 0.2483 | 25% | 6.9 | 20 | 50.00 | 24 |
| 0.00373 | 16.6667 | 0.0622 | 25% | 1.8 | 20 | 50.00 | 70 |
| 0.00373 | 16.6667 | 0.0622 | 25% | 1.8 | 40 | 100.00 | 49 |
| 0.00134 | 16.6667 | 0.0223 | 25% | 0.6 | 40 | 100.00 | 77 |
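The survival probabilities in Table 5 can be approximated by treating the range bins as independent, with a per-bin avalanche probability equal to the photon rate times the bin width times the PDE. This simplified model (our reconstruction, which differs from the table by a few percent, presumably due to rounding) is:

```python
def p_not_blocked(photons_per_ns, bin_width_ns, n_bins, pde=0.25):
    """Probability the GMAPD reaches the end of the range gate un-avalanched,
    assuming independent bins and a small per-bin avalanche probability."""
    p_bin = photons_per_ns * bin_width_ns * pde  # per-bin avalanche probability
    return (1.0 - p_bin) ** n_bins

# 0.00373 photons/ns over 20 bins of 16.67 ns (50-m gate): ~0.73, close to
# the 70% in Table 5; the heavily blocked 0.0931 photons/ns case gives ~0
```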

When using GMAPDs in a foliage poke through scenario, we keep the probability of detection from a single pulse low (e.g., $P_{\text{det}}=0.2$) because of the dead time after an avalanche. This preserves our ability to see objects farther in range than the initial return. Sometimes an even lower probability of detection, such as 0.1, is used. If we do not have mixed pixels with multiple range returns, we can allow the probability of detection to increase. For GMAPDs, we want to determine the number of pulses, $N_p$, that must be transmitted to cause a GMAPD pixel to fire on $M$ pulses scattered from the surface of interest (we anticipate that $M$ will be a minimum of two or three detections from the surface of interest). This coincidence detection distinguishes a real return from a physical object from a random false return. We rely on the fact that noise is randomly distributed in time, whereas returns from real objects occur only at the range of an object. We ignore nonuniform detector illumination and sensitivity.

The probability $Po$ of detecting a photon backscattered from the object of interest can be expressed as a conditional probability:^{17}


The value for $Po$ can be calculated once the parameters of the lidar system are specified. However, some insight can be obtained without considering a specific system configuration. To do this, we recast Eq. (15) in the following manner:

Since we want to maximize the number of detections from the object of interest rather than from obscurations or false counts, we maximize the value of $P_o$ subject to the constraint $rP(o|\bar{n})<1$, where $r$ is the ratio of near-range reflected light to target-reflected light. As a reminder, $P_o$ is the probability of detecting a photon from the object of interest, whereas $P(o|\bar{n})$ is the probability of detecting a photon from the object of interest with no obscuration. For our foliage poke through example, we have twice as much near-range reflection as target reflection, with the last return considered the target; in that case, $r=2$. Two-thirds of the return flux comes from the foreground surfaces and one-third from the final surface. We maximize the probability of detecting a photon from the object of interest by differentiating Eq. (17) with respect to $P(o|\bar{n})$ and setting the derivative equal to zero. We find that the maximum value occurs for

This can guide where we set our design probability of detection. With $r=2$ for foliage poke through, we want a design $P_{det}$ of 0.25, or 1 photon received from the target with a 25% PDE, not much different from our case without foliage. We note that the expression for $P(o|\bar{n})_{max}$ is valid for $r \ge 0.5$. Traditionally, when designing GMAPD lidars, people design with 0.1 to 0.2 probability of avalanche from the target, or 0.4 to 0.8 photons with a PDE of 25%.
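As a numerical check on this design rule, the following sketch assumes the maximization has the closed form $P(o|\bar{n})_{max}=1/(2r)$ implied by the derivative condition described above (function name is illustrative):

```python
def optimal_pdet(r):
    """Design single-pulse detection probability that maximizes the chance
    of detecting the object of interest when near-range returns are r times
    stronger than the target return. The closed form 1/(2r) holds for
    r >= 0.5; for weaker obscuration the optimum saturates at 1."""
    return 1.0 if r < 0.5 else 1.0 / (2.0 * r)

print(optimal_pdet(2.0))  # foliage poke through example, r = 2
```

For the $r=2$ foliage poke through case this returns 0.25, matching the design point quoted above.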

To measure grayscale using a GMAPD, multiple pulses are transmitted and the grayscale is built up one photodetection at a time. We compute the number of samples that must be transmitted to achieve the required number of photodetections. We use the term “samples” because the number of pulses multiplied by the number of samples per pulse gives the total number of samples. The required number of photodetections is set by the need to make the gray level separation large enough that the fluctuation in the number of detections is smaller than the separation.

Since the mean probability of detection $P_o$ on any given pulse is less than 1, there will be a fluctuation in the number of detections obtained for a given number of transmitted pulses. As discussed above, the number of detections for a given number of transmitted pulses follows the binomial distribution shown in Eq. (16). For a binomial distribution, the mean number of detections out of $N$ pulses is $NP_o$, and the variance in the number of detections is $NP_o(1-P_o)$. To measure $N_g$ gray levels ($N_g = 2^{N_{bits}}$), we need $N_g$ separations, each of which is 3.34 times the standard deviation; the factor of 3.34 ensures that $\sim 90\%$ of the probability distribution is contained within the gray level separation. Hence, we need

$$\frac{N_p P_o}{N_g} = 3.34\,\sqrt{N_p P_o (1-P_o)}, \qquad \text{i.e.,} \qquad N_p = (3.34)^2\,N_g^2\,\frac{1-P_o}{P_o}. \tag{19}$$
As specified earlier, we have assumed a variation in reflectivity from 0.05 to 0.15, i.e., a reflectivity spread of 0.10.

Once $N_g$ and $P_o$ are specified, $N_p$ can be computed from Eq. (19). We can see in Eq. (19) that the required number of pulses is proportional to the square of the desired number of gray levels.
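The pulse-count scaling can be sketched numerically, assuming Eq. (19) has the form $N_p = (3.34)^2 N_g^2 (1-P_o)/P_o$ recovered above; the result lands near the $\sim$4062-pulse entry of Table 6 (small differences come from rounding of the 3.34 factor):

```python
import math

def pulses_for_grayscale(n_gray, p_o, k_sigma=3.34):
    """Samples needed so each of the n_gray gray-level separations
    (mean Np*p_o/n_gray detections wide) spans k_sigma standard
    deviations of the binomial detection count:
    Np*p_o/n_gray = k_sigma*sqrt(Np*p_o*(1-p_o)), solved for Np."""
    return math.ceil(k_sigma**2 * n_gray**2 * (1.0 - p_o) / p_o)

print(pulses_for_grayscale(8, 0.15))   # 3 bits of grayscale
print(pulses_for_grayscale(64, 0.15))  # 6 bits of grayscale
```

The quadratic growth in $N_g$ is what makes 6-bit grayscale so expensive: 64 levels require $64\times$ more pulses than 8 levels at the same $P_o$.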

For the small DAS case, we need to map an area of $1024\times1024$ pixels. With commercially available GMAPD cameras, we can use either a $32\times32$ or a $32\times128$ detector array. Even with the $32\times128$ array, we will need $8\times32$ steps, or a total of 256 step stares for the small DAS scenario. For the large DAS case, we only need $128\times128$ pixels with a DAS of 2.5 mrad each, so we could take four steps using the $32\times128$ format GMAPD array. If we reduce the DAS to reduce sun blocking loss (instead of decreasing aperture), then we need to increase the number of step stares. To fill the same area while reducing the DAS to $0.5\times0.5$ mrad will increase the required number of steps from 4 to 100; for the foliage poke through case with a DAS of $0.25\times0.25$ mrad, it increases the required number of steps to 400. The foliage poke through case has a larger window in range, so it requires more reduction in DAS to prevent detector blockage by the solar background. This small DAS will allow us to use a 1-nm wide filter, whereas with a large DAS, we would need a wider wavelength filter. The smaller DAS also increases the angular resolution of the image and reduces the required number of pulses because we can obtain more samples per pulse.
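The step-stare counts quoted above follow from simple tiling arithmetic; a minimal sketch (function name is illustrative):

```python
import math

def step_stares(scene_w, scene_h, array_w, array_h):
    """Number of step-stare positions needed to tile a scene of
    scene_w x scene_h output pixels with an FPA of array_w x array_h
    pixels, stepping in both directions."""
    return math.ceil(scene_w / array_w) * math.ceil(scene_h / array_h)

print(step_stares(1024, 1024, 128, 32))  # small DAS with a 32x128 array
print(step_stares(128, 128, 128, 32))    # large DAS with a 32x128 array
```

This reproduces the 256 steps for the small DAS scenario and 4 steps for the large DAS scenario cited in the text.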

In Table 6, we show the required number of samples, $N_p$, and the required mapping energy for no grayscale and for either 3 bits (8 gray levels) or 6 bits (64 gray levels) of grayscale for the large DAS case. The number of pulses required for the no grayscale case is determined by how many pulses it takes to have a 90% probability of coincidence between two samples at the same range. In each case, we have chosen this to be one pulse because we obtain 25 or 100 samples per pulse. The number of pulses required for grayscale comes from Eq. (19). $r=0$ is the case for no obscuration, whereas $r=2$ is our foliage poke through case with twice as much energy reflected before hitting the final target.

**Table 6** Required number of pulses and required 3-D mapping energy for the large DAS scenario.

| Ratio of near-reflected to target-reflected light, $r$ | Probability of detection including blocking loss, $P(o,n)$ | Probability of detection without blocking loss, $P_o$ | Number of gray levels, $N_g$ | Required number of pulses, $N_p$ | Pulses for 90% probability of two-pulse coincidence | Samples per pulse | Total mapping energy without grayscale (mJ) | Total mapping energy with grayscale (mJ) |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.15 | 0.15 | 8 | 4062 | 1 | 25 | 0.154 | 25.0 |
| 0 | 0.15 | 0.15 | 64 | 259,959 | 1 | 25 | 0.154 | 1601.4 |
| 2 | 0.04 | 0.024 | 8 | 29,150 | 1 | 100 | 0.164 | |
| 2 | 0.04 | 0.024 | 64 | 1,865,591 | 1 | 100 | 0.164 | |

Next, we will look at the small DAS scenario. Table 7 shows the required total energy to map the small DAS case using GMAPDs. With grayscale, especially higher levels of grayscale, we see that the required energy is significant.

**Table 7** Required number of pulses and required 3-D mapping energy for the small DAS scenario.

| Ratio of near-reflected to target-reflected light, $r$ | Probability of detection including blocking loss, $P(o,n)$ | Probability of detection without blocking loss, $P_o$ | Number of gray levels, $N_g$ | Required number of pulses, $N_p$ | Pulses for 90% probability of two-pulse coincidence | Mapping energy per pulse (mJ) | Total mapping energy without grayscale (mJ) | Total mapping energy with grayscale (mJ) |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.2 | 0.2 | 8 | 2867 | 9 | 0.99 | 8.9 | 2832.8 |
| 0 | 0.2 | 0.2 | 64 | 183,501 | 9 | 0.99 | 8.9 | 181,298.8 |
| 2 | 0.25 | 0.15 | 8 | 4062 | 7 | 1.23 | 8.6 | |
| 2 | 0.25 | 0.15 | 64 | 259,959 | 7 | 1.23 | 8.6 | |

The range precision for these scenarios is not a challenge for GMAPDs and could be significantly better. This will be further discussed in the summary section.

InGaAs LMAPDs are manufactured from thin films of $\mathrm{In_{0.53}Ga_{0.47}As}$ and either $\mathrm{In_{0.52}Al_{0.48}As}$ or InP, epitaxially grown on InP substrates. The principal functional layers are the relatively narrow-bandgap (0.75 eV) InGaAs absorption layer and the relatively wide-bandgap multiplication layer made from either InAlAs (1.46 eV) or InP (1.35 eV), separated by a space charge layer that keeps the electric field in the absorber weak enough to avoid excessive tunnel leakage while the field in the multiplier is strong enough to drive a useful rate of impact ionization. This configuration is called the separate absorption, charge, and multiplication design. The layer ordering of absorber and multiplier relative to the anode and cathode (and the polarity of doping in the charge layer) depends on whether InAlAs or InP is selected as the multiplier material. Holes avalanche more readily than electrons in InP, so in an InP-multiplier APD, the absorber is placed next to the cathode and the charge layer is n-type; vice-versa for an InAlAs-multiplier APD. APD pixels may be formed either by patterned diffusion of the anode into the epitaxial material or by patterned etching of mesas from the thin film (in which case the anode layer was doped during epitaxial growth rather than diffused). Metal contact pads are deposited on individual pixel anodes, while a common cathode connection is made through the substrate. In etched mesa designs, the pixel mesa sidewalls are chemically passivated and encapsulated to protect them from environmental degradation. Figure 7 depicts the structure of an InAlAs-multiplier, etched-mesa InGaAs LMAPD pixel of the type used in the detector array for which the calculations are made.

Voxtel presently offers a prototype $128\times128$ flash lidar camera with a TE-cooled InGaAs photodiode detector array, and compatible LMAPD arrays are under development. Among others, Advanced Scientific Concepts (ASC), Inc., recently acquired by Continental, sells $128\times128$ InGaAs LMAPD-based lidar cameras. The InGaAs LMAPD section is based on detector characteristics for Voxtel’s commercial InGaAs APD product, whereas detector characteristics for the HgCdTe LMAPD section are those published by DRS. In both sections, ROIC characteristics typical of two different design nodes (higher BW, higher circuit noise, and smaller pixel format, or vice-versa) are used to analyze LMAPD FPA performance.

In general, flash lidar ROICs designed for linear-mode detectors employ a circuit in each pixel that includes a front-end transimpedance amplifier to convert current or charge from the detector into a voltage signal, various filtering or pulse-shaping stages, and voltage sampling, storage, and readout circuitry. Two main sampling architectures are used: synchronous schemes, in which the reflected waveform received by each pixel is regularly sampled with a period on the order of nanoseconds, and asynchronous schemes, in which a comparator triggers sampling of reflected pulse amplitude and time-of-arrival when the signal exceeds an adjustable detection threshold. Provided the signal chain BW is high enough, both the synchronous “waveform recorder” scheme and the asynchronous event-driven scheme can support multihit lidar, in which multiple reflections from a single transmitted laser pulse, arriving within nanoseconds of each other, are separately resolved and timed to penetrate obscurants like foliage. In both cases, sampling is active during a range gate in which target returns are expected, samples are stored locally in each pixel during the range gate, and the accumulated waveform or pulse return data are read out from the array between laser pulses. Higher sample capacity drives the ROIC pixel footprint because of the area required for storage capacitors. In general, the event-driven sampling architecture requires less space to implement because fewer samples must be stored to observe a given number of pulse returns per laser shot. The regularly sampled approach has been called full-waveform lidar when a large number of samples are stored. The sampling architecture analyzed here is the event-driven, asynchronous type, with an in-pixel storage capacity of up to three range and amplitude sample pairs, which matches the foliage poke through case analyzed here. Generic characteristics typical of this architecture are applied in the analysis.

High BW operation of the signal chain in a flash lidar ROIC pixel generally requires high current draw during the range gate, and the sourcing and distribution of the supply current becomes more challenging as the array format grows. For this reason, we analyze two different configurations: high range precision (higher BW) operation in which pixel current draw limits the active format to about $32\xd732\u2009\u2009pixels$ and operation of a larger ($128\xd7128$) format array with reduced range precision (lower BW). Typical camera frame rates are in the 1 to 10 kHz range but depend on the array format, the number of samples stored and read out per pixel, and the number of output data channels operated in parallel. Aside from differing supply requirements, range precision, format, and frame rate, it should also be noted that operation of the pixel signal chain at different bandwidths will affect absolute sensitivity. Most of the relevant noise sources are wide-band, so, all else being equal, operation of the signal chain with higher BW means more in-band noise and lower sensitivity. However, the signal chain’s BW also affects sensitivity to laser pulses of different shape and duration since the overlap of an input pulse’s frequency spectrum with the ROIC’s transfer function will determine how efficiently the signal is amplified. Here, we will assume that the sensor is responding to 4-ns FWHM pulses in the calculations for the low-BW configuration and to 1-ns FWHM pulses for the high-BW configuration.

As will shortly be established, the high-BW configuration is not required for the large DAS scenario (25-cm range precision), since that requirement can be met in a single laser shot using the larger-format, low-BW configuration. However, the smaller active format, high-range-precision configuration may be of use for the small DAS scenario (5-cm range precision). If the low-BW configuration is used in the small DAS scenario, then range measurements from multiple laser shots must be averaged to reduce the standard error of the mean range to below 5 cm. We will examine whether more laser shots per array step with fewer array steps (low-BW configuration) or fewer shots per array step with more array steps (high-BW configuration) requires less energy to develop the required 3-D point cloud for the small DAS scenario. Averaging multiple range measurements reduces the standard error of the mean by the square root of the number of measurements, so the minimum number of range measurements $N_{R\,min}$ of timing standard deviation $\sigma_{t\,ROIC}$ that must be averaged to achieve a particular timing precision requirement $\sigma_{t\,required}$ is

$$N_{R\,min} = \left\lceil \left(\frac{\sigma_{t\,ROIC}}{\sigma_{t\,required}}\right)^{2} \right\rceil. \tag{20}$$
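The averaging requirement just described can be sketched directly from the square-root reduction of the standard error of the mean (function name is illustrative):

```python
import math

def min_range_measurements(sigma_t_roic, sigma_t_required):
    """Minimum number of independent range measurements to average so
    that the standard error of the mean, sigma_t_roic / sqrt(N), meets
    the timing-precision requirement sigma_t_required."""
    return math.ceil((sigma_t_roic / sigma_t_required) ** 2)

# e.g., a ROIC whose native timing sigma is twice the requirement
print(min_range_measurements(2.0, 1.0))
```

The units cancel, so the same function applies whether the precision is expressed in time or in range.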
The variety of InGaAs LMAPD FPA analyzed here makes pulse return time estimates by sampling an analog voltage ramp that is distributed to all pixels in the array. Sampling of the ramp is triggered when the rising edge of a signal pulse from a detector pixel passes through an adjustable detection threshold. The threshold level must be optimized to extinguish false alarms arising from circuit noise in the ROIC convolved with the multiplied shot noise on the APD pixel’s dark current and background photocurrent. The ROIC’s fundamental timing uncertainty combines, in quadrature, the voltage noise on the signal that triggers sampling of the ramp (jitter) and the noise associated with reading the sampled voltage itself (resolution):

$$\sigma_{t\,ROIC} = \sqrt{\sigma_{jitter}^{2} + \sigma_{resolution}^{2}}. \tag{21}$$
Each pulse return at a given optical signal level has some probability, $P_{D1}$, of exceeding the detection threshold. In the large DAS scenario, where the ROIC’s native timing precision is adequate to achieve the range precision requirement of 25 cm, $P_{D1}$ is both the probability of detecting a target surface within a pixel’s instantaneous field-of-view (IFoV) and the probability of ranging to that surface with the required precision. However, in the small DAS scenario with the low-BW configuration, multiple range measurements must be averaged to achieve the range precision requirement of 5 cm. In that case, if $S$ total laser shots are transmitted, the probability of detecting enough pulse returns to achieve a standard error of the mean less than 5 cm is

$$P_D = \sum_{k=N_{R\,min}}^{S} \binom{S}{k}\,P_{D1}^{\,k}\,(1-P_{D1})^{S-k}. \tag{22}$$
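Assuming independent per-shot detections, this is a binomial tail sum, which can be sketched as follows (function name is illustrative):

```python
from math import comb

def p_enough_detections(s_shots, n_min, p_d1):
    """Probability of at least n_min threshold crossings in s_shots
    laser shots, for independent per-shot detection probability p_d1
    (binomial tail sum)."""
    return sum(comb(s_shots, k) * p_d1**k * (1.0 - p_d1)**(s_shots - k)
               for k in range(n_min, s_shots + 1))

# Requiring 7 of 7 shots is much harder than requiring 6 of 7,
# which is the origin of the discontinuous steps discussed later.
print(p_enough_detections(7, 7, 0.9))
print(p_enough_detections(7, 6, 0.9))
```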
Approximating the amplitude distribution of the signal into the pixel comparator as Gaussian, $P_{D1}$ can be approximated as

$$P_{D1} = P_{ready}\,\frac{1}{2}\,\mathrm{erfc}\!\left(\frac{n_{th} - n_{signal}}{\sqrt{2}\,noise_{total}}\right). \tag{23}$$
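Assuming Eq. (23) takes this standard complementary-error-function form, the per-shot detection probability can be evaluated directly (names are illustrative):

```python
import math

def p_d1(n_signal, n_th, noise_total, p_ready=1.0):
    """Gaussian-approximated per-shot detection probability: the chance
    the comparator input exceeds threshold n_th, scaled by the arming
    probability p_ready (all amplitudes in input-referred electrons)."""
    return p_ready * 0.5 * math.erfc(
        (n_th - n_signal) / (math.sqrt(2.0) * noise_total))

# At threshold equal to the mean signal, detection probability is 50%.
print(p_d1(100.0, 100.0, 10.0))
```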
Table 8 then shows the excess noise factor for a $k=0.2$ InGaAs detector array.^{18}

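Table 8's values are not reproduced in this section; the standard McIntyre formula for the excess noise factor of a linear-mode APD, which is presumably what the table tabulates, can be sketched as:

```python
def excess_noise_factor(m_gain, k):
    """McIntyre excess noise factor F(M) = k*M + (1 - k)*(2 - 1/M) for a
    linear-mode APD with ionization coefficient ratio k at mean gain M."""
    return k * m_gain + (1.0 - k) * (2.0 - 1.0 / m_gain)

# Excess noise at the gains analyzed in this section, for k = 0.2
for m in (5, 10, 15, 20):
    print(m, excess_noise_factor(m, 0.2))
```

At unity gain $F=1$ (no multiplication noise), and the factor grows roughly linearly with gain for fixed $k$, which is why a lower gain such as $M=10$ can be the optimum against a strong background.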
Conceptually, $noise_{ROIC+dark+background}$ is three separate noise terms added in quadrature: a purely circuit-related noise term and the multiplied shot noise of the APD’s dark current and CW background photocurrent. According to Table 3, a background photon arrival rate per pixel of up to 0.0931 photon/ns per nm of filter BW is possible in the worst case (near target scenario; 2.5 mrad DAS; 25 mm aperture). Figure 4 estimates that the best filter width we can have in this case is 9.3 nm, so the background flux in the large DAS case will be $\sim 0.87$ photons/ns. For a $k=0.2$ InGaAs/InAlAs APD pixel with 80% QE and 70% fill factor, operated at a mean gain of $M=20$, the worst case background photocurrent is about 1.6 nA. This is about an order of magnitude larger than the APD pixel’s 0°C dark current at this gain, which is about 0.2 nA. Filter width is not a problem in the far target scenario, with 0.1 mrad DAS. Table 3 gives a background photon rate of about 0.0024 photon/ns, corresponding to about 4 pA of photocurrent, which is negligible compared to the pixel dark current. The worst case optical background combined with the APD pixel’s 0°C, $M=20$ dark current together contribute about $71\,e^-$ RMS of multiplied shot noise at the ROIC pixel input, whereas with negligible optical background, the multiplied shot noise of the APD pixel’s dark current alone is about $24\,e^-$ under these conditions. In the low-BW configuration, responding to 4-ns FWHM laser pulses, an input-referred pixel circuit noise of about $30\,e^-$ can reasonably be achieved. In the high-BW configuration, responding to 1-ns FWHM laser pulses, the ROIC’s circuit noise would roughly double. Consequently, in the low-BW configuration, the difference between the worst case solar background and negligible background is $noise_{ROIC+dark+background} \approx 77\,e^-$ RMS versus $38\,e^-$ RMS. The high-BW configuration would not be applied to the large DAS case because of its $16\times$ smaller format and the relaxed range precision requirement of that scenario; in the small DAS case, the optical background is negligible, and $noise_{ROIC+dark+background} \approx 65\,e^-$ RMS for the high-BW configuration. It should also be remarked that if the APD pixel is operated at lower gain, such as $M=10$, the detector shot noise is smaller. We make calculations for APD pixel gains of $M=5$, $M=10$, $M=15$, and $M=20$ to find an optimal operating point.

The arming probability $P_{ready}$ appearing in Eq. (23) depends on when in the range gate the target surface is located ($t_{target}$), the pixel’s false alarm rate (FAR) at that detection threshold setting, and the sample capacity of the pixel ($C$). Since false alarms in an LMAPD receiver circuit are independent stochastic events whose average rate of occurrence is given by the FAR, Poisson statistics apply, and the probability that at least one unused sample storage location is available at the time the return from the target surface is received is

$$P_{ready} = \sum_{k=0}^{C-1} \frac{(FAR \cdot t_{target})^{k}}{k!}\,e^{-FAR \cdot t_{target}}. \tag{24}$$
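Assuming Poisson-distributed false alarms as described above, the arming probability is the probability that fewer than $C$ false alarms have occurred by the target-return time; a sketch (names are illustrative):

```python
import math

def p_ready(far_hz, t_target_s, capacity):
    """Probability that fewer than `capacity` false alarms have fired by
    the target-return time t_target_s, so at least one sample storage
    slot remains free (Poisson CDF at capacity - 1)."""
    mean = far_hz * t_target_s
    return sum(mean**k / math.factorial(k)
               for k in range(capacity)) * math.exp(-mean)

# Late-gate target with a busy pixel: one false alarm expected on
# average, single remaining slot.
print(p_ready(1e6, 1e-6, 1))
```

This is why the FAR must be held low enough that, integrated over the gate, the expected number of false alarms stays well below the pixel's sample capacity.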
Like $P_{D1}$, the FAR depends on the detection threshold ($n_{th}$), but the standard Gaussian approximation for the noise distribution does not accurately model APD noise. Instead, the McIntyre-distributed^{19} noise of the APD must be explicitly convolved with the Gaussian-distributed noise of the ROIC to find the amplitude distribution of noise pulses into the pixel comparator:

$$P_{comp}(n) = \sum_{m} P_{McIntyre}(m)\,P_{Gauss}(n - m). \tag{25}$$
The FAR is then found^{20} from a prefactor that depends on the pixel signal chain’s BW and $noise_{ROIC+dark+background}$, multiplied by the value of the convolution at the comparator threshold:

$$FAR = \nu_{BW}\,P_{comp}(n_{th}), \tag{26}$$

where $\nu_{BW}$ is the BW-dependent prefactor and $P_{comp}(n_{th})$ is the value of the convolved noise amplitude distribution at the threshold.
In addition to influencing the arming probability $P_{ready}$, the FAR also determines the probability of a false positive. In the large DAS case, for which a single range measurement is required to achieve the specified range precision, Poisson statistics give the probability of at least one false positive occurring within the range gate $t_{gate}$ of a given pixel as

$$P_{FP} = 1 - e^{-FAR \cdot t_{gate}}. \tag{27}$$
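Assuming the Poisson form above, the single-gate false-positive probability is one line of code (names are illustrative):

```python
import math

def p_false_positive(far_hz, t_gate_s):
    """Probability of at least one false alarm within a range gate of
    duration t_gate_s, for Poisson-distributed false alarms at rate
    far_hz; approximately FAR * t_gate when that product is small."""
    return 1.0 - math.exp(-far_hz * t_gate_s)

# 1 kHz FAR over a 1-us gate: essentially the rate-time product.
print(p_false_positive(1e3, 1e-6))
```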
In the small DAS case, where multiple pulse returns must be averaged to reduce the standard error of the mean range measurement, the coincidence of returns from the same range can be used to reject false alarms. If a validation rule of the form “$N_{valid}$ returns within $\pm t_{error}$ of a given time-of-arrival” is applied, the probability of at least one false positive consisting of at least $N_{valid}$ time-coincident false alarms occurring anywhere within the range gate, over $S$ total laser shots, follows from the same Poisson statistics.

This is similar to the calculations we used for GMAPDs with a low probability of detection on a single pulse. To summarize, the probability of successfully measuring range to the required precision depends on the number of laser shots transmitted ($S$), the number of range measurements required to achieve that precision ($N_{R\,min}$), and the per-shot detection probability ($P_{D1}$). The number of range measurements required depends on the signal strength ($n_{signal}$), as does the per-shot detection probability. $P_{D1}$ also depends on the probability that the ROIC pixel’s sample capacity has not filled up with false alarms by the time a valid target return arrives, on the detection threshold ($n_{th}$), and on the total noise ($noise_{total}$). The total noise includes a component that depends on the signal strength and a component that is present in the absence of the signal ($noise_{ROIC+dark+background}$). The analysis is completed by calculation of the FAR, which depends on $noise_{ROIC+dark+background}$ and $n_{th}$. For a given value of $n_{signal}$ and $S$, $n_{th}$ can be varied to maximize $P_{D1}$. The maximum value of $P_{D1}$ is then compared to the critical value of $P_{D1}$ required to achieve a particular probability of measuring range to the required precision (e.g., 90%), and $n_{signal}$ is adjusted until the critical value is just barely reached. This determines the required signal strength at the ROIC pixel input. To translate $n_{signal}$ into photons per pixel at the FPA (i.e., after collection by the camera aperture and any losses in the optical train), one divides by the product of the mean APD gain (e.g., $M=20$), the APD’s QE (80%), and the fill factor of the detector pixel (e.g., 70%).

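The final step of this procedure, adjusting $n_{signal}$ until the critical detection probability is just reached, can be sketched under the Gaussian approximation with a threshold already fixed by the FAR requirement (function and parameter names are illustrative; the full analysis would also re-optimize the threshold):

```python
import math

def required_signal(n_th, noise_total, p_crit, p_ready=1.0):
    """Bisect for the smallest mean signal (input-referred electrons)
    whose Gaussian-approximated detection probability reaches p_crit,
    with the comparator threshold n_th held fixed."""
    def pd1(n_signal):
        return p_ready * 0.5 * math.erfc(
            (n_th - n_signal) / (math.sqrt(2.0) * noise_total))
    lo, hi = 0.0, n_th + 20.0 * noise_total  # pd1(hi) ~ p_ready
    for _ in range(80):  # pd1 is monotone in the signal level
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if pd1(mid) >= p_crit else (mid, hi)
    return hi

# Signal needed for 90% detection at a 77 e- noise level, 5-sigma threshold
print(required_signal(5 * 77.0, 77.0, 0.9))
```

Dividing the result by $M \times QE \times$ fill factor converts it to photons per pixel at the FPA, as described above.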
The radiometric model described in an earlier section is then used to backcalculate the transmitted laser pulse energy required to achieve the necessary signal level at the FPA under different scenarios (bare earth, foliage poke through, grayscale, etc.). Although multiple laser shots can be used for foliage poke through, as with a GMAPD, the very short (nanosecond) reset time of LMAPD pixels enables multihit lidar with a single laser shot if the ROIC can store multiple pulse returns.

Figure 10 is a plot of the probability of achieving 25-cm range precision using an $M=20$, $k=0.2$, QE = 80%, fill factor = 70% LMAPD detector array operated at 0°C with the low-BW ROIC configuration. The optical background for the 2.5-mrad DAS (worst case) scenario was used. Curves corresponding to $C=3$ (single hit), $C=2$ (two-hit, single-shot foliage poke through), and $C=1$ (three-hit, single-shot foliage poke through) are plotted. The minimum signal level for which there is a 90% chance of ranging to 25-cm precision against bare earth ($C=3$) is 62 photons when the APD pixel operates at $M=5$, 53 photons for $M=10$, 56 photons for $M=15$, and 61 photons for $M=20$. The optimal gain is lower than the maximum gain in this scenario because of the strong background.

Figure 11 is a plot of the probability of achieving 5-cm range precision in $S=7$ laser shots using an $M=20$, $k=0.2$, QE = 80% LMAPD detector array operated at 0°C with 70% optical coupling efficiency in combination with the low-BW configuration ROIC. Curves corresponding to $C=3$ (single hit; blue), $C=2$ (two-hit, single-shot foliage poke through; green), and $C=1$ (three-hit, single-shot foliage poke through; red) are plotted. The steps in the curves occur at signal levels where the minimum number of range measurements that must be averaged to achieve the specified range precision, $N_{R\,min}$, changes by an integer. For example, the probability of detecting seven out of seven laser shots at a signal level of 39 photons is much lower than the probability of detecting six out of seven laser shots at a signal level of 40 photons, mainly because the number of required detections drops by 1 (as opposed to the marginally stronger signal return). That is why all three curves drop discontinuously between 39 and 40 photons.

Figure 12 is a plot of the probability of achieving 5-cm range precision in a single laser shot using an $M=20$, $k=0.2$, QE = 80% LMAPD detector array with 70% optical coupling efficiency in combination with the high-BW configuration ROIC. Curves corresponding to $C=3$ (single hit), $C=2$ (two-hit, single-shot foliage poke through), and $C=1$ (three-hit, single-shot foliage poke through) are plotted. The $16\times$ difference in coverage between the high- and low-BW ROIC configurations should be considered when comparing this result to the low-BW calculation of Fig. 11.

The number of laser shots and average signal return levels per shot required to have a 90% probability of ranging to the precisions specified for the near and far target scenarios are summarized in Table 9.

**Table 9** Number of laser shots and average signal return level per shot required for a 90% probability of ranging to the specified precision.

| Configuration | Near target, large DAS: bare earth | Near target, large DAS: foliage penetration | Far target, small DAS: bare earth | Far target, small DAS: foliage penetration |
|---|---|---|---|---|
| $128\times128$; low BW | 1 shot, 53 photons/shot | 1 shot, 125 photons/shot | 4 shots, 61 photons/shot | 3 shots, 144 photons/shot |
| $32\times32$; high BW | | | 1 shot, 46 photons/shot | 1 shot, 110 photons/shot |

The values in Table 9 are the required photons at the focal plane per laser shot and the number of laser shots, per pixel, per stepping of the FPA’s FoV across the scene. The figures given for foliage poke through include the factor of $1.6\times$ reduction in cross-section for returns from the furthest obscured target surface and account for the higher detection threshold setting needed for multihit-per-shot lidar. In both the low-BW, large DAS case and the high-BW, small DAS case, a single laser shot is needed to achieve the specified range precision against bare earth and with foliage poke through. In the low-BW, small DAS case, the least total energy is required when four laser shots are used against bare earth and three for foliage poke through. When total energy is calculated, the number of times the FPA’s FoV must be stepped to cover the scene will also be taken into account. Both high-BW and low-BW configurations are listed in Table 9, but, in the summary table of required energy for mapping, we only present data for the configuration that requires the least total energy.

ROICs of this architecture are also capable of grayscale range imaging if they are set up to sample and store the pulse return amplitude at the same time that they sample the analog time stamp. In passive imaging systems, the least-significant bit (LSB) of a sensor’s dynamic range is normally mapped to its noise floor, such that 6 bits of grayscale imaging would span the range from $1\times$ to $64\times$ the noise-equivalent input level. Passive imaging also assumes natural scene illumination. However, because the flash lidar architecture considered here uses an event-driven amplitude sampling scheme, pulse return amplitudes weaker than the comparator threshold will not be sampled. Furthermore, the ROIC’s amplifier chain is usually AC coupled, so natural continuous-wave (CW) scene illumination will not trigger sampling except through its contribution to the FAR. Grayscale imaging with such a ROIC is active imaging of the reflected laser pulse intensity. As such, the granularity of the grayscale image is still the noise-equivalent input level of the sensor, but the dynamic range spanned is offset from zero by the detection threshold. By the same token, the dynamic range available for grayscale imaging is smaller than the dynamic range of the signal chain into the threshold comparator.

The grayscale resolution of a conventional passive imager is often expressed as a dynamic range in bits, which is calculated from the camera’s analog dynamic range by equating the LSB to the camera’s noise floor. However, optical signal shot noise increases as the square root of signal level, so an LSB, which represents the noise at zero signal (i.e., in the dark), does not quantify the accuracy with which nonzero signal amplitude can be measured, nor is it possible to define an LSB of a fixed size that exactly expresses signal amplitude measurement accuracy for all signal levels within an imager’s dynamic range. By contrast, this paper quantifies grayscale resolution based on there being a 90% probability that any given signal return amplitude measurement lies within a set interval centered on the average return level corresponding to the true target reflectance. The signal interval for which the calculation is made is that spanned by a reflectance bin of specified width.

Equation (11) for the mean signal return level in photons per pixel can be rewritten as $N = C(R)\times\rho$, where $C(R)$ is a range-dependent function containing the radiometric aspects of the problem and $\rho$ is the target reflectance, which runs between $\rho_{low} = 5\%$ and $\rho_{high} = 15\%$. If the range spanned by the target reflectance is divided into $2^{N_{bits}}$ bins, the reflectance bin width is $\Delta\rho = (\rho_{high} - \rho_{low})/2^{N_{bits}}$. The mean signal range spanned by a reflectance bin is therefore

$$\Delta N = C(R)\,\Delta\rho.$$
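The bin arithmetic can be sketched directly from these definitions ($C(R)$ is treated as a given number; names are illustrative):

```python
def reflectance_bin_width(rho_low, rho_high, n_bits):
    """Width of one reflectance bin when the reflectance span
    [rho_low, rho_high] is divided into 2**n_bits gray levels."""
    return (rho_high - rho_low) / 2**n_bits

def signal_bin_width(c_of_r, rho_low, rho_high, n_bits):
    """Mean-signal span of one reflectance bin, using N = C(R) * rho."""
    return c_of_r * reflectance_bin_width(rho_low, rho_high, n_bits)

# 3-bit grayscale over the 5%-15% reflectance span assumed in this paper
print(reflectance_bin_width(0.05, 0.15, 3))
```

Doubling the bit depth halves the bin width per added bit, so the signal measurement accuracy requirement tightens exponentially with $N_{bits}$.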