Special Section on Active Electro-Optical Sensing: Phenomenology, Technology, and Applications

Comparison of flash lidar detector options

Author Affiliations
Paul F. McManamon

Exciting Technology LLC, Dayton, Ohio, United States

Paul Banks

TetraVue, San Marcos, California, United States

Jeffrey Beck

DRS Network & Imaging Systems, LLC, Dallas, Texas, United States

Dale G. Fried

3DEO, Inc., Dover, Massachusetts, United States

Andrew S. Huntington

Voxtel Inc., Beaverton, Oregon, United States

Edward A. Watson

Vista Applied Optics, LLC, Dayton, Ohio, United States

Opt. Eng. 56(3), 031223 (Mar 07, 2017). doi:10.1117/1.OE.56.3.031223
History: Received August 11, 2016; Accepted February 15, 2017

Open Access

Abstract.  We compare three lidar receiver technologies using, as the figure of merit, the total laser energy required to perform a set of imaging tasks. The tasks are combinations of two collection types (3-D mapping from near and far), two scene types (foliated and unobscured), and three types of data products (geometry only, geometry plus 3-bit intensity, and geometry plus 6-bit intensity). The receiver technologies are based on Geiger mode avalanche photodiodes (GMAPDs), linear mode avalanche photodiodes (LMAPDs), and optical time-of-flight (OToF) lidar, which combines rapid polarization rotation of the image with dual low-bandwidth cameras to generate a 3-D image. We choose scenarios to highlight the strengths and weaknesses of the various lidars and consider both HgCdTe and InGaAs variants of LMAPD cameras. The InGaAs GMAPD and the HgCdTe LMAPD cameras required the least energy to 3-D map both scenarios for bare earth, with the GMAPD taking slightly less energy. We comment on the strengths and weaknesses of each receiver technology. Recording 6 bits of intensity gray levels requires substantial energy for all camera modalities.


A flash imaging lidar is a laser-based 3-D imaging system in which a large area is illuminated by each laser pulse and a focal plane array (FPA) is used to simultaneously detect light from thousands of adjacent directions. Mapping and 2-D/3-D imaging are examples of applications for such systems. To make these systems as robust as possible, and to reduce the amount of laser power required, receivers in flash lidar systems typically employ some form of gain. One approach is to provide gain in the incident optical signal (photon gain, one example being fiber amplifiers). Another approach, which is a major subject for this paper, is charge gain inside the detector after photon detection has occurred.

Charge gain processes inside detectors exploit the ability of an applied electric field to accelerate charged particles, amplifying the number of charge carriers through energetic collisions. One example is the photoemissive detector, in which a primary electron generated by absorption of the incident photon is liberated from the detector photocathode, accelerated through an evacuated space by an applied electric field, and then impacted on a target material, where the primary carrier’s kinetic energy generates additional secondary charge carriers. A second type of detector charge gain process is impact ionization inside an avalanche photodiode (APD), in which the primary photoelectrons do not leave the detector material but undergo ionizing collisions within the semiconductor crystal in the high-electric-field region of a reverse-biased diode junction.

We analyze two classes of APDs as lidar detectors: linear mode APDs (LMAPDs) and Geiger mode APDs (GMAPDs). LMAPDs are operated below their breakdown voltage, generating current pulses that are on average proportional to the strength of the optical signal pulse. LMAPDs normally operate continuously and are used with high-gain current or charge amplifiers that develop an output voltage waveform proportional to the LMAPD’s photocurrent waveform. By contrast, GMAPDs are armed by biasing them above their breakdown voltage, rendering them sensitive to single primary charge carriers. Absorption of one or several photons triggers avalanche breakdown of the GMAPD junction, generating a strong current pulse that is easily sensed, the amplitude of which is limited by a quenching circuit. Immediately following breakdown, the GMAPD’s quenching circuit momentarily reduces the applied reverse bias below the GMAPD’s breakdown voltage, terminating the avalanche process and allowing trapped carriers to clear the junction before rearming the GMAPD. If the GMAPD is rearmed too soon, afterpulsing will occur, resulting in false signals. Generally speaking, GMAPDs are sensitive to weaker signals than most LMAPDs, but LMAPDs can directly measure signal return amplitude and can resolve optical pulses separated by as little as a nanosecond, depending on laser pulse width and the APD’s linear gain. Certain high-gain LMAPDs, chiefly electron-avalanche HgCdTe APDs, provide enough linear gain to detect single photons without entering avalanche breakdown.

The GMAPDs considered here, and one of the two types of LMAPD, are manufactured with InGaAs light-absorption layers responsive in the short-wavelength infrared (SWIR) and are typically thermoelectrically (TE)-cooled. Single-photon detection efficiency (SPDE) of 25%, dead time of 1  μs following breakdown, and dark count rate (DCR) of about 6 kHz at 225 K are typical of the 25-μm-diameter GMAPD pixels for which calculations are made; although not sensitive at 1550 nm, 128×32-format arrays of 18-μm GMAPD pixels have been reported. These arrays operate with 32.5% SPDE and 5 kHz DCR at 253 K due to the use of a wider-bandgap InGaAsP absorption layer optimized for 1064-nm signal detection.1 Interframe timing jitter of the 1064-nm-sensitive 128×32-format GMAPD array was reported to be about 500 ps, which may have been dominated by clock signal distribution issues in its readout integrated circuit (ROIC) rather than the fundamental timing performance of the GMAPD pixels themselves; timing jitter for 32×32-format arrays of 1550-nm-sensitive pixels was reported to be in the 150- to 200-ps range.1 The 30-μm InGaAs LMAPD pixels analyzed typically operate at linear gain M=20 with 0.2-nA dark current at 273 K, quantum efficiency (QE) of 80%, and an excess noise factor (F) parameterized by ionization coefficient ratio k=0.2, resulting in F=5.56 at M=20. Multistage InGaAs LMAPDs that operate at gains approaching M=1000 with excess noise parameterized by k=0.04 have been reported, but they are not a mature technology.2 Low excess noise LMAPDs made from AlInAsSb3 and InAs4 have also been reported, but, among the high-gain LMAPDs, electron-avalanche HgCdTe LMAPDs are the most mature. HgCdTe LMAPDs can be manufactured to respond efficiently from the ultraviolet (UV) to the mid-wavelength infrared (MWIR) and can have high linear gains up to 1000 or more while maintaining an excess noise factor F near 1.
The 64-μm HgCdTe LMAPD pixels for which calculations are made can operate at linear gains over M=1000 but are analyzed at M=200, for which the dark current at 100 K is 0.64 pA, QE=65%, and F=1.3. The two disadvantages of HgCdTe LMAPDs are the need to cool HgCdTe to near 100 K and the cost.
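The excess noise figures quoted above are consistent with McIntyre's local-field model, F(M) = kM + (1 − k)(2 − 1/M). A quick sketch, using the k and M values given in the text:

```python
def mcintyre_excess_noise(M, k):
    """McIntyre excess noise factor for an APD with mean gain M and
    effective ionization coefficient ratio k."""
    return k * M + (1.0 - k) * (2.0 - 1.0 / M)

# InGaAs LMAPD pixel values from the text: k = 0.2 at M = 20 gives F near 5.56
F_ingaas = mcintyre_excess_noise(20, 0.2)
```

Electron-avalanche HgCdTe APDs do not follow this local-field model, which is why they can hold F near 1 even at gains of several hundred.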

We also consider low-bandwidth (BW) detectors, which are often used in passive sensors. There are, however, 2-D gated lidar detector arrays, such as the Intevac camera. There are also 3-D imagers that use a Pockels cell to obtain the timing needed to measure range with time-insensitive 2-D imaging arrays, sometimes called optical time-of-flight (OToF) lidars. Last, there are spatial heterodyne, or more broadly digital holography, based uses for these cameras in active imaging. In this paper, the only 2-D cameras we consider are those used in conjunction with the OToF 3-D imagers.

This paper quantitatively compares these detector modalities, using the metric of total energy required to 3-D map two scenarios, with various assumptions for each scenario. To our knowledge, this is the first quantitative comparison between these detector modalities. The most comprehensive comparison prior to this work was part of the 2014 National Academy of Science Report, Laser Radar: Progress and Opportunities in Active Electro-Optical Sensing, chaired by McManamon et al.5 Prior to that, there were two comparison papers.6,7

To compare lidar receiver technologies, we will define a set of imaging tasks accomplished using direct detection systems. The primary figure of merit will be the amount of laser illumination energy needed to accomplish the imaging task. Each imaging task is defined using one of two possible collection geometries (near or far), one of two possible scene types (partially obscured or not obscured), and one of three possible data product types (geometry only, geometry plus 3-bit target reflectance, and geometry plus 6-bit target reflectance). The detectors differ in terms of the total time and number of laser shots required to perform the imaging tasks, with some requiring accumulation of repeat observations over multiple laser shots. These metrics are relevant to imaging dynamic scenes that change spatial configuration over time, but such a comparison is beyond the scope of the present analysis.

Collection Geometry

To compare camera types, we define two direct detection scenarios. The near scenario has a large detector angular subtense (DAS); the far scenario does not. The large DAS case suffers more from background photons due to the sun on a clear, blue-sky day. We can define how much energy is required to 3-D map with a bare earth return and no grayscale, how much it will take to 3-D map with returns from three ranges in a given pixel, and how much energy it will take to 3-D map with grayscale (3 or 6 bits). Table 1 specifies the two collection scenarios used in this paper. We envision an aircraft flying at height R above the ground, looking straight down. The receiver aperture can be made smaller if warranted by design trade considerations, but it must not exceed the maximum aperture diameter. The stated range precision must be achieved with a probability of at least 90%.

Table 1: Collection geometries.
Scene Types

Operational lidars image objects that are unobscured as well as objects that are partially obscured by foliage or other surfaces between the sensor and the object being imaged. Because objects under forest canopy are imaged by only those rays that have a clear line-of-sight (LoS) from the sensor, we use the term “foliage poke through” instead of “foliage penetration.” Light incident upon leaves and branches is absorbed or scattered but does not penetrate. When the holes through the canopy are small compared to the projection of a sensor pixel on the canopy, received light for a given pixel can come from multiple ranges. We adopt a simple model that ignores diffraction effects, the relative motion of the aircraft between pulse transmission and detection, and partial blockage of nonparallel light. We note that if the detector pixel FoV is very small, then the characteristic sizes of the holes through real forest canopy might be larger than the projected size of the pixel at the ground. In that case, each pixel sees a single unobscured layer in the canopy or the ground instead of the multiple layers described here. This condition has implications for OToF and GMAPD lidars. Usually, reflectivity from the foliage canopy will be higher than from the ground or manmade targets. For this paper, we assume that the top two surfaces in a pixel have reflectivity ρc=3ρg, where the ground reflectivity is assumed to be ρg=0.10, but we assume that the cross-section from each range in the pixel is the same. That means each of the two closer reflections covers less pixel area. In a mixed pixel then, each range has a cross-section, σ, of

σ = (3/5)ρg(R·DAS)²,  (1)

where ρg=0.1 is the reflectivity of the ground, R is the range to the target, and DAS is in radians. The illuminated area for one pixel is the DAS squared times the range squared. Without any foliage in the foreground, that area times the reflectivity would be the cross-section. In our case, however, one-fifth of the pixel is blocked by each of the two near-range reflections, reducing the cross-section for the foliage poke through case to three-fifths of what it would have been. With the assumptions above, the cross-section for each reflection from the farthest range, when we look at foliage poke through, is 60% of what it would be for a clear-LoS pixel. The required energy for foliage poke through will therefore be 1/0.6 ≈ 1.7 times as large as for the bare earth case.
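The mixed-pixel cross-section model and its energy penalty can be sketched numerically. The range and DAS values below are illustrative placeholders, not values from Table 1:

```python
def cross_section(R, DAS, rho_g=0.10, clear_fraction=1.0):
    """Lidar cross-section [m^2] of the farthest surface in a pixel.
    clear_fraction = 1.0 for bare earth; 0.6 when two foliage layers
    each block one-fifth of the pixel area, as assumed in the text."""
    return clear_fraction * rho_g * (R * DAS) ** 2

R, DAS = 1000.0, 2.5e-3                        # illustrative values
sigma_clear = cross_section(R, DAS)            # bare-earth pixel
sigma_fpt = cross_section(R, DAS, clear_fraction=0.6)

# Required energy scales inversely with cross-section: 1/0.6 = 5/3
energy_penalty = sigma_clear / sigma_fpt
```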

Data Product Types

In direct detection systems, two types of information are typically recovered. One is the range from the sensor to the target on a pixel by pixel basis, which is often called a 3-D point cloud image. Here, the range information is gathered (often through some form of timing circuitry) as a function of position on the receiver focal plane. Hence, the contrast information that is provided from pixel to pixel is a variation in range. The other type of information that can be gathered is reflectance, inferred from the received irradiance, often called grayscale. The contrast from pixel to pixel in this case is derived by quantifying the energy deposited on each pixel, which is related to the reflectivity of the surface illuminated by the laser. We are interested in determining the number of photodetections required for each of three types of data products: geometry only (i.e., just a point cloud), geometry plus reflectivity measured with a resolution of Nbits=3  bits, and geometry plus reflectivity measured with a resolution of Nbits=6  bits. Once we have the number of photodetections for each sensing modality, we can use that information and a standard link budget approach to calculate the total energy required for each modality.

Grayscale calculations

Our approach for active grayscale measurement using laser illuminator photons is as follows. We divide the distribution into a defined number of reflectivity levels (gray levels). We assume that object reflectivities range between a minimum of ρmin=0.05 and a maximum of ρmax=0.15, as illustrated in Fig. 1. Then, the lidar system is required to discern a relative reflectivity bin size of (ρmax − ρmin)/(2^Nbits·ρmax) = 0.0104 for the 6-bit case or 0.0833 for the 3-bit case (i.e., the reflectivity intervals are eight times wider). We must be able to distinguish between one gray level and another, even in the presence of noise in the lidar receiver. Our lidar measurements are done with enough SNR so that there is a 90% probability of assigning the target reflectivity to the correct bin, Pc=0.9. All measurement modalities are subject to shot noise arising from the fact that the quantization of the received light obeys Poisson statistics. Other sources of instrument noise and distortion will add to this minimum noise level. An example of the reflectance bins and the effects of shot noise is indicated in Fig. 2 for the simple case of Nbits=3 and Pc=80%. The eight grayscale bins are indicated by the vertical black dashed lines. The colors represent eight different mean numbers of events, from 242.6 (dark blue) to 727.9 (dark red). These different mean numbers of returns could represent the number of received photons from targets of different reflectivity. The solid lines indicate the cumulative distribution function of the results of 2000 random trials. The dashed colored lines indicate the Poisson distribution function for each mean number of events. The shot noise is widest at the highest reflectivity, so it is this limit that sets the minimum required number of received photons.
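The shot-noise limit on gray-level discrimination can be explored numerically. The sketch below places a Poisson-distributed photon count at the center of the highest-reflectivity bin and computes the probability that it lands inside its own bin; n_max (the mean photon count at ρmax) is a free design parameter, not a value taken from the text:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= floor(k)) for X ~ Poisson(mu), by direct summation."""
    term = total = math.exp(-mu)
    for i in range(1, int(k) + 1):
        term *= mu / i
        total += term
    return total

def prob_correct_bin(n_max, n_bits, rho_min=0.05, rho_max=0.15):
    """Probability that a return from the center of the top reflectivity
    bin is assigned to that bin, given pure shot noise."""
    bin_width = n_max * (rho_max - rho_min) / (2 ** n_bits * rho_max)
    mu = n_max - bin_width / 2.0              # mean count at bin center
    return poisson_cdf(n_max, mu) - poisson_cdf(n_max - bin_width, mu)

# More photons per bin (fewer bits) make correct assignment more likely.
assert prob_correct_bin(730, 3) > prob_correct_bin(730, 6)
```

Raising n_max until prob_correct_bin reaches the desired Pc gives the minimum required photon count for a chosen bit depth.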

Fig. 1:

Range of surface reflectivity.

Fig. 2:

Impact of shot noise on measurement of reflectivity.

Although, for this paper, we have only picked conditions that might exemplify an advantage of one detection mode or another, it is interesting to see the effect that different levels of grayscale have on imagery. This can be seen in Fig. 3 for grayscale ranging from 1 to 6 bits.

Fig. 3:

The effect of gray scale on a particular image.

Foliage poke through

For LMAPDs, mixed pixels in range will not create measurement issues so long as the detector has enough dynamic range and can record reflections from multiple ranges. For GMAPDs, there is a need to keep the probability of avalanche low on the initial returns, or later range returns will be blocked by the dead time of the GMAPD after an avalanche. For the OToF approach, a mixed pixel provides an average range value, not multiple range values. The OToF approach is, however, likely to have much larger format arrays, so it may have a number of smaller DAS detectors making up one required DAS for our scenarios. Smaller DAS pixels making up one of our larger pixels may have an unobstructed view through the canopy. This same effect could be prevalent when a GMAPD camera uses smaller DAS pixels to mitigate background effects, although the calculations done later in the paper for GMAPD assume mixed pixels rather than single range small pixels.

Common Assumptions

Many system assumptions are common to all of the scenarios analyzed, as shown in Table 2. We assume a visibility of 23 km, which removes most of the atmospheric attenuation because, at 1.55  μm, this results in a β of 0.00011. We assume an average 10% Lambertian reflectivity, a bright sunny day, and a spectral band-pass filter as narrow as 1 nm. The operating wavelength will be 1550 nm.

Table 2: Common assumptions.
Calculation of background from solar flux

The model described by McManamon et al.8 was used as a basis for our treatment of the solar background. In this paper, we assume a variable-width filter that is adjusted based on the sensor field-of-view (FoV). The filter width can be as low as 1 nm, but, as the acceptance angle becomes larger, the spectral width of the filter must increase to accommodate it.

Spectral filter technology

Commercially available narrow-band filters can be placed in the receiver optical path to block unwanted background light. We assume that the narrowest achievable BW for a reasonable cost is σmin=1  nm for collimated light at normal incidence (for example, Alluxa offers a filter width of 0.7 nm for collimated light at 1064 nm). As the sensor FoV is increased and rays at larger angles from the optical axis must be accommodated, the range of effective wave vectors widens; the wider filter BW passes more scene luminance, introducing more noise. The shift of the resonance wavelength with incidence angle θ can be modeled as a Fabry–Perot resonator, as given in Eq. (2) and Fig. 4:

λ(θ) = λ0[1 − (sin θ/neff)²]^(1/2).  (2)


Fig. 4:

Wavelength shift of resonant filter with incidence angle.

Figure 4 indicates the required filter BW for a typical material effective index neff=2. The widest sensor FoV occurs when a single array images the entire area; the angular distance to the corner of the array is

θc = (DAS/2)(Nx² + Ny²)^(1/2),  (3)

where DAS is the angular subtense of a pixel and Nx and Ny are the number of pixels in each direction in the FPA. For Nx=Ny=128 and DAS=2.5  mrad (the large DAS case), θc=0.226  rad. Clearly, a wider filter BW, up to 9.3 nm, is needed, which will substantially increase background light. Recent developments suggest new filters that are much more tolerant to angle.
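The corner-angle and Fabry–Perot shift relations above can be combined numerically. This is a sketch of the simple Eq. (2) model; with the large DAS numbers it lands near the filter width quoted in the text:

```python
import math

def corner_angle(das, nx, ny):
    """Angular distance [rad] from the optical axis to the corner of an
    nx-by-ny FPA with pixel angular subtense das."""
    return 0.5 * das * math.sqrt(nx ** 2 + ny ** 2)

def filter_shift(theta, lam0=1550.0, n_eff=2.0):
    """Resonance wavelength shift [nm] of a Fabry-Perot filter at
    incidence angle theta, per Eq. (2)."""
    return lam0 * (1.0 - math.sqrt(1.0 - (math.sin(theta) / n_eff) ** 2))

theta_c = corner_angle(2.5e-3, 128, 128)   # ~0.226 rad, the large DAS case
bw = filter_shift(theta_c)                 # roughly 9-10 nm with this model
```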

Our comparison of detector technologies requires that the lidar can be operated in full sunlight. Table 3 gives the number of photons from the sun captured in each DAS per nanosecond, using a 1.0-nm wide filter. Background photon rates for wider spectral filters are obtained by linear scaling from this table. Table 4 then provides the number of background photons from the sun for specific cases of interest in this analysis.

Table 3: Number of photons captured per nanosecond in each detector for a 1.0-nm wide filter, at 1.55-μm wavelength.
Table 4: Number of background photons from the sun for relevant cases.
Link Budget Calculations to Determine the Required Laser Energy, Once the Required Number of Photons per Pixel is Known

For each modality, we use the same link budget equations to determine how much energy per pulse we will need for the scenarios, based on how many photons reach each detector. A 2012 review9 article shows

PR = PT (σ/Aillum)(Arec/(πR²)) ηatm ηsys,  (4)

where PR is the received power, PT is the transmitted power, σ is the lidar cross-section, Aillum is the area illuminated, Arec is the area of the receive aperture, ηatm is the transmission through the atmosphere between sensor and target, and ηsys is the optical system transmission efficiency. If we multiply both sides of Eq. (4) by a laser pulse width in units of time, this becomes energy received and transmitted. We can then solve for transmitted energy:

ET = ER Aillum πR²/(σ Arec ηatm ηsys),  (5)
where

ηatm = e^(−2βR).  (6)

For 23-km visibility, β⁻¹ is 23  km at 1550 nm.10 The required received energy, ER, can be specified as the energy in N photons. We assume a wavelength of 1550 nm, for which

ER = N × 1.281 × 10⁻¹⁹  J.  (7)

We assume a system efficiency through the optical train of ηsys=60%. We assume the area illuminated is 1.1 times as large as the angular area covered by our detectors to allow for some illumination inefficiency. This area grows with range. The cross-section is the reflectivity, ρg, times the area seen by a given detector, which also grows with range:

σ = ρg(R·DAS)².  (8)


This is similar to Eq. (1), but with no foliage poke through, so the whole pixel is viewed. The ratio of area illuminated to cross-section is

Aillum/σ = 1.1 NDet (R·DAS)²/[ρg(R·DAS)²] = 1.1 NDet/ρg,  (9)

where NDet is the number of detectors. Based on Eqs. (5)–(9), we can solve for the required transmitter energy per pulse:

ET = ER × 1.1 NDet πR²/(ρg Arec ηatm ηsys).  (10)

For each modality, we can then calculate how much energy is required to map the area in each of the scenarios based on that detector’s required value of N. The energy calculated by Eq. (10) is only for the number of pixels covered by a single detector array. For example, in the case of the GMAPD, we use a 32×128 detector array. In our near-range scenario, the 128×128  pixel scene would be covered by stepping the GMAPD array’s FoV four times and the energy computed by Eq. (10) would be multiplied by a factor of 4; in our far-range scenario, we have 1024×1024  pixels, so the energy calculated by Eq. (10) would need to be multiplied by a factor of 256. For the near-range GMAPD case, if we use a smaller DAS to alleviate solar background, the number of required steps would increase commensurately. Multiple flash images of the same area of the scene may be required to collect geometry and/or grayscale data of the precision required by each scenario, depending on detector type. For example, GMAPD cameras are often designed to have low probability of detection per pulse, with the image built up by accumulation of multiple pulses against the target. The number of laser shots required per array step across the scene also multiplies the result of Eq. (10) when computing the total energy required by a given detector for a given range scenario and data product.
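The chain from Eqs. (5) through (10) can be collected into a single routine. This is a sketch under the common assumptions above (23-km visibility, 60% optical efficiency, 1.1 illumination margin); the example arguments are illustrative placeholders rather than values taken from the scenario tables:

```python
import math

H_TIMES_C = 6.626e-34 * 2.998e8      # Planck constant x speed of light [J*m]

def pulse_energy(n_photons, R, n_det, aperture_diam,
                 wavelength=1.55e-6, rho_g=0.10, beta=1.0 / 23.0e3,
                 eta_sys=0.60, illum_margin=1.1):
    """Transmitted energy per pulse [J] for one array position, Eq. (10).
    n_photons is the required photon count per detector; beta is the
    atmospheric extinction coefficient [1/m]."""
    e_r = n_photons * H_TIMES_C / wavelength        # Eq. (7)
    a_rec = math.pi * (aperture_diam / 2.0) ** 2
    eta_atm = math.exp(-2.0 * beta * R)             # round trip, Eq. (6)
    return (e_r * illum_margin * n_det * math.pi * R ** 2
            / (rho_g * a_rec * eta_atm * eta_sys))

# Example: 10 photons/detector, 1-km range, 32x128 array, 25-mm aperture
e_t = pulse_energy(10, 1000.0, 32 * 128, 0.025)
```

Multiplying the result by the number of array steps and laser shots, as described above, gives the total energy for a scenario.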

Lidar systems using arrays of GMAPDs were first proposed by Marino11,12 and demonstrated by MIT Lincoln Laboratory.13 Development work has continued to advance the technology for Geiger-mode ladar components, systems, data processing, and data exploitation in many research groups. Figure 5 shows a structure for a GMAPD detector.14 Our analysis relies on previous work by Fouche,15 who analyzed signal requirements in the presence of background noise. Recent modeling by Kim et al.16 provides a detailed description of example system behavior. We restrict our analysis to commercially available Geiger mode cameras. We consider commercial framing cameras with frame rates up to 186 kHz for the 32×32 format or up to 110 kHz for the 32×128 format. An asynchronous-readout 32×32 camera is also now commercially available; it is capable of even higher readout rates, limited only by the dead time between detections. In GMAPDs, the detector is biased above the breakdown voltage, so a photoelectron generated in the absorber region will result in a large avalanche, often producing a voltage swing on the order of 1 V. Whether one photon or many photons hit the detector, the same large avalanche occurs. There is a dead time of 400 ns to 1  μs after each triggered event, which can block detection of photons arriving later unless the probability of avalanche is kept low. For the case with foliage poke through, we set the average number of photons per pixel to 0.8 photons returned for the expected range and reflectivity of the target, or a 20% probability of detection per pulse given a PDE of 25%. With GMAPDs, there is crosstalk between detectors, caused when a photon emitted during breakdown of one pixel triggers breakdown in another pixel. The noise due to crosstalk tends to be concentrated in the range region where most of the detections occur. Even there, crosstalk noise is much smaller than noise due to background light for the cases analyzed in this paper.
GMAPD flash imaging lidars tend to be designed to run at high frame rates, and many samples are used to capture the necessary number of photodetection events to achieve the signal level requirements. Laser pulse energy is lower, the number of photoelectrons generated per pulse is low, and the probability of a pixel firing is low. This has the technical benefit of keeping peak laser intensity low since each pulse is weak while maintaining high average power. This means that when we calculate required energy to 3-D image in a region, the main thing we vary will be the number of pulses, not the energy transmitted per pulse.

Fig. 5:

Schematic illustration of a diffused-junction planar-geometry avalanche diode structure. The electric field profiles at right show that the peak field intensity is lower in the peripheral region of the diffused p-n junction than it is in the center of the device.

There are multiple detection events that can trigger a GMAPD receiver: the detection of a desired target photon, the detection of an undesired foreground clutter photon (such as backscatter from foliage), the detection of undesired background radiation (such as the sun), or the undesired detection of a dark electron. Cross talk can also trigger a GMAPD. If we send out many laser pulses, we will get coincident returns (returns in the same range bin) for reflection from a target or from fixed foreground objects, but returns from dark current, background, fog, snow, or rain will provide distributed returns with very low probability of range coincidence.

Effect of a Bright Sun Background on Geiger-Mode Avalanche Photodiodes

One of the first things to address for GMAPDs is whether background from the sun will affect either of the two scenarios. We conclude that it will not significantly affect the small DAS case but will significantly affect the large DAS case. Solar background is detrimental in two ways, blocking and noise, with blocking the more important for this analysis. If the GMAPD undergoes an avalanche before the signal photons arrive, the detector is “blocked” and is unable to detect the signal until after the dead time. On the other hand, noise can cause the system to erroneously declare a surface to be present. Given a background photon rate per pixel of γ taken from Table 4, a PDE of 25%, and a gate width W during which the APD is sensitive, the mean number of photoelectrons generated in the APD by the sun before the signal occurs is

Nb = γ·PDE·(2W/c),  (11)
where c is the speed of light and the factor of two accounts for the round trip. The Poisson distribution says that the probability of the APD not undergoing breakdown before the signal arrives (i.e., the probability of zero photoelectrons) is

1 − PB = e^(−Nb),  (12)

where PB is the probability of blocking. To keep the blocking loss below PB=0.2, the number of background photoelectrons must be kept below

Nb < −ln(1 − PB) ≈ 0.22.  (13)

For a gate width W=100  m, the background must be below γ=1.33  MHz. Clearly, DCRs, which are typically 1 to 10 kHz, can be neglected.
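The blocking calculation above is easy to check numerically. A sketch, using the PDE = 25% and W = 100 m values from the text:

```python
import math

C = 2.998e8  # speed of light [m/s]

def mean_background_counts(gamma, pde, gate_m):
    """Mean background photoelectrons in a range gate of gate_m meters,
    per Eq. (11): N_b = gamma * PDE * 2W/c."""
    return gamma * pde * 2.0 * gate_m / C

def max_background_rate(p_block, pde, gate_m):
    """Largest background photon rate [Hz] that keeps the blocking
    probability below p_block."""
    return -math.log(1.0 - p_block) * C / (2.0 * gate_m * pde)

gamma_max = max_background_rate(0.2, 0.25, 100.0)   # about 1.3 MHz
```

This reproduces the roughly 1.33-MHz limit quoted above and makes clear why kHz-class DCRs are negligible by comparison.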

The background photon rate can be limited by introducing attenuation on the receiver, reducing the aperture, or increasing the focal length and therefore reducing the pixel DAS. The GMAPD community prefers to increase the focal length while maintaining the aperture diameter to reduce blocking loss. The disadvantage of decreasing DAS instead of aperture diameter is that we then must scan more locations to develop the FoV required by the scenario. This will probably increase collection time.

In line 2 of Table 5, we take values from line 2 of Table 3. We see in Table 3, line 2, that if we have a 25-mm diameter aperture and a DAS of 2.5 mrad, then we have 0.0931  photons/ns. In Table 5, we see that for that case the sun completely blocks our detector, showing 0% probability of not having an avalanche. This is the baseline case for our near-range, large DAS scenario. From line 4 of Table 3, we have a gate width W=50  m and 0.00373  photons/ns. In that case, we will have a 70% probability of not being blocked if we either reduce our aperture diameter from 25 to 5 mm while keeping the DAS at 2.5 mrad or reduce the DAS to 0.5 mrad while maintaining a 25-mm diameter receive aperture. In either case, the result is the same in terms of sun blockage. For the gate width W=100  m, we can either reduce the aperture to 2.5 mm in diameter or the DAS to 0.25 mrad to avoid sun blocking. The smaller DAS case can use a narrower filter width, so that is one reason it will result in lower energy than decreasing the receive aperture size, and, of course, it provides higher resolution. Innovative processing will also provide a significant advantage for reducing the DAS compared to reducing the receive aperture diameter. In the next section, we will discuss coincidence processing, which is used by GMAPDs to achieve the required 90% probability of detection. If we reduce the DAS by a factor of 5 in each dimension, then each of our 0.5 m × 0.5 m pixels is made up of twenty-five 0.1 m × 0.1 m pixels. For surfaces that are smoothly varying, we can use these 25 samples to do coincidence processing, resulting in the need for up to 25× fewer pulses. This will reduce the required energy for mapping the area.

Table 5: Probability of avalanche from background sun photons.
Coincidence Processing for Detection

When using GMAPDs in a foliage poke through scenario, we keep the probability of detection from a single pulse low (e.g., Pdet=0.2) because of the dead time after an avalanche. This preserves our ability to see objects farther in range than the initial return. Sometimes an even lower probability of detection, such as 0.1, is used. If we do not have mixed pixels with multiple range returns, we can allow the probability of detection to increase. For GMAPDs, we want to determine the number of pulses, Np, that must be transmitted to cause a GMAPD pixel to fire on M pulses scattered from the surface of interest (we anticipate that M will be a minimum of two or three detections from the surface of interest). This coincidence detection distinguishes a real return from a physical object from a random false return. We rely on the fact that the noise is randomly distributed in time, whereas returns from real objects occur only at the range of an object. We ignore nonuniform detector illumination and sensitivity.

The probability Po of detecting a photon backscattered from the object of interest can be expressed as a conditional probability:17

Po = P(o|n̄)P(n̄),  (14)

where P(o|n̄) is the probability of detecting a photon backscattered by the object given that a photon scattered by some intervening obscurant (or a dark count) has not been detected and P(n̄) is the probability of not detecting a photon from an intervening object. Since detecting a photon from an intervening obscurant is a binary event (it either is detected or it is not), the probability of not detecting a photon from an intervening obscurant is just one minus the probability of detecting that photon. Hence, Eq. (14) can be written as

Po = P(o|n̄)(1 − Pn),  (15)
where Pn is the probability of detecting a photon from an intervening obscurant or a dark count. As indicated earlier, the laser radar parameters can be adjusted so that the average values for P(o|n¯) and Pn on a single pulse will be much less than 1. Multiple pulses will then be required to achieve high probabilities of detection. If the probability of detection is increased for the case with low reflectivity in the foreground, then fewer pulses will be required but more laser energy will be required per pulse. The probability Po of detecting a photon backscattered from the object of interest is also a binary event. Therefore, the probability of detecting a specified number of photons, M, backscattered from the object of interest, out of Np pulses, can be described by the binomial distribution as follows:11

P(M detections out of Np pulses) = Np!/[M!(Np − M)!] × [Po]^M × [1 − Po]^(Np−M). (16)

The value for Po can be calculated once the parameters of the lidar system are specified. However, some insight can be obtained without considering a specific system configuration. To do this, we recast Eq. (15) in the following manner:

Po = P(o|n¯)[1 − r P(o|n¯)], (17)
where r is the ratio Pn/P(o|n¯), i.e., the ratio of the strength of scattering from the intervening obscuration to the strength of the return from the target of interest. In this form, a value for P(o|n¯) can be specified (controllable through the parameters of the laser radar imaging system) and the relative strength of backscatter between the intervening obscurants and the object can be treated as a parameter. As can be seen from Fig. 6, with a design probability per pulse of 0.2, 18 pulses will provide two-pulse coincidence with a 90% probability, and 25 pulses per detector should provide 90% probability of coincidence detection with three pulses. If we used an individual probability of 0.1, then this would be 38 and 52 pulses, respectively, and, with a probability of 0.15, we would need 25 pulses for two coincidences and 34 pulses for three coincidences. For a 0.04 probability, we would need 96 and 132 pulses. Obviously, more coincident pulses provide a higher probability that we are seeing a return from a hard target. Each avalanche resulting from scatter at a certain range will occur at a random location within the pulse width. One interesting result applies to our large DAS scenario: we will reduce the DAS in both directions by a factor of 5 for bare earth and a factor of 10 for foliage poke through, resulting in 25 and 100 times as many samples. If we aggregate 25 small-area detections into one 0.5 m × 0.5 m ground sample distance (GSD), then a single pulse with a 15% design probability of detection provides two-pulse coincidence, and, in the case of aggregating 100 small-area detections, we can go down to a 4% probability of detection as a design criterion and still use only a single pulse.
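The pulse counts quoted above are tail probabilities of the binomial distribution in Eq. (16). A short sketch, which also checks the single-pulse aggregation cases:

```python
from math import comb

def p_coincidence(m, n, p):
    """Probability of at least m detections in n samples (binomial tail, Eq. 16)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# 38 pulses at a 0.1 design probability give ~90% two-pulse coincidence
print(round(p_coincidence(2, 38, 0.1), 3))
# One pulse aggregated over 25 sub-pixel samples at a 15% design probability
print(round(p_coincidence(2, 25, 0.15), 3))
# One pulse aggregated over 100 sub-pixel samples at a 4% design probability
print(round(p_coincidence(2, 100, 0.04), 3))
```

All three cases evaluate to approximately 0.9, consistent with the 90% coincidence-detection criterion.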

Fig. 6: Probability of M out of N detections.

Since we desire to maximize the number of detections from the object of interest rather than from the obscurations/false counts, we want to maximize the value of Po given the constraint that r P(o|n¯) < 1, where r is the ratio of near-range reflected light to target-reflected light. As a reminder, Po is the probability of detecting a photon from the object of interest, whereas P(o|n¯) is the probability of detecting a photon from the object of interest with no obscuration. For our foliage poke-through example, we have twice as much near-range reflection as target reflection, with the last return considered the target. In that case, r=2: two-thirds of the return flux comes from the foreground surfaces and one-third from the final surface. We maximize the probability of detecting a photon from the object of interest by differentiating Eq. (17) with respect to P(o|n¯) and setting the derivative equal to zero. We find that the maximum value occurs for

P(o|n¯)max = 1/(2r). (18)
This can guide where we set our design probability of detection. With our case of r=2 for foliage poke through, we want a design Pdet of 0.25, or 1 photon received from the target with a 25% PDE, not much different from our case without foliage. We note that the expression for P(o|n¯)max is valid for r ≥ 0.5. Traditionally, when designing GMAPD lidars, one designs for a 0.1 to 0.2 probability of avalanche from the target, or 0.4 to 0.8 photons with a PDE of 25%.
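The location of this optimum can be confirmed by scanning Eq. (17) numerically; a small sketch for the r=2 foliage case:

```python
def p_object(p_single, r):
    """Po from Eq. (17): Po = P(o|n)(1 - r P(o|n))."""
    return p_single * (1 - r * p_single)

# Scan the single-pulse design probability on a fine grid
grid = [i / 1000 for i in range(501)]
best = max(grid, key=lambda p: p_object(p, 2.0))
print(best)  # the maximum falls at 1/(2r) = 0.25
```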

To measure grayscale using GMAPD, multiple pulses are transmitted and the grayscale is built up one photodetection at a time. We compute the number of samples that must be transmitted to achieve the required number of photodetections. We use the term “samples,” because we can multiply the number of pulses times the samples per pulse to obtain the number of samples. The required number of photodetections is determined by the need to have the gray level separation large enough so that fluctuation in the number of detections is smaller than the gray level separation.

Since the mean probability of detection Po on any given pulse is less than 1, there will be fluctuation in the number of detections obtained for a given number of transmitted pulses. As discussed above, the number of detections for a given number of transmitted pulses follows the binomial distribution of Eq. (16). For a binomial distribution, the mean number of detections out of Np pulses is NpPo, and the variance in the number of detections is NpPo(1 − Po). To measure Ng gray levels (Ng = 2^Nbits), we need Ng separations, each of which is 3.34 times the standard deviation; the factor of 3.34 ensures that 90% of the probability distribution is contained within the gray level separation. Hence, we need

Np Po ≥ 3.34 Ng √[Np Po(1 − Po)], i.e., Np ≥ (3.34 Ng)²(1 − Po)/Po. (19)
As specified earlier, we have assumed a variation in reflectivity from 0.05 to 0.15, i.e., a spread of 0.10 in absolute reflectivity.

Once Ng and Po are specified, Np can be computed from Eq. (19). We can see in Eq. (19) that the required number of pulses is proportional to the square of the desired number of gray levels.
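As a numerical sketch of that scaling, assuming the criterion stated above (the mean count NpPo spans Ng separations of 3.34 standard deviations each; Po = 0.25 is the foliage design point from the preceding discussion):

```python
import math

def pulses_for_grayscale(n_bits, p_o, k_sigma=3.34):
    """Smallest Np with Np*Po >= k_sigma * Ng * sqrt(Np*Po*(1 - Po))."""
    n_g = 2 ** n_bits
    return math.ceil((k_sigma * n_g) ** 2 * (1 - p_o) / p_o)

for bits in (3, 6):
    print(bits, pulses_for_grayscale(bits, 0.25))
```

Note the roughly 64× jump between 3 and 6 bits, reflecting the Ng² dependence.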

For the small DAS case, we need to map an area of 1024×1024 pixels. With commercially available GMAPD cameras, we can use either a 32×32 detector array or a 32×128 detector array. Even with the 32×128 array, we will need 8×32 steps, or a total of 256 step stares, for the small DAS scenario. For the large DAS case, we only need 128×128 pixels with a DAS of 2.5 mrad each, so we could take four steps using the 32×128 format GMAPD array. If we reduce the DAS to reduce sun blocking loss (instead of decreasing the aperture), then we need to increase the number of step stares. To fill the same area while reducing the DAS to 0.5 mrad × 0.5 mrad increases the required number of steps from 4 to 100; for the foliage poke-through case with a DAS of 0.25 mrad × 0.25 mrad, it increases the required number of steps to 400. The foliage poke-through case has a larger window in range, so it requires more reduction in DAS to prevent detector blockage by the background from the sun. This small DAS allows us to use a 1-nm-wide filter, whereas with a large DAS, we would need a wider wavelength filter. The smaller DAS also increases the angular resolution of the image and reduces the required number of pulses because we obtain more samples per pulse.
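The step-stare counts are simple tiling arithmetic; for instance:

```python
import math

def step_stares(scene_rows, scene_cols, array_rows, array_cols):
    """Number of FPA step-stares needed to tile the scene."""
    return math.ceil(scene_rows / array_rows) * math.ceil(scene_cols / array_cols)

print(step_stares(1024, 1024, 32, 128))  # small DAS scene: 32 x 8 = 256 steps
print(step_stares(128, 128, 32, 128))    # large DAS scene: 4 steps
```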

In Table 6, we show the required number of samples, Np, and the required mapping energy for no grayscale and for either 3 bits (8 gray levels) or 6 bits (64 gray levels) of grayscale for the large DAS case. The number of pulses required for the no-grayscale case is determined by how many pulses it takes to have a 90% probability of coincidence between two samples at the same range. In each case, we have chosen this to be one pulse because we obtain 25 or 100 samples from one pulse; i.e., the number of samples per pulse is set to 25 or 100. The number of pulses required for grayscale comes from Eq. (19). r=0 is the case of no obscuration, whereas r=2 is our foliage poke-through case with twice as much energy being reflected prior to hitting the final target.

Table 6: Required number of pulses and energy required for the large DAS case.

Next, we will look at the small DAS scenario. Table 7 shows the required total energy to map the small DAS case using GMAPDs. With grayscale, especially higher levels of grayscale, we see that the required energy is significant.

Table 7: Small DAS number of pulses and energy required.

The range precision for these scenarios is not a challenge for GMAPDs and could be significantly better. This will be further discussed in the summary section.

Calculations for InGaAs Linear Mode Avalanche Photodiode Cameras

InGaAs LMAPDs are manufactured from thin films of In0.53Ga0.47As and either In0.52Al0.48As or InP, epitaxially grown on InP substrates. The principal functional layers include the relatively narrow-bandgap (0.75 eV) InGaAs absorption layer and the relatively wide-bandgap multiplication layer made from either InAlAs (1.46 eV) or InP (1.35 eV), separated by a space charge layer, which ensures that the electric field strength in the absorber remains weak enough to avoid excessive tunnel leakage when the field in the multiplier is strong enough to drive a useful rate of impact ionization. This configuration is called the separate absorption, charge, and multiplication design. The layer ordering of absorber and multiplier relative to the anode and cathode, and the polarity of doping in the charge layer, depends on whether InAlAs or InP is selected as the multiplier material. Holes avalanche more readily in InP than electrons do, so in an InP-multiplier APD, the absorber is placed next to the cathode and the charge layer is n-type; vice-versa for an InAlAs-multiplier APD. APD pixels may be formed either by patterned diffusion of the anode into the epitaxial material or by patterned etching of mesas from the thin film (in which case the anode layer was doped during epitaxial growth rather than diffused). Metal contact pads are deposited on individual pixel anodes, whereas a common cathode connection is typically made through the substrate. In etched-mesa designs, the pixel mesa sidewalls are chemically passivated and encapsulated to protect them from environmental degradation. Figure 7 depicts the structure of an InAlAs-multiplier, etched-mesa InGaAs LMAPD pixel of the type used in the detector array for which the calculations are made.

Fig. 7: InGaAs LMAPD structure.

Voxtel presently offers a prototype 128×128 flash lidar camera with a TE-cooled InGaAs photodiode detector array, and compatible LMAPD arrays are under development. Among others, Advanced Scientific Concepts (ASC), Inc., recently acquired by Continental, sells 128×128 InGaAs LMAPD-based lidar cameras. The InGaAs LMAPD section is based on detector characteristics for Voxtel's commercial InGaAs APD product, whereas detector characteristics for the HgCdTe LMAPD section are those published by DRS. In both sections, ROIC characteristics typical of two different design nodes (higher BW, higher circuit noise, and smaller pixel format, and vice-versa) are used to analyze LMAPD FPA performance.

In general, flash lidar ROICs designed to use linear-mode detectors employ a circuit in each pixel that includes a front end transimpedance amplifier to convert current or charge from the detector into a voltage signal, various filtering or pulse-shaping stages, and voltage sampling, storage, and readout circuitry. Two main sampling architectures are used: synchronous schemes in which the reflected waveform received by each pixel is regularly sampled with a period on the order of nanoseconds or asynchronous schemes in which a comparator is used to trigger sampling of reflected pulse amplitude and time-of-arrival when the signal exceeds an adjustable detection threshold. Provided the signal chain BW is high enough, both the synchronous “waveform recorder” sampling scheme and the asynchronous event-driven sampling scheme can support multihit lidar in which multiple reflections from a single transmitted laser pulse, arriving within nanoseconds of each other, are separately resolved and timed to penetrate obscurants like foliage. In both cases, sampling is active during a range gate in which target returns are expected, samples are stored locally in each pixel during the range gate, and the accumulated waveform or pulse return data is read out from the array in between laser pulses. Higher sample capacity drives ROIC pixel footprint because of the area required for storage capacitors. In general, the event-driven sampling architecture requires less space to implement because fewer samples must be stored to observe a given number of pulse returns per laser shot. The regularly sampled measurement approach has been called a full-waveform lidar for cases in which a large number of samples are stored. The sampling architecture analyzed here is the event-driven, asynchronous type, with an in-pixel storage capacity of up to three range and amplitude sample pairs. This matches the foliage poke through case analyzed here. 
Generic characteristics typical of this architecture are applied in the analysis.

High BW operation of the signal chain in a flash lidar ROIC pixel generally requires high current draw during the range gate, and the sourcing and distribution of the supply current becomes more challenging as the array format grows. For this reason, we analyze two different configurations: high range precision (higher BW) operation in which pixel current draw limits the active format to about 32×32  pixels and operation of a larger (128×128) format array with reduced range precision (lower BW). Typical camera frame rates are in the 1 to 10 kHz range but depend on the array format, the number of samples stored and read out per pixel, and the number of output data channels operated in parallel. Aside from differing supply requirements, range precision, format, and frame rate, it should also be noted that operation of the pixel signal chain at different bandwidths will affect absolute sensitivity. Most of the relevant noise sources are wide-band, so, all else being equal, operation of the signal chain with higher BW means more in-band noise and lower sensitivity. However, the signal chain’s BW also affects sensitivity to laser pulses of different shape and duration since the overlap of an input pulse’s frequency spectrum with the ROIC’s transfer function will determine how efficiently the signal is amplified. Here, we will assume that the sensor is responding to 4-ns FWHM pulses in the calculations for the low-BW configuration and to 1-ns FWHM pulses for the high-BW configuration.

As will shortly be established, the high-BW configuration is not required for the large DAS scenario (which requires 25-cm range precision), since the range precision requirement can be met in a single laser shot using the larger-format low-BW configuration. However, the smaller active format, high-range-precision configuration may be of use for the small DAS scenario (5-cm range precision). If the low-BW configuration is used in the small DAS scenario, then range measurements from multiple laser shots must be averaged to reduce the standard error of the mean range below 5 cm. We will look at whether more laser shots per array step with fewer array steps (low-BW configuration) or fewer shots per array step with more array steps (high-BW configuration) require less energy to develop the required 3-D point cloud for the small DAS scenario. Multiple range measurements reduce the standard error of the mean by the square root of the number of range measurements, such that the minimum number of range measurements (NRmin) of timing standard deviation σtROIC, which must be averaged to achieve a particular timing precision requirement σtrequired, is as follows:

NRmin = ceil[(σtROIC/σtrequired)²]. (20)
The variety of InGaAs LMAPD FPA analyzed here makes pulse return time estimates by sampling an analog voltage ramp that is distributed to all pixels in the array. Sampling of the ramp is triggered when the rising edge of a signal pulse from a detector pixel passes through an adjustable detection threshold. The threshold level must be optimized to extinguish false alarms arising from circuit noise in the ROIC convolved with the multiplied shot noise on the APD pixel's dark current and background photocurrent. The ROIC's fundamental timing uncertainty relates both to the voltage noise on the signal that triggers sampling of the ramp (jitter) and to the noise associated with reading the sampled voltage itself (resolution):

σtROIC = √{[σtref(nreference/nsignal)]² + [(Vnoise/VDR)Δtgate]²}, (21)
where nreference is a signal level at the ROIC pixel input in units of electrons for which the jitter (σtref) is known, nsignal is the mean signal level for which the jitter is to be estimated, Vnoise is the read noise on the analog time stamp, VDR is the dynamic range of the time stamp, and Δtgate is the range gate duration. Stronger signals transition through the comparator threshold faster, reducing jitter, and a faster ramp rate maps a given magnitude of read noise to a finer temporal resolution, giving rise to the timing precision characteristics calculated in Fig. 8 for typical ROIC timing characteristics. Figure 8 suggests that the low-BW camera configuration will require multiple pulses to range with 5-cm precision. With a 500-ns armed period and a very weak signal return (100 e− after avalanche multiplication), the range precision is only about 24 cm. To obtain a standard error of the mean range measurement equal to 5 cm, we would need 25 pulses at this weak signal level. However, only four shots are required for 800 e− signal returns. Even for low-BW operation, for the large DAS case, we would only need one pulse of 100 e− to obtain 25-cm range precision. Note that the 500-ns range gate is similar to the armed times used for the GMAPDs (333 to 667 ns for 50 to 100 m).
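The shot counts above follow from the standard error of the mean; a minimal helper (the ~10-cm single-shot precision for the 800 e− case is inferred from the four-shot statement, not read directly from Fig. 8):

```python
import math

def shots_to_average(sigma_single_cm, sigma_required_cm):
    """Minimum number of independent range measurements whose mean meets
    the precision requirement (standard error = sigma / sqrt(N))."""
    return math.ceil((sigma_single_cm / sigma_required_cm) ** 2)

# Low-BW configuration, weak return (~25-cm single-shot precision) vs 5-cm goal
print(shots_to_average(25.0, 5.0))
# Stronger 800 e- returns (single-shot precision of roughly 10 cm, assumed)
print(shots_to_average(10.0, 5.0))
```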

Fig. 8: Timing precision versus armed range gate for the Voxtel camera.

Each pulse return at a given optical signal level has some probability, PD1, of exceeding the detection threshold. In the large DAS scenario, where the ROIC's native timing precision is adequate to achieve the range precision requirement of 25 cm, PD1 is both the probability of detecting a target surface within a pixel's instantaneous field-of-view (IFoV) and the probability of ranging to that surface with the required precision. However, in the small DAS scenario with the low-BW configuration, multiple range measurements must be averaged to achieve the range precision requirement of 5 cm. In that case, if S total laser shots are transmitted, the probability of detecting enough pulse returns to achieve a standard error of the mean less than 5 cm is

P(success) = Σ_{Nsuccess=NRmin}^{S} [S!/(Nsuccess!(S − Nsuccess)!)] [PD1]^Nsuccess [1 − PD1]^(S−Nsuccess), (22)
where Nsuccess is the number of laser shots successfully detected. It should be pointed out that Eq. (22) gives the probability of detecting enough pulse returns to achieve a particular standard error of the mean range, but, in general, seeing the target surface with less range precision is an easier problem requiring fewer laser shots and/or a weaker signal.

Approximating the amplitude distribution of the signal into the pixel comparator as Gaussian, PD1 can be approximated as

PD1 ≈ Pready × (1/2) erfc[(nth − nsignal)/(√2 noisetotal)], (23)
where Pready is the probability that the sensor pixel is able to record a pulse return at the time it arrives, nth is the comparator threshold in units of electrons, and noisetotal is the standard deviation of the signal into the comparator, also in units of electrons; like nsignal, nth and noisetotal are quantities referred to the ROIC pixel input. The total noise is

noisetotal = √[noise²ROIC+dark+background + F M nsignal], (24)
where noiseROIC+dark+background is the standard deviation of the signal into the comparator in the absence of an optical signal return, M is the mean gain of the APD pixel, and F is the APD pixel's excess noise factor. The excess noise factor for this type of APD (but not the F ≈ 1.3 HgCdTe LMAPDs described in the next section) obeys McIntyre's formula:18

F = kM + (1 − k)(2 − 1/M). (25)

Table 8 then shows the excess noise factor for a k=0.2 InGaAs detector array.

Table 8: Excess noise factor.
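Values such as those tabulated follow directly from McIntyre's formula, F = kM + (1 − k)(2 − 1/M); a quick sketch over the gains used later in the analysis:

```python
def excess_noise_factor(k, m):
    """McIntyre's excess noise formula: F = kM + (1 - k)(2 - 1/M)."""
    return k * m + (1 - k) * (2 - 1 / m)

for gain in (5, 10, 15, 20):
    print(gain, round(excess_noise_factor(0.2, gain), 2))
```

For k=0.2, this gives F = 5.56 at M=20, illustrating why operating at the highest gain is not automatically optimal.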

Conceptually, noiseROIC+dark+background is three separate noise terms added in quadrature: a purely circuit-related noise term and the multiplied shot noise of the APD's dark current and CW background photocurrent. According to Table 3, a background photon arrival rate per pixel of up to 0.0931 photon/ns per nm of filter BW is possible in the worst case (near target scenario; 2.5-mrad DAS; 25-mm aperture). Figure 4 estimates that the best filter width we can have in this case is 9.3 nm, so the background flux in the large DAS case will be 0.87 photons/ns. For a k=0.2 InGaAs/InAlAs APD pixel with 80% QE and 70% fill factor, operated at a mean gain of M=20, the worst-case background photocurrent is about 1.6 nA. This is about an order of magnitude larger than the APD pixel's 0°C dark current at this gain, which is about 0.2 nA. Filter width is not a problem in the far target scenario, with 0.1-mrad DAS. Table 3 gives a background photon rate of about 0.0024 photon/ns, corresponding to about 4 pA of photocurrent, which is negligible compared to the pixel dark current. The worst-case optical background combined with the APD pixel's 0°C, M=20 dark current together contribute about 71 e− RMS of multiplied shot noise at the ROIC pixel input, whereas with negligible optical background, the multiplied shot noise of the APD pixel's dark current alone is about 24 e− under these conditions. In the low-BW configuration, responding to 4-ns FWHM laser pulses, an input-referred pixel circuit noise of about 30 e− can reasonably be achieved. In the high-BW configuration, responding to 1-ns FWHM laser pulses, the ROIC's circuit noise would roughly double. Consequently, in the low-BW configuration, the difference between the worst-case solar background and negligible background is noiseROIC+dark+background ≈ 77 e− RMS versus ≈ 38 e− RMS. The high-BW configuration would not be applied to the large DAS case because of its 16× smaller format and the relaxed range precision requirement of that scenario; in the small DAS case, the optical background is negligible, and noiseROIC+dark+background ≈ 65 e− RMS for the high-BW configuration. It should also be remarked that if the APD pixel is operated at lower gain, such as M=10, the detector shot noise is smaller. We make calculations for APD pixel gains of M=5, 10, 15, and 20 to find an optimal operating point.

The arming probability Pready appearing in Eq. (23) depends on when in the range gate the target surface is located (ttarget), the pixel's false alarm rate (FAR) at that detection threshold setting, and the sample capacity of the pixel (C). Since false alarms in an LMAPD receiver circuit are independent stochastic events whose average rate of occurrence is given by the FAR, Poisson statistics apply, and the probability that at least one unused sample storage location is available at the time the return from the target surface is received is

Pready = Σ_{j=0}^{C−1} e^(−FAR ttarget)(FAR ttarget)^j/j!, (26)
where C=3 is typical of what can fit into a small-pitch ROIC design. To reduce the number of model parameters, one can set ttarget equal to the range gate duration Δtgate, which corresponds to the conservative case of a target surface at the very end of the range gate.
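The Poisson arming probability can be sketched directly, conservatively evaluating at the end of the gate as in the text (the 100-kHz FAR below is an illustrative value, not taken from the paper):

```python
import math

def p_ready(far_hz, t_target_s, capacity=3):
    """Probability that fewer than `capacity` false alarms occur by t_target,
    so at least one sample storage location remains free (Poisson counts)."""
    mu = far_hz * t_target_s
    return sum(math.exp(-mu) * mu**j / math.factorial(j) for j in range(capacity))

# Illustrative numbers: a 100-kHz FAR over a 500-ns gate, 3-deep buffer
print(p_ready(1.0e5, 500e-9, capacity=3))
```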

Like PD1, the FAR depends on the detection threshold (nth), but the standard Gaussian approximation for the noise distribution does not accurately model APD noise. Instead, the McIntyre-distributed19 noise of the APD must be explicitly convolved with the Gaussian-distributed noise of the ROIC to find the amplitude distribution of noise pulses into the pixel comparator:

PRX = PROIC ∗ PAPD, (27)
where PRX is the amplitude distribution of the noise into the pixel comparator, PROIC is a Gaussian-like discrete distribution that characterizes the pixel circuit noise, and PAPD is the average of McIntyre distributions for the multiplied output of an APD given a certain number of primary input electrons, weighted by the probability that each quantity of primary electrons will result from dark current and background photocurrent generation processes as calculated by Poisson statistics. The convolution is best performed numerically. Figure 9 shows the convolutions calculated for mean APD gains of M=5, 10, 15, and 20 for a k=0.2 InGaAs/InAlAs APD pixel in the large DAS case and compares the convolutions to Gaussian approximations having the same mean and variance. While correspondence is fairly close near the mean, tail divergence is a significant factor for FAR calculations owing to the need to set a detection threshold that extinguishes the great majority of false alarms. Following Rice,20 the FAR is found from a prefactor that depends on the pixel signal chain’s BW, noiseROIC+dark+background, and the value of the convolution at the comparator threshold (nth): Display Formula

Fig. 9: Example convolutions of multiplied APD dark current and background photocurrent shot noise with circuit noise, compared to Gaussian approximations having the same means and variances, for the 25-mm aperture large DAS case.
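A minimal numerical sketch of such a convolution, using a normalized placeholder pmf in place of the true Poisson-weighted McIntyre distribution that the analysis actually convolves:

```python
import math

def gaussian_pmf(half_width, sigma):
    """Discretized zero-mean Gaussian, standing in for ROIC circuit noise."""
    xs = range(-half_width, half_width + 1)
    w = [math.exp(-0.5 * (x / sigma) ** 2) for x in xs]
    total = sum(w)
    return [v / total for v in w]

def convolve(p, q):
    """Discrete convolution of two pmfs."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Placeholder heavy-tailed APD pmf (NOT the McIntyre distribution)
apd = [0.90, 0.05, 0.02, 0.01, 0.008, 0.007, 0.005]
rx = convolve(gaussian_pmf(90, 30.0), apd)
print(abs(sum(rx) - 1.0) < 1e-9)  # convolving two pmfs yields a pmf
```

The heavy upper tail of the APD term is what drives the divergence from the equal-variance Gaussian in Fig. 9, and hence the FAR at high thresholds.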

Fig. 10: Probability of achieving 25-cm range precision in one laser shot, against bare earth (C=3, dashed) and with two-return (C=2, dotted) or three-return (C=1, solid) foliage penetration, using M=20 LMAPD and the full format mode.

In addition to influencing the arming probability Pready, the FAR also determines the probability of a false positive. In the large DAS case, for which a single range measurement is required to achieve the specified range precision, Poisson statistics give the probability of at least one false positive occurring within the range gate of a given pixel as follows:

Pfalse positive = 1 − e^(−FAR Δtgate). (29)
In the small DAS case, where multiple pulse returns must be averaged to reduce the standard error of the mean range measurement, the coincidence of returns from the same range can be used to reject false alarms. If a validation rule of the form “Nvalid returns within ±terror of a given time-of-arrival” is applied, the probability of at least one false positive consisting of at least Nvalid time-coincident false alarms occurring anywhere within the range gate, over S total laser shots, is

Pfalse positive ≈ 1 − [1 − Pcoincidence]^(Δtgate/2terror), (30)

where the probability of at least Nvalid false alarms out of S laser shots occurring within any given 2terror time span is

Pcoincidence = Σ_{k=Nvalid}^{S} [S!/(k!(S − k)!)] [PFA,span]^k [1 − PFA,span]^(S−k), (31)

and the probability of at least one false alarm, PFA,span, occurring within any given 2terror time span per shot is as follows:

PFA,span = 1 − e^(−FAR 2terror). (32)
This is similar to the calculations we used for GMAPDs with a low probability of detection on a single pulse. To summarize, the probability of successfully measuring range to the required precision depends on the number of laser shots transmitted (S), the number of range measurements required to achieve that precision (NRmin), and the per-shot detection probability (PD1). The number of range measurements required depends on the signal strength (nsignal), as does the per-shot detection probability. PD1 also depends on the probability that the ROIC pixel’s sample capacity has not filled up with false alarms by the time a valid target return arrives, on the detection threshold (nth), and on the total noise (noisetotal). The total noise includes a component that depends on the signal strength and a component that is present in the absence of the signal (noiseROIC+dark+background). The analysis is completed by calculation of the FAR, which depends on noiseROIC+dark+background and nth. For a given value of nsignal and S, nth can be varied to maximize PD1. The maximum value of PD1 is then compared to the critical value of PD1 required to achieve a particular probability of measuring range to the required precision (e.g., 90%), and nsignal is adjusted until the critical value is just barely reached. This determines the required signal strength at the ROIC pixel input. To translate nsignal into photons per pixel at the FPA (i.e., after collection by the camera aperture and any losses in the optical train), one divides by the product of the mean APD gain (e.g., M=20), the APD’s QE (80%), and the fill factor of the detector pixel (e.g., 70%).
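The search loop just described can be sketched with a Gaussian stand-in for the FAR-driven threshold choice (the actual analysis uses the numerically convolved distribution). The 6σ threshold rule, the 0.9 goal, the 1-e− search step, and F = 5.56 (McIntyre, M=20, k=0.2) are illustrative assumptions, and `required_signal` is a hypothetical helper, not the paper's solver:

```python
import math

def p_d1(n_signal, n_th, noise_floor, f_excess=5.56, gain=20.0):
    """Per-shot detection probability, Gaussian approximation (Eq. 23 form);
    total noise adds the multiplied signal shot noise, variance F*M*n_signal,
    with all quantities in multiplied electrons at the ROIC input."""
    noise_total = math.sqrt(noise_floor**2 + f_excess * gain * n_signal)
    return 0.5 * math.erfc((n_th - n_signal) / (math.sqrt(2.0) * noise_total))

def required_signal(noise_floor, p_goal=0.9, threshold_sigmas=6.0):
    """Raise n_signal (1 e- steps) until detection meets the goal at a
    threshold held several sigma above the no-signal noise floor (a crude
    stand-in for the explicit FAR-based threshold optimization)."""
    n_th = threshold_sigmas * noise_floor
    n_signal = 1.0
    while p_d1(n_signal, n_th, noise_floor) < p_goal:
        n_signal += 1.0
    return n_signal

# Worst-case solar background (77 e-) vs negligible background (38 e-)
print(required_signal(77.0), required_signal(38.0))
```

As expected, the stronger background noise floor demands a substantially larger signal at the ROIC input; dividing by M, QE, and fill factor converts that to photons at the FPA.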

The radiometric model described in an earlier section is then used to backcalculate the transmitted laser pulse energy required to achieve the necessary signal level at the FPA under different scenarios (bare earth, foliage poke through, grayscale, etc.). Although multiple laser shots can be used for foliage poke through, as with a GMAPD, the very short (nanosecond) reset time of LMAPD pixels enables multihit lidar with a single laser shot if the ROIC can store multiple pulse returns.

Figure 10 is a plot of the probability of achieving 25-cm range precision using an M=20, k=0.2, QE=80%, fill factor=70% LMAPD detector array operated at 0°C with the low-BW ROIC configuration. The optical background for the 2.5-mrad DAS (worst case) scenario was used. Curves corresponding to C=3 (single hit), C=2 (two-hit, single-shot foliage poke through) and C=1 (three-hit, single-shot foliage poke through) are plotted. The minimum signal level for which there is a 90% chance of ranging to 25-cm precision, against bare earth (C=3), is 62 photons when the APD pixel operates at M=5, 53 photons for M=10, 56 photons for M=15, and 61 photons for M=20. The optimal gain is lower than the maximum gain in this scenario because of the strong background.

Figure 11 is a plot of the probability of achieving 5-cm range precision in S=7 laser shots using an M=20, k=0.2, QE=80% LMAPD detector array operated at 0°C with 70% optical coupling efficiency in combination with the low-BW configuration ROIC. Curves corresponding to C=3 (single hit; blue), C=2 (two-hit, single-shot foliage poke through; green), and C=1 (three-hit, single-shot foliage poke through; red) are plotted. The steps in the curves occur at signal levels where the minimum number of range measurements that must be averaged to achieve the specified range precision, NRmin, changes by an integer, as calculated in Eq. (20). For example, the probability of detecting seven out of seven laser shots at a signal level of 39 photons is much lower than the probability of detecting six out of seven laser shots at a signal level of 40 photons, mainly because the number of required detections drops by 1 (as opposed to the marginally stronger signal return). That is why all three curves drop discontinuously between 39 and 40 photons.

Fig. 11: Probability of achieving 5-cm range precision in seven laser shots, against bare earth (blue) and with two-return (green) or three-return (red) foliage penetration, using M=20 LMAPD and full format mode.

Figure 12 is a plot of the probability of achieving 5-cm range precision in a single laser shot using an M=20, k=0.2, QE=80% LMAPD detector array with 70% optical coupling efficiency in combination with the high-BW configuration ROIC. Curves corresponding to C=3 (single hit), C=2 (two-hit, single-shot foliage poke through), and C=1 (three-hit, single-shot foliage poke through) are plotted. The 16× difference in coverage between the high- and low-BW ROIC configurations should be considered when comparing this result to the low-BW calculation of Fig. 11.

Fig. 12: Probability of achieving 5-cm range precision in one laser shot, against bare earth and with two-return or three-return foliage penetration, using M=20 LMAPD and a high-range precision mode.

The number of laser shots and average signal return levels per shot required to have a 90% probability of ranging to the precisions specified for the near and far target scenarios are summarized in Table 9.

Table 9: Photons required per pixel and per shot for each of the cases.

The values in Table 9 are the required photons at the focal plane per laser shot and the number of laser shots, per pixel, per stepping of the FPA’s FoV across the scene. The figures given for foliage poke-through include the factor of 1.6× reduction in cross section for returns from the farthest obscured target surface and account for the higher detection threshold setting needed for multihit-per-shot lidar. In both the low-BW, large-DAS case and the high-BW, small-DAS case, a single laser shot suffices to achieve the specified range precision against bare earth and with foliage poke-through. In the low-BW, small-DAS case, the least total energy is required when four laser shots are used against bare earth and three for foliage poke-through. When total energy is calculated, the number of times the FPA’s FoV must be stepped to cover the scene will also be taken into account. Both high-BW and low-BW configurations are listed in Table 9, but, in the summary table of required energy for mapping, we only present data for the configuration that requires the least total energy.
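The total-energy bookkeeping described above is a simple product: photons per shot, times shots per pixel, times pixels per FoV step, times the number of FoV steps needed to cover the scene. A minimal sketch with hypothetical photon budgets (not the values in Table 9), illustrating how the 16× difference in coverage between ROIC configurations enters the comparison:

```python
def total_photons(photons_per_shot, shots_per_pixel, fov_steps, pixels_per_step):
    """Total photons at the focal plane needed to cover the whole scene."""
    return photons_per_shot * shots_per_pixel * fov_steps * pixels_per_step

# Hypothetical budgets: the high-BW ROIC covers 1/16 the area per step,
# so it needs 16x as many FoV steps for the same scene.
low_bw = total_photons(photons_per_shot=40, shots_per_pixel=4,
                       fov_steps=1, pixels_per_step=256 * 256)
high_bw = total_photons(photons_per_shot=25, shots_per_pixel=1,
                        fov_steps=16, pixels_per_step=256 * 256 // 16)

print("low-BW total photons :", low_bw)
print("high-BW total photons:", high_bw)
```

With these made-up numbers the high-BW configuration wins despite covering less area per step, because it needs only one shot per pixel; the paper's summary table makes the same comparison with its measured requirements and keeps only the cheaper configuration.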

ROICs of this architecture are also capable of grayscale range imaging if they are set up to sample and store the pulse return amplitude at the same time that they sample the analog time stamp. In passive imaging systems, the least-significant bit (LSB) of a sensor’s dynamic range is normally mapped to its noise floor, such that 6 bits of grayscale imaging would span the range from 1× to 64× the noise-equivalent input level. Passive imaging also assumes natural scene illumination. However, because the flash lidar architecture considered here uses an event-driven amplitude sampling scheme, pulse return amplitudes weaker than the comparator threshold will not be sampled. Furthermore, the ROIC’s amplifier chain is usually AC coupled, so natural continuous-wave (CW) scene illumination will not trigger sampling except through its contribution to the FAR. Grayscale imaging with such a ROIC is active imaging of the reflected laser pulse intensity. As such, the granularity of the grayscale image is still the noise-equivalent input level of the sensor, but the dynamic range spanned is offset from zero by the detection threshold. By the same token, the dynamic range available for grayscale imaging is smaller than the dynamic range of the signal chain into the threshold comparator.
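The dynamic-range offset described above can be made concrete: with one LSB per noise-equivalent input (NEI), the usable grayscale span runs from the detection threshold up to full scale, not from zero. A minimal sketch with hypothetical photon-unit numbers:

```python
from math import floor, log2

def grayscale_bits(full_scale, threshold, nei):
    """Usable grayscale bits when the span starts at the detection threshold
    rather than zero, with one LSB equal to the noise-equivalent input."""
    usable_levels = (full_scale - threshold) / nei
    return floor(log2(usable_levels)) if usable_levels >= 1 else 0

# Hypothetical numbers (photon units): a 2048-photon full scale with a
# 1-photon NEI spans 11 bits when referenced to zero; raising the detection
# threshold to 1100 photons trims the active-imaging span to 9 bits.
print(grayscale_bits(full_scale=2048, threshold=0, nei=1))
print(grayscale_bits(full_scale=2048, threshold=1100, nei=1))
```

The granularity (one NEI per level) is unchanged; only the span available above the comparator threshold shrinks, which is the point made in the text.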

The grayscale resolution of a conventional passive imager is often expressed as a dynamic range in bits, which is calculated from the camera’s analog dynamic range by equating the LSB to the camera’s noise floor. However, optical signal shot noise increases as the square root of signal level, so an LSB, which represents the noise at zero signal (i.e., in the dark), does not quantify the accuracy with which nonzero signal amplitude can be measured, nor is it possible to define an LSB of a fixed size that exactly expresses signal amplitude measurement accuracy for all signal levels within an imager’s dynamic range. By contrast, this paper quantifies grayscale resolution based on there being a 90% probability that any given signal return amplitude measurement lies within a set interval centered on the average return level corresponding to the true target reflectance. The signal interval for which the calculation is made is that spanned by a reflectance bin of specified width.
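The 90%-within-bin criterion can be sketched under a Gaussian shot-noise approximation: a return with mean N = C·ρ photons has standard deviation √N, and it must land within ±C·Δρ/2 photons of its mean to stay inside the reflectance bin. The illustration below (not the paper's calculation; the root-finding method and the Gaussian approximation are assumptions) shows why fine reflectance bins demand large photon counts:

```python
from math import erf, sqrt

def prob_in_reflectance_bin(c_radiometric, rho, delta_rho):
    """Gaussian shot-noise approximation: probability that a return with mean
    N = C*rho photons lands within +/- C*delta_rho/2 photons of that mean,
    i.e., inside the reflectance bin centered on the true reflectance."""
    mean_n = c_radiometric * rho
    half_width = c_radiometric * delta_rho / 2
    return erf(half_width / sqrt(2 * mean_n))

def c_for_90pct(rho, delta_rho, tol=1.0):
    """Smallest radiometric factor C giving >= 90% in-bin probability,
    found by doubling then bisection (illustrative, not the paper's method)."""
    lo, hi = 1.0, 2.0
    while prob_in_reflectance_bin(hi, rho, delta_rho) < 0.90:
        lo, hi = hi, 2 * hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prob_in_reflectance_bin(mid, rho, delta_rho) < 0.90:
            lo = mid
        else:
            hi = mid
    return hi

# 6 bits of grayscale over the 5%-15% reflectance span: 64 bins.
delta_rho = (0.15 - 0.05) / 2**6
c = c_for_90pct(rho=0.15, delta_rho=delta_rho)
print(f"C ~ {c:.0f}, mean signal at rho = 15%: {0.15 * c:.0f} photons")
```

The resulting mean photon count is large, consistent with the paper's conclusion that 6 bits of intensity gray levels requires substantial energy in all camera modalities; the exact figures here are illustrative only.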

Equation (11) for the mean signal return level in photons per pixel can be rewritten as N = C(R)·ρ, where C(R) is a range-dependent function containing the radiometric aspects of the problem and ρ is the target reflectance, which runs between ρ_low = 5% and ρ_high = 15%. If the range spanned by the target reflectance is divided into 2^(N_bits) range bins, the reflectance bin width is Δρ = (ρ_high − ρ_low)/2^(N_bits). The mean signal range spanned by a reflectance bin is therefore