1. INTRODUCTION

Satellite-based instruments can be used to measure the spatial distribution of anthropogenic greenhouse gases in the air. The concept is to spectrally analyze the solar rays scattered at the earth’s surface. Typically, a push-broom grating spectrometer concept is used, in which a line on the earth’s surface is imaged onto the spectrometer’s entrance slit and analyzed spatially in one dimension. With the line set across the direction of movement of the satellite, a two-dimensional map can be generated1,4. The accuracy requirements for the spectral measurements are very high. Subpixel inhomogeneities of the light intensity in the input image of the spectrometer - as in the case of a transition from water to soil - lead to a change in the instrument’s spectral response function (ISRF). As a result, the spectrum is shifted and the determination of the gas concentration becomes inaccurate2,3. To prevent this, fiber-based slit devices can be used to homogenize the light distribution in front of the entrance slit for each pixel column. We performed measurements to determine the homogenization behavior of different fiber arrangements and geometries under different conditions. The investigated devices have been manufactured by CeramOptec. The single fibers have silica cores with sizes of 100 x 100 µm and 300 x 100 µm and lengths of 10 cm, 1 m and 10 m each. The arrays are stacked fibers with the same core geometries and lengths of 2 cm and 4 cm. The arrays with rectangular cores consist of 80 cores, the arrays with square cores of 240 cores. The measurements have been performed at 770 nm (NIR) and 1620 nm (SWIR). Using fibers also introduces problems, which have been examined as well: the accuracy in manufacturing and positioning of the fibers, the broadening of the light cone due to the non-perfection of the fibers (focal ratio degradation: FRD), and energy losses.
However, compared to mirror-based homogenizers, there is the advantage that partial depolarization of the incoming light is achieved.

2. GEOMETRY MEASUREMENTS

In a first step, the geometries of the input and output surfaces of the single fibers and slit assemblies (arrays) have been characterized. The following parameters have been taken into account: width and height of the actual core, cladding thicknesses and the centering of the core for the single fibers (Figure 1), and core-to-core distance, across-slit center variation and the rotational alignment for the arrays (Figure 2). Two different microscopes have been used for the measurements. A modified Zeiss Axiovert 10 with a 4x/0.1 microscope objective lens is used for the investigation of the individual fibers. Measurements with a USAF target show that a resolution better than 4 µm is achieved (1/25 of the core size). Image distortions of the combination of image sensor and objective are determined and corrected using camera calibration based on OpenCV and a chessboard target. Since only two cores can be placed in the field of view of the Axiovert, an additional Keyence VHX-500F microscope with a VH-Z100UR objective is used for the measurement of some of the array parameters. An image of the complete fiber array is created by stitching 8 images. The results for the single fibers are determined by hand (Table 1).

Table 1. Results of the geometry measurements of the single fibers. For each fiber the input and output surface is measured.
Since the arrays consist of 80 and 240 cores, the evaluation has been done automatically using image processing. For the high-resolution images, which are used to determine the width and height of the fibers, the following approach has been used:
For the low-resolution, large-field-of-view images, the following methods have been used:
Some cores have not been automatically detected by the software, but at least 90% of the cores have been taken into account. The results are shown in Table 2.

Table 2. Results of the geometry measurements of the fiber arrays. For each array the input and output side is measured. The values labeled “std.” denote the standard deviation over all cores; maximal deviations are labeled “max”. The measurement uncertainties are estimated to be better than 1 µm for the high-resolution measurements and better than 4 µm for the low-resolution measurements (core-to-core distance d, length l and across-slit position s).
In conclusion, it can be said that:

3. LIGHT HOMOGENIZATION EFFICIENCY

The schematic setup to measure the homogenization capability of the different devices is shown in Figure 3. The goal is to illuminate the fiber input with different scenes of varying homogeneity - matching the conditions that will prevail later in orbit - and to observe the homogeneity of the near-field intensity at the fiber output. The setup consists of three parts: the illumination system, the imaging of the scenes/mask onto the homogenizer input, and the imaging of the homogenizer output onto the main image sensor. Each lens of the setup consists of an achromatic lens pair, and everything is designed to allow large tolerances in lens production and alignment of the setup. Optical simulations and measurements are carried out to prove that the imaging systems are diffraction limited. An F-number of 3.4 is set for the entire system by the three apertures.

3.1 Illumination system

General coherence requirements have to be considered for the illumination. Due to the high resolution of the spectrometer (0.01 - 0.55 nm), temporal coherence prevails1. The spatial coherence width w can be approximated according to the Van Cittert-Zernike theorem as

w = 2λz / a.

With a ground resolution cell diameter a of 100 m and a satellite height z of 800 km, this results in a coherence width of 16 mm for a wavelength λ of 1000 nm. Since the entrance pupil of the imaging telescope is much larger, we can assume spatial incoherence for this imaging condition. These illumination requirements are achieved in the measurement setup by using a laser as light source (temporally coherent) in combination with a fixed and a rotating diffuser (destroying spatial coherence). Two different tunable fiber-coupled lasers from Sacher Lasertechnik GmbH (Model TEC 500) are used, set to wavelengths of 770 nm and 1620 nm. The rotating diffuser behaves like many statistically independent emitters.
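This estimate can be checked with a few lines (a minimal numerical sketch; the factor of two in the approximation is the convention that reproduces the quoted 16 mm):

```python
# Spatial coherence width w at the telescope entrance pupil,
# approximated via the Van Cittert-Zernike theorem as w = 2 * lambda * z / a.
wavelength = 1000e-9  # wavelength lambda in m
z = 800e3             # satellite height in m
a = 100.0             # ground resolution cell diameter in m

w = 2 * wavelength * z / a
print(f"coherence width: {w * 1e3:.1f} mm")  # prints "coherence width: 16.0 mm"
```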
A Fourier geometry (lens L1, f1 = 100 mm, image-sided telecentric) is used to homogenize the light, as seen in Figure 3. Each point on the diffuser contributes to the illumination of the entire mask. Measurements have been made to confirm the spatial incoherence and homogeneity of the illumination system.

3.2 Illumination conditions

The fiber input is illuminated with the scenes shown in Figure 4, which represent different scenarios. In scene 1, the input is illuminated homogeneously, as is the case when flying over a homogeneous area. Scene 2 consists of a gradient along the flight direction (along track: ALT). This represents the case in which the area mapped by the satellite changes from a non-reflective to a highly reflective surface within one integration time, as happens, for example, at the transition from land to water. Scene 3, on the other hand, corresponds to the case where one half of the fiber is illuminated and the other half is not (step across track: ACT). Scene 4 is a combination of scenes 2 and 3. The examples are shown for rectangular fibers, but the same scenes also apply, scaled, to the square fibers. The scenes are created by imaging the mask demagnified (1/15) onto the fiber input. The mask is a chromium-on-glass plate which is processed in-house using photolithography. The scenes are created as a binary pattern (dithering) with a resolution higher than the resolution of the imaging system in the measurement setup. Each fiber and scene is measured with different polarizations, which are defined by a polarizer in front of the fiber (polarizer 1) and a second one in front of the image sensor (polarizer 2). Measurements with all combinations of horizontal and vertical polarization (input and output) and one without any polarizers are performed, resulting in 5 measurements per fiber and scene.
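How a binary mask pattern can approximate a continuous gradient such as scene 2 can be illustrated with a short sketch. Error-diffusion (Floyd-Steinberg) dithering is used here as an example; the text only states that a binary pattern was used, so the particular algorithm is an assumption:

```python
import numpy as np

def floyd_steinberg(gray):
    """Dither a grayscale image (values in [0, 1]) to a binary pattern."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # distribute the quantization error to unprocessed neighbors
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# scene 2: a linear gradient along track, rendered as a binary mask;
# locally averaged, the binary pattern reproduces the gradient
gradient = np.tile(np.linspace(0, 1, 64), (64, 1))
mask = floyd_steinberg(gradient)
```

Since the binary pattern is finer than the resolution of the imaging system, the optics average over it and the fiber input effectively sees the continuous gradient.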
3.3 Imaging of the mask onto the homogenizer device

The system for imaging the mask onto the homogenizer device consists of lens L2 (f2 = 150 mm) and lens L3 (f3 = 10 mm, double-sided telecentric). The image sensors 2 and 3, in combination with the lenses L4 and L5 and the first beam splitter, see the mask (sensor 3) and the fiber input (sensor 2). They are used to align the focus of the lenses and to position the scenes. The position of the mask can be aligned in three dimensions.

3.4 Imaging of the homogenizer device onto the main image sensor

The second imaging system is also double-sided telecentric and consists of lens L6 (f6 = 20 mm) and lens L7 (f7 = 200 mm), resulting in a 10x magnification. Image sensor 4 in combination with lens L8 and the second beam splitter is again used for alignment purposes. For the NIR measurements, a scientific CMOS sensor (pco.edge 3.1) is used as the main image sensor (1). In the SWIR band, a Raptor Photonics Ninox 640 VIS-SWIR imager is used. The requirements concerning radiometric accuracy are stringent, so an extensive calibration of the sensors concerning fixed pattern noise contributions is necessary. In contrast to the EMVA 1288 standard5, the setup used here is based on a Fourier geometry and image-sided telecentricity. This leads to the same noise behavior as in the final setup. Both sensors show very good noise characteristics (photo response non-uniformity PRNU < 0.5 %) in their range of sensitivity. However, the Raptor Photonics image sensor shows some dead pixels (114 of 325 280), as is typical for SWIR sensors. For the method used later to determine the homogenization quality, these peaks must be eliminated. This is done with image processing using OpenCV: first, a copy of the image is created, to which an erosion operation (kernel: round, 3x3 pixels) and then a dilation operation (kernel: round, 7x7 pixels) is applied.
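The dead-pixel removal can be sketched as follows. To keep the sketch self-contained, the morphological operations are implemented as NumPy min/max filters with square kernels instead of OpenCV's round ones (cv2.erode/cv2.dilate), and the dead pixels are located by thresholding the difference to the filtered copy; the detection criterion and the threshold value are assumptions:

```python
import numpy as np

def min_filter(img, k):
    """Grayscale erosion: local minimum over a k x k neighborhood."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    windows = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(k) for dx in range(k)]
    return np.min(windows, axis=0)

def max_filter(img, k):
    """Grayscale dilation: local maximum over a k x k neighborhood."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    windows = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(k) for dx in range(k)]
    return np.max(windows, axis=0)

def remove_dead_pixels(img, thresh=500.0):
    """Replace dead-pixel peaks with values from an eroded/dilated copy."""
    reference = max_filter(min_filter(img, 3), 7)  # erosion (3x3), then dilation (7x7)
    dead = np.abs(img - reference) > thresh        # assumed detection criterion
    out = img.copy()
    out[dead] = reference[dead]
    return out
```

The erosion removes the isolated peaks; the subsequent dilation with the larger kernel restores the surrounding intensity level, so the filtered copy provides plausible replacement values at the dead-pixel positions.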
This way a dead-pixel-free image is created, which is then used to replace the areas of the original image containing dead pixels. The steps are shown in Figure 5 using a small section of a homogeneously illuminated image with enhanced contrast.

3.5 SWIR

The image sensors 2, 3 and 4 which are used for alignment are not sensitive to the SWIR wavelength. Therefore, a HeNe laser with a wavelength of 633 nm is used for alignment; a fiber coupler allows switching between the wavelengths. The wavelength-dependent behavior of the lenses must be taken into account. Therefore, the lenses L3 and L6 are mounted on a bracket that can be moved axially in the micrometer range. A Zemax simulation is used to determine how much the lenses have to be moved when changing from 633 nm to 1620 nm. Since the effect barely depends on the lenses L2 and L8, which have a high focal length, the telecentricity is sufficiently maintained. Some components - such as the beam splitter - have been exchanged for the SWIR measurements with components coated for this wavelength. This way, most unwanted reflections and interferences are suppressed. However, interference effects caused by the image sensor’s cover glass remain, as seen in Figure 6a. Since these interferences are spatially constant, they can be eliminated using the method shown in Figure 7. The image sensor (without lens) is mounted on a vertically movable stage. Images are taken at different heights of the sensor, displaced by multiples of the pixel pitch. The image of the fiber moves over the sensor area and thus appears at a different position relative to the interference pattern in each image. Image processing is then used to digitally shift the captured images back, considering the difference in sensor height. This is possible since the accuracy of the stage is high compared to the pixel pitch. Averaging the shifted images results in an interference-free image, as shown in Figure 6b.
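The shift-and-average principle can be sketched with synthetic data, where np.roll stands in for the mechanical displacement of the sensor (a minimal sketch; the pattern shapes and amplitudes are illustrative, and the 16 shifts are chosen to span a whole number of fringe periods so the cancellation is exact in this toy example):

```python
import numpy as np

h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]

# "fiber image": a smooth spot; "fringes": a sensor-fixed interference pattern
signal = 100.0 * np.exp(-((ys - 32) ** 2 + (xs - 32) ** 2) / (2 * 8.0 ** 2))
fringes = 5.0 * np.sin(2 * np.pi * ys / 8.0)  # fixed to the sensor, period 8 rows

# moving the sensor: the fiber image shifts, the fringe pattern does not
shifts = np.arange(16)  # displacements in multiples of the pixel pitch
frames = [np.roll(signal, s, axis=0) + fringes for s in shifts]

# shift the frames back digitally and average: the signal adds coherently,
# the fringe contribution averages out over whole fringe periods
restored = np.mean([np.roll(f, -s, axis=0) for f, s in zip(frames, shifts)],
                   axis=0)
```

In the real measurement the fringe pattern is not strictly periodic in the displacement, so the averaging suppresses rather than perfectly cancels it.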
The remaining vertical patterns are caused by the fiber and are what is to be measured. Since the SWIR image sensor has a worse PRNU than the NIR sensor, the method described here offers a further advantage: at each position the fiber is imaged by different pixels, so the impact of the PRNU is reduced. For the measurements, 20 positions with 20 time averages each are recorded. Additionally, a background image is always subtracted to further improve the image quality.

3.6 Results

Some example images of different fibers and input illuminations are shown in Figure 8. In total, more than 1000 measurements (different devices, cores, polarizations, wavelengths) have been performed. In general, the following results have been obtained:
4. DEPOLARIZATION

One additional advantage of using fibers is their depolarization effect. Depolarization improves the accuracy of the spectrometer or even eliminates the need for additional components. The degree of depolarization of the devices has been investigated with the setup described in section 3, using a homogeneous illumination of the fiber input (scene 1). The polarizer in front of the device is rotated in 10 degree steps. For each position, a maximum (index: max) and a minimum (index: min) irradiance E at the image sensor is determined by rotating the second polarizer (analyzer). An often-used parameter for describing the depolarization is the so-called degree of polarization6 D:

D = (E_max - E_min) / (E_max + E_min).

The irradiance is defined here as the sum of all gray values w of the image sensor divided by the integration time t:

E = (Σ w) / t.

This is possible since the image sensors have a good linearity. To minimize errors caused by background light and noise, a dark image is always subtracted. The results for NIR (blue) and SWIR (orange) are depicted in Figure 9. As expected, the degree of depolarization increases with the fiber length. Thus, almost no depolarization is present with the short square fiber 1. In contrast, the 10 m long fibers 3 and 6 depolarize so strongly that no more measurements can be made for SWIR, since there is no difference between the angular positions of the analyzer. Another visible effect is the asymmetry of the results regarding the input polarization. The cores have always been aligned horizontally.

5. FOCAL RATIO DEGRADATION

An unwanted effect caused by fiber-based homogenization devices is the broadening of the light cone, which is named focal ratio degradation (FRD)7,8. The FRD must be taken into account when designing the spectrometer optics. In this study, we examine the numerical aperture (NA) of the beam emitted by the fiber compared to that of the injected beam.
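The evaluation of D can be sketched as follows (a synthetic analyzer scan following Malus's law for a partially polarized beam; the polarized fraction p and the image values are illustrative):

```python
import numpy as np

def irradiance(image, t):
    """Irradiance proxy: sum of all gray values divided by the integration time t."""
    return image.sum() / t

def degree_of_polarization(e_max, e_min):
    """Degree of polarization D = (E_max - E_min) / (E_max + E_min)."""
    return (e_max - e_min) / (e_max + e_min)

# example irradiance from a tiny synthetic frame (4x4 pixels of value 8, t = 2 s)
e_frame = irradiance(np.full((4, 4), 8.0), t=2.0)  # -> 64.0

# synthetic analyzer scan: unpolarized half plus polarized part (Malus's law)
p = 0.4  # polarized fraction of the light
angles = np.deg2rad(np.arange(0, 180, 10))
e = 0.5 * (1.0 - p) + p * np.cos(angles) ** 2

D = degree_of_polarization(e.max(), e.min())  # D recovers the polarized fraction p
```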
The approach described in [7], measuring the FRD by injecting collimated light at different angles and observing the far-field annulus, does not work here: the method is assumed to work only for round fibers, and, moreover, it is not possible to illuminate only one core of an array assembly with collimated light. Therefore, another method has been developed, which is described here. The schematic of the setup is shown in Figure 10. Two fiber-coupled lasers with wavelengths of 633 nm and 1620 nm have been used. First, the laser beam is collimated with lens L1. Since the lens is smaller than the beam diameter at this point, it also acts as an aperture, resulting in a more homogeneous intensity distribution. The beam then passes through a variable aperture and a polarizer and is focused, diffraction limited, onto the center of the fiber core with lens L2. The main image sensor is placed at a fixed distance l from the fiber output surface. Image sensor 2 in combination with L3 is used for the alignment of the fiber core and focus position. The input numerical aperture NA_in is defined by the aperture diameter and the focal length of lens L2. The numerical aperture of the light leaving the fiber, NA_out, is determined with the image sensor. An example image is shown in Figure 11a. The diameter of the spot - which is needed to determine the NA - cannot be obtained easily due to strong interference effects (coherent light is used). The approach described here provides the most relevant result for the optical design of a spectrometer: it describes the NA cone in which a defined percentage of the energy is emitted. To calculate the cone, first a dark image - taken without the laser - is subtracted from the image. This way, the influence of sensor noise and ambient light is reduced. Then the center point of the spot is determined using a broad Gaussian blur filter.
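The remaining evaluation (per-pixel NA, energy accumulation, intersection with the energy limit) can be sketched as follows. Sorting the pixels by NA replaces the explicit histogram binning, an intensity centroid stands in for the Gaussian-blur center detection, and the synthetic Gaussian spot is only a stand-in for a real dark-subtracted far-field image:

```python
import numpy as np

def na_out(image, l, pixel_pitch, limit=0.9):
    """NA of the cone containing `limit` of the total emitted energy.

    image: dark-subtracted far-field image, l: distance between fiber output
    and sensor, pixel_pitch: pixel size (same unit as l).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = image.sum()
    # spot center via intensity centroid (stand-in for the blur-based detection)
    cy = (ys * image).sum() / total
    cx = (xs * image).sum() / total
    # per-pixel NA from the radial distance r:
    # NA = sin(arctan(r / l)) = r / sqrt(r^2 + l^2)
    r = np.hypot(ys - cy, xs - cx) * pixel_pitch
    na = (r / np.sqrt(r ** 2 + l ** 2)).ravel()
    # accumulate the energy from low to high NA and find the limit crossing
    order = np.argsort(na)
    cum = np.cumsum(image.ravel()[order]) / total
    return na[order][np.searchsorted(cum, limit)]

# synthetic far-field spot: a 2D Gaussian holds 90 % of its energy
# within r = sigma * sqrt(2 * ln 10) ~ 2.146 * sigma
ys, xs = np.mgrid[0:256, 0:256]
spot = np.exp(-((ys - 128.0) ** 2 + (xs - 128.0) ** 2) / (2 * 20.0 ** 2))
na90 = na_out(spot, l=1000.0, pixel_pitch=1.0)
```

For this synthetic spot (sigma = 20 px, l = 1000 px) the expected result is roughly 42.9/1000, i.e. an NA_out near 0.043.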
For each pixel, the NA value under which the emitted light hits it is calculated, as illustrated in Figure 11b. Only the distance l and the length r, the distance from the center point to the pixel, are required: NA = sin(arctan(r / l)) = r / sqrt(r^2 + l^2). A histogram-like diagram is created by binning the NA values and adding up the corresponding energy of the pixels. This is shown as an example in Figure 12a. The normalized height of each bar stands for the energy that is emitted in the NA range represented by the width of the bar. By accumulating the energy from low to high NA, a graph as shown in Figure 12b is created. It shows how much energy is emitted within a cone with the NA shown on the x-axis. A final NA_out is obtained from the intersection of the curve with a defined limit. For this example, the orange lines show that 90 % of the total energy is emitted in a cone with an NA_out of 0.18. Measurements with different aperture settings give the results for a 90 % limit presented in Figure 12c and Figure 12d. The solid lines show the 633 nm results and the dashed lines the 1620 nm results. A perfect result, which means an unchanged cone divergence, is indicated by the grey dashed line. It turns out that:
6. CONCLUSION

The measurement campaign shows that fiber-based devices are well suited for imaging spectroscopy. A spatial homogenization of the light intensity is achieved for both investigated wavelengths and with all devices. Due to remaining coherence effects, a better result is obtained for the NIR wavelength than for the SWIR wavelength. Some fibers show small dips and peaks as well as gradients across the core that must be considered. Compared to mirror-based devices, fiber-based devices have the advantage that they depolarize the light, an effect which increases with the fiber length. Furthermore, they are less limited in length and can be bent. During the design, attention must be paid to deviations in geometry, transmission losses and the broadening of the beam cone.

REFERENCES

Sierk B., Caron J., Löscher A., Meijer Y., Bézy J.-L., Buchwitz M., Bovensmann H.,
“The CarbonSat candidate mission: imaging greenhouse gas concentrations from space,” Proc. SPIE, 9218F-1–9218F-16 (2014).
Guldimann B., Minoglou K., “Smart slit assembly for high-resolution spectrometers in space,” Proc. SPIE, 97540B-1–97540B-10.
Caron J., Sierk B., Bezy J.-L., Loescher A., Meijer Y., “The CarbonSat candidate mission: radiometric and spectral performances over spatially heterogeneous scenes,” Proc. SPIE, 105633J-2–105633J-9 (2014).
Yokoya N., Miyamura N. and Iwasaki A., “Preprocessing of hyperspectral imagery with consideration of smile and keystone properties,” Proc. SPIE, 78570B-1–78570B-9 (2010).
European Machine Vision Association, “EMVA Standard 1288 – Standard for Characterization of Image Sensors and Cameras,” (2016).
Burns W. K., Moeller R. P., Chen L., “Depolarization in a Single-Mode Optical Fiber,” Journal of Lightwave Technology, LT-1(1), 44–50 (1983). https://doi.org/10.1109/JLT.1983.1072087
Zhang K., Zheng J. R., Saunders W., “High numerical aperture multimode fibers for prime focus use,” Proc. SPIE, 99125J-1–99125J-11 (2016).
Avila G., “FRD and scrambling properties of recent non-circular fibres,” Proc. SPIE 8446, 84469L-1–84469L-7 (2012).
Avila G., Singh P., “Optical fiber scrambling and light pipes for high accuracy radial velocities measurements,” Proc. SPIE 7018, 70184W-1–70184W-7 (2008).