Spectral and polarization imaging (SPI) is an emerging sensing method that permits the analysis of both the spectral and polarization information of a scene. Existing acquisition systems are limited by several factors, such as their space requirements, their inability to capture the information quickly, and their high cost. We propose an SPI acquisition system with a high spatial resolution that combines six spectral channels and four polarization channels. The optical setup employs two color-polarization filter array cameras and a pair of bandpass filters. We define the processing pipeline, consisting of preprocessing, geometric calibration, spectral calibration, and data transformation. We show that, around specular highlights, the spectral reconstruction can be improved by filtering out the polarized intensity. We provide a database of 28 spectropolarimetric scenes with different materials for future simulation and analysis by the research community.
1. Introduction

Conventional color image sensors, either charge-coupled device or complementary metal oxide semiconductor, acquire images in three different channels: red, green, and blue (RGB). In colorimetry or spectrophotometry, these devices can be inaccurate due to their low spectral resolution.1 Multispectral imaging (MSI) and hyperspectral imaging (HSI) aim at capturing images with more than three channels across the electromagnetic spectrum. These techniques can enhance the accuracy of the spectral and color estimation of a surface.2 The main difference between MSI and HSI is the number of sensing channels in a given spectral range. HSI typically has a higher spectral resolution but is an expensive technique that suffers from low spatial resolution, and it is mainly encountered in remote sensing applications.3 MSI is a trade-off4,5 that is more appropriate for a broader range of applications, such as medical imaging6 or quality inspection.7

The polarization property is another light attribute that complements intensity and frequency. Human eyes cannot differentiate polarization states without extra devices. Polarization provides information about the direction of the transverse electric field that comprises the electromagnetic radiation. Light can be fully polarized, partially polarized, or unpolarized. The predominant type of polarization can be categorized as linear, circular, or elliptical. Polarization analysis can model the relationship between the polarization state of the light and its physical interaction with matter after reflection, refraction, or transmission.
Polarization imaging offers the possibility of capturing such data in 2D and is useful in computer vision, e.g., to separate the diffuse and specular components,8,9 analyze the spatial variation of the index of refraction,10–12 or estimate surface normals.13,14 Some insects, such as crickets or butterflies, have developed a vision system that perceives both spectral and polarization signals. This capacity helps them to catch their prey or to navigate.15 The combination of both modalities has gained interest in various applications, such as canopy and ice detection,16,17 medical imaging,18 or object tracking.19 Spectral polarization imaging (SPI) also plays an effective role in imaging under critical weather or underwater conditions.20,21 Based on a recent review that we conducted,22 snapshot SPI acquisition systems can be categorized as division-of-amplitude23–25 or polarization grating devices.26–28 Table 1 summarizes recent practical devices from the state of the art. Many methods have significant complexity and require modulation processes, high-cost optics, or moving parts that reduce the sampling resolution. Some imaging setups are also bulky in order to accommodate the optical paths. We believe that a high-resolution and simpler spectropolarimetric imaging system is still to be investigated.

Table 1. Comparison of the existing and proposed snapshot SPI systems.
Note: QWP, quarter-wave plate; WP, Wollaston prism; PFA, polarization filter array; CPFA, color polarization filter array; CFA, color filter array.

Polarization filter array (PFA) and color polarization filter array (CPFA) sensors, such as the Sony IMX250 MZR or IMX250 MYR,33 have been integrated by several camera manufacturers into affordable commercial devices. However, these sensors do not have sufficient spectral resolution for accurate spectral sensing, as they capture only three spectral channels. Based on these devices, this paper proposes a compact and cost-effective setup capable of snapshot acquisition of multispectral polarization data. Compared with the other snapshot systems presented in Table 1, this work has the advantage of a high spatial resolution, with a relatively good spectral resolution over the full visible range.

The paper is organized as follows. In Sec. 2, we present the hardware and the preprocessing steps of the developed prototype. In Sec. 3, we define the procedure to calibrate the spectropolarimetric data. We then describe the transformation of the data in Sec. 4, before providing a discussion in Sec. 5 and a conclusion in Sec. 6.

2. Hardware and Preprocessing

2.1. Hardware

The proposed SPI setup consists of two Triton TRI050S-QC 5.0 MP color polarization cameras, manufactured by Lucid Vision Labs. These cameras embed the Sony IMX250 MYR sensor,33 the properties of which are shown in Fig. 1(a). This is a division-of-focal-plane sensor, in which each pixel senses one spectral channel and one polarization channel. The filter arrays are composed of three spectral channels (R, G, and B) spatially organized in a Quad-Bayer arrangement34 and four polarization channels (0 deg, 45 deg, 90 deg, and 135 deg) arranged in a Chun pattern35 [see Fig. 1(b)]. As shown in Fig. 1(d), the two cameras are in a stereo configuration. In front of each camera, a lens with a fixed focal length of 12 mm is mounted.
They are configured not to perform any analog/digital gain or white balance corrections, to ensure that the raw data are linear and not corrupted by any preprocessing.

MSI usually requires complex and expensive acquisition devices. One can reduce the complexity by assuming that the reflectance spectrum is smooth across the visible wavelengths and thus by limiting the number of spectral channels. Examples in the literature demonstrate that MSI in the visible can be achieved using six channels with a relatively good spectral estimation.36 A practical method is to use conventional RGB cameras combined with a set of spectral bandpass filters.1,37,38 This is usually done in multiple shots, placing one filter at a time. We use a similar technique by combining each of the two cameras with a fixed bandpass filter, so that the two cameras have different spectral sensitivities. The first bandpass filter is a blue–green BG39 filter, and the second one is a yellow GG475 filter, both manufactured by Schott Glass Technologies. We selected the same filters as those of Berns et al.37 The bandpass filters are directly mounted on the objective lenses. From these combinations, we obtain six spectral channels in the visible range. Figure 2 shows the spectral sensitivities of the global system, along with the acquisition configuration. A uniform, unpolarized light source is positioned approximately normal to the base of the configuration. In the next sections, we describe the processing of the images from the setup. The steps are shown in Fig. 3.

2.2. Preprocessing

First, two preprocessing steps are applied to each of the raw images: a demosaicking algorithm and a flat-field correction.

2.2.1. Demosaicking

CPFA images have a resolution of 2448 × 2048 pixels, with each sensor pixel sensing only one spectral band among three and one polarization direction among four.
As with color filter array (CFA) images, the spatial resolution of CPFA images needs to be reconstructed to avoid misinterpretation in channel registration for image analysis. This is especially important because CPFA images are composed of 12 channels, so the spatial distribution of channels is sparser than that of a CFA. Color and/or polarization artifacts may occur at the edges of objects. A dedicated demosaicking algorithm is therefore required to recover the missing color and polarization samples at each pixel position. To this end, we employ a recent state-of-the-art demosaicking algorithm dedicated to CPFA, edge-aware residual interpolation,39 adapted to the specific spatial arrangement of the Sony IMX250 MYR sensor. The full-resolution output is thus a 12-channel image. We denote the pixel response I(c, θ), where c indexes the spectral channel with c ∈ {R, G, B} and θ the polarization channel with θ ∈ {0 deg, 45 deg, 90 deg, 135 deg}.

2.2.2. Flat-field correction

The linearity of the cameras was verified using the procedure described in Ref. 40, and we found that there is no need to correct for nonlinearity. Nevertheless, fixed-pattern noise, i.e., the dark noise and the spatial nonuniformity of the illuminant, has to be characterized and corrected. The dark noise is measured by capturing an image with the cap on the camera (called the dark image), producing a dark uniform field. To correct the spatial nonuniformity of the lighting, an image of the X-Rite ColorChecker white balance chart is acquired (called the white image), which produces a white uniform field.
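From the dark and white fields above, the per-channel correction can be sketched as follows. This is a minimal numpy sketch, not the authors' released MATLAB code; the function name is ours, and the normalization by the mean white-minus-dark span follows the standard flat-field formulation, which we assume here:

```python
import numpy as np

def flat_field_correct(img, white, dark, eps=1e-6):
    """Flat-field correction of one demosaicked channel.

    img, white, dark: 2D arrays holding the scene, white-field, and
    dark-field responses of the same channel. The pixel response is
    normalized by the local (white - dark) span, then rescaled by the
    mean span so that corrected values stay in the original range.
    """
    span = np.clip(white - dark, eps, None)      # avoid division by zero
    mean_span = (white - dark).mean()
    return (img - dark) / span * mean_span

# Toy example: a vignetted white field is flattened back to a constant.
dark = np.full((4, 4), 2.0)
white = np.array([[10, 12, 12, 10]] * 4, dtype=float) + dark
corrected = flat_field_correct(white, white, dark)
# every pixel of the corrected white field equals the mean span (11.0)
```

Correcting the white image by itself is a useful sanity check: any spatial nonuniformity of the lighting is removed, leaving a constant field.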
The flat-field correction method is from Hardeberg41 and is implemented in the Colour toolbox from Westland et al.40 The correction is done for each pixel (x, y) of the demosaicked image and for each channel k as follows:

I'_k(x, y) = [I_k(x, y) − d_k(x, y)] / [w_k(x, y) − d_k(x, y)] × (w̄_k − d̄_k),    (1)

where w_k(x, y) and d_k(x, y) are the pixel responses of channel k in the white and dark images, respectively, and w̄_k and d̄_k are the mean values over all pixels in the white and dark images, respectively.

3. Calibration

Two calibrations are done after preprocessing: (1) a geometric stereo calibration to establish the correspondence between the image pair from the two cameras and (2) a spectral calibration to estimate the reflectance of the scene. (Using the polarimetric characterization in Ref. 42, we found that the polarization measurement is near ideal for these sensors, with very high extinction ratios of the micropolarizers, so there is no need to calibrate polarimetrically.)

3.1. Geometric Stereo Calibration

The stereo camera calibration aims at finding a geometric transform between the two cameras in world coordinates, mathematically modeled by a projection matrix. The projection matrix consists of a pair of intrinsic and extrinsic matrices, which encode the camera position, orientation, focal length, and lens distortion coefficients. These parameters are determined by taking multiple image pairs of a checkerboard with known dimensions, oriented at different angles. Then, a rectification step projects the images onto a common image plane in such a way that corresponding points are located on the same row.43 This is followed by the disparity map computation through semiglobal matching44 to find the displacement between conjugate pixels in the stereo image pair. A single channel is used for stereo matching and the disparity map computation. After matching, we recombine the 12 channels of the left and right cameras (with the BG39 and GG475 filters, respectively) to construct an image with a total of 24 channels.
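On rectified images, the disparity search reduces to a one-dimensional scan along each row. The sketch below is a naive sum-of-squared-differences block matcher, a deliberately simplified stand-in for the semiglobal matching used in the pipeline, only to illustrate what the disparity map encodes; all names are ours:

```python
import numpy as np

def block_match_disparity(left, right, max_disp=4, win=3):
    """Naive SSD block matching on a rectified grayscale image pair.

    For each pixel of the left image, the same row of the right image
    is searched over disparities 0..max_disp, keeping the shift with
    the lowest sum of squared differences.
    """
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.sum((patch - right[y - r:y + r + 1,
                                           x - d - r:x - d + r + 1]) ** 2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Textured toy scene shifted by 2 px between the two views.
right = np.random.default_rng(0).random((10, 24))
left = np.roll(right, 2, axis=1)
disp = block_match_disparity(left, right)
# interior pixels recover the 2 px shift
```

The toy scene is fully textured, so the match is unambiguous; on flat or specular regions (discussed in Sec. 5) the cost curve has no clear minimum and the disparity becomes unreliable.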
With this optical configuration (baseline, working distance, and focal length), the usable spatial resolution is 2.6 MP. We note the pixel response I(m, θ), where m indexes the spectropolarimetric channel with m ∈ {1, …, 6} and θ ∈ {0 deg, 45 deg, 90 deg, 135 deg}.

3.2. Spectral Calibration

3.2.1. Spectral reconstruction method

The spectral properties of surfaces can be approximated by a few basis vectors and described by a low-dimensional linear model.45 Here, we describe the spectral calibration used to estimate the reflectance data of a scene from the vector containing the digital values. We use the method based on linear regression that links the spectral reflectance to the camera responses directly46 as follows:

r̂ = W c,    (2)

where the matrix W is a reconstruction operator and c is a vector containing the intensities measured by the spectral system. In our case, c is of size 6 × 1 and contains the camera intensities S0(m), where m ∈ {1, …, 6}. This is equivalent to calculating the first Stokes component47 from I(m, θ) as follows:

S0(m) = [I(m, 0 deg) + I(m, 45 deg) + I(m, 90 deg) + I(m, 135 deg)] / 2.    (3)

Here, W is estimated using a set of reference spectra, which are previously measured by a spectrometer. We selected the X-Rite Macbeth ColorChecker Passport (MCCPP), which is composed of two main charts with known spectra: (1) the ColorChecker classic chart used for training [24 patches, see Fig. 4(a)] and (2) the creative enhancement chart used for testing [26 patches, see Fig. 4(c)]. A visualization of the reflectance factors for all patches is shown in Figs. 4(b) and 4(d). We assume that the training set is representative enough of the surface reflectances that will be estimated a posteriori by the system. Another assumption is that the reflection from the charts is purely diffuse, so they do not reflect any specular component.
We call R_train and R_test the sets of known training and testing reflectance data, respectively, which are obtained from the Spectral Library of the Chromaxion website.48 The calibration of W is done using pseudoinversion as follows:

W = R_train C_train⁺,    (4)

where the ⁺ operator is the right Moore–Penrose pseudoinverse, C_train is a 6 × 24 matrix formed by column vectors containing the camera signals (averaged over a small region of each patch) of the 24 patches, R_train is a 36 × 24 matrix with the reference reflectance factors of the patches from 380 to 730 nm with a step of 10 nm, and W is the resulting 36 × 6 estimate of the reconstruction operator.

3.2.2. Spectral estimation evaluation

To evaluate the performance of the spectral estimation, we apply the trained linear model to a set of camera responses from the creative enhancement chart [see Fig. 4(c)]. Thus, the spectral estimation is done on a set of 26 patches, different from the set used for training. The computation is performed on C_test, which is a 6 × 26 matrix containing the averaged camera responses, as

R̂_test = W C_test.    (5)

To quantify the reconstruction error, we compute the goodness-of-fit coefficient (GFC) for each estimated patch with reflectance vector r̂. This gives a scalar value, computed as follows:

GFC = |Σ_{i=1..N} r̂(λ_i) r(λ_i)| / (‖r̂‖2 ‖r‖2),    (6)

where N is the total number of samples in the considered wavelength range and ‖·‖2 is the two-norm of a vector. A GFC of 0 indicates a bad fit, whereas 1 is a perfect fit.

Figure 4(e) shows the comparison between the estimated and measured (reference) reflectance spectra of the 26 patches of the creative enhancement chart. We can see that the estimated spectra are close to the reference data. The worst GFC values occur especially when high reflectances are encountered (patches 1, 7, and 8 from the first line of patches). This can be due to the relatively low sensitivities (and thus a low SNR) of our system in this range of wavelengths. As a comparison, we computed the GFC on the test chart using either three channels or six channels. We found that our six-channel system increases the GFC by 0.16 on average over the 26 patches.
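The calibration by pseudoinversion and the GFC evaluation above amount to a pseudoinverse, a matrix product, and a normalized correlation. A minimal numpy sketch with synthetic data follows; the dimensions mirror the paper (36 wavelength samples, 6 channels, 24 training and 26 test patches), while the polynomial reflectance basis and random sensitivities are our assumptions, kept noiseless so that the low-dimensional spectra are recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl = 36                     # 380 to 730 nm in 10 nm steps

# Synthetic stand-ins: smooth reflectances built from a 4-vector
# polynomial basis, and a random 6-channel sensitivity matrix S.
basis = np.stack([np.linspace(0, 1, n_wl) ** k for k in range(4)])  # 4 x 36
R_train = basis.T @ (rng.random((4, 24)) / 4)   # 36 x 24 training patches
R_test = basis.T @ (rng.random((4, 26)) / 4)    # 36 x 26 test patches
S = rng.random((6, n_wl))
C_train, C_test = S @ R_train, S @ R_test       # 6 x 24 and 6 x 26 signals

# Calibration by pseudoinversion, then estimation on held-out patches.
W = R_train @ np.linalg.pinv(C_train)           # 36 x 6 reconstruction operator
R_hat = W @ C_test

def gfc(r_hat, r):
    """Goodness-of-fit coefficient between estimated and reference spectra."""
    return abs(r_hat @ r) / (np.linalg.norm(r_hat) * np.linalg.norm(r))

scores = [gfc(R_hat[:, j], R_test[:, j]) for j in range(26)]
# noiseless spectra lying in a 4D subspace are recovered exactly, GFC ≈ 1
```

With real camera signals, noise and the mismatch between the training subspace and the actual scene reflectances lower the GFC, which is exactly what the evaluation on the held-out creative enhancement chart quantifies.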
4. Data Transform and Database

4.1. Data Transform

Here, we transform the calibrated data using three selected representations: (1) the Stokes formalism, (2) the reflectance computed from polarization-filtered intensities, and (3) the color image.

4.1.1. Stokes

The Stokes parameters are determined from an average of intensity measurements over area, wavelength, and solid angle. If we consider a multispectral system with relatively narrow bands, the spectral dependence of the polarization information can be considered additional useful information, so that each channel senses a different Stokes vector.49 Thus, from I(m, θ), we estimate the Stokes vector at each pixel position and for each spectral channel m as

S(m) = [S0(m), S1(m), S2(m)]ᵀ, with S1(m) = I(m, 0 deg) − I(m, 90 deg) and S2(m) = I(m, 45 deg) − I(m, 135 deg),    (7)

where S0(m) is the intensity component computed from Eq. (3), S1(m) is the difference between the intensities measured through the 0 deg and 90 deg polarizers, and S2(m) is the difference of the intensities through the 45 deg and 135 deg polarizers. The S3 component is not considered in this work because we can sense only linear polarization.

The degree of linear polarization (DOLP) represents the amount of linear polarization in the light beam. It takes a value between zero for nonpolarized light and one for totally linearly polarized light, with intermediate values referring to partial polarization:

DOLP = sqrt(S1² + S2²) / S0.    (8)

The azimuth angle of linear polarization (AOLP) is also computed from the Stokes components. It represents the angular orientation of the main axis of the polarization ellipse with respect to the chosen angular reference of the system:

AOLP = (1/2) arctan(S2 / S1).    (9)

In Fig. 5, we plot the luminance and DOLP of the 24 MCCPP patches used for spectral calibration. First, we can see that there is a low but significant polarization signature for the patches. Second, we can see that there is an inverse relationship between luminance and DOLP, especially when looking at the values of the last six neutral patches (19 to 24).
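The Stokes, DOLP, and AOLP computations above can be sketched per pixel and per spectral channel as follows. This is a sketch with our own function names; arctan2 is our choice to keep the angle quadrant-correct, where the formulation above uses the arctangent of S2/S1:

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes components from the four micro-polarizer channels."""
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization, 0 (unpolarized) to 1 (fully polarized)."""
    return np.sqrt(s1**2 + s2**2) / s0

def aolp(s1, s2):
    """Azimuth of linear polarization in radians."""
    return 0.5 * np.arctan2(s2, s1)

# Fully linearly polarized light at 0 deg:
# i0 = 1, i90 = 0, and the diagonal channels each see half the energy.
s0, s1, s2 = stokes_from_intensities(1.0, 0.5, 0.0, 0.5)
# dolp(s0, s1, s2) -> 1.0, aolp(s1, s2) -> 0.0
```

The functions accept scalars or full-resolution numpy arrays alike, so the same code runs per pixel over the 12-channel demosaicked image.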
This can be explained by the fact that darker patches have a much smaller diffuse component (and thus a relatively higher specular component, which is more polarized) due to a greater absorption of light. This effect is also shown in Fig. 6, in which darker balls exhibit a higher degree of polarization. A visualization of the Stokes transform on the scene called "resin balls" is shown in Fig. 6.

4.1.2. Reflectance

The reflection of light upon a surface can be analyzed with an additive model, in which the total intensity is a diffuse component (resulting from the subsurface reflection phenomenon) plus a specular component (resulting from the surface reflection). Existing methods for separating the reflection components use a polarization filter rotated in front of the image sensor.8,50 It is often assumed that the diffuse component tends to be weakly polarized compared with the specular component. In our case, we believe that polarization intensity filtering can benefit the reflectance estimation when a significant specular reflection occurs. To this end, we compute the polarization-filtered intensities,51 called Id, by the following equation:

Id(m) = S0(m) − sqrt(S1(m)² + S2(m)²).    (10)

This has the effect of removing the polarized component from the total intensity S0. Thus, we can compute the spectral reflectance at any pixel position from the vector of filtered intensities Id instead of the vector of total intensities [as in Eq. (5)]. To verify the effect of removing the polarized intensity component, we compute the reflectance using either S0 or Id. We selected regions near the occluding boundaries of the objects, where specular reflection is assumed to have a greater influence on the DOLP of the reflected light. Results are shown in Fig. 7 for four regions of interest. It appears that we get a better fit when using Id in all four cases.
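Removing the linearly polarized part from the total intensity is a one-line operation per pixel and channel; a sketch (the function name is ours):

```python
import numpy as np

def filtered_intensity(s0, s1, s2):
    """Polarization-filtered intensity Id = S0 - sqrt(S1^2 + S2^2).

    Subtracting the linearly polarized magnitude leaves the weakly
    polarized (mostly diffuse) part of the signal, which can then be
    fed to the spectral reconstruction in place of S0.
    """
    return s0 - np.sqrt(s1**2 + s2**2)

# A pixel with a partially polarized specular contribution:
# the polarized magnitude sqrt(0.3^2 + 0.4^2) = 0.5 is removed from S0 = 0.8.
i_d = filtered_intensity(0.8, 0.3, 0.4)   # ~ 0.3
```

Stacking Id(m) for the six spectral channels gives the filtered counterpart of the intensity vector used by the spectral reconstruction, which is how the two variants compared in Fig. 7 differ.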
4.1.3. Color

From the per-pixel reflectance data r̂(λ), we compute the corresponding International Commission on Illumination (CIE) XYZ values as

[X, Y, Z] = Σ_λ r̂(λ) [x̄(λ), ȳ(λ), z̄(λ)],    (11)

where x̄(λ), ȳ(λ), and z̄(λ) are the CIE 1931 color matching functions (CMFs).52,53 Then, we convert the CIE XYZ values to sRGB54 for visualization. The XYZ-to-sRGB transform is typically achieved in two steps: a linear transform of the XYZ values to linear RGB values, followed by a gamma correction of 2.2 to adapt to the nonlinear behavior of monitors.

4.2. Database

We provide a database of 28 scenes with different materials, including plastic, wood, and metal. The scene called "chart" consists of custom plastic patches manufactured with a 3D printer, with a varying layer-height parameter. Figure 8 shows the database in sRGB representation, after the color transform from Sec. 4. The brand logos are removed from the database images. The images are cropped to select the relevant content of the objects. The code and database folder is organized as follows: three MATLAB scripts are provided to transform the data to Stokes, reflectance, and color images, respectively. The dataset and code are available in a GitHub repository via this link55 or can be sent on request to the corresponding author.

5. Discussion

Our stereo setup offers the possibility to perform reflectometry by fusing observations of the polarization state of light reflected off a surface from two different views.51 In a binocular stereo configuration, the specular intensity can differ between the two sensors, depending on the shape of the object under consideration and the lighting configuration. As an example, our stereo setup can be configured to optimize roughness measurement by modifying the vergence, the working distance, or the focal length of the camera lenses. In the database, this is unconstrained because the surfaces in the scenes are multiple and unknown.
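The two-step color transform (a linear XYZ-to-RGB matrix, then the transfer curve) can be sketched as follows. The matrix and the piecewise sRGB encoding are the standard IEC 61966-2-1 values, which approximate a 1/2.2 gamma; they are supplied by us and are not taken from the paper's released scripts:

```python
import numpy as np

# Linear transform from CIE XYZ (D65) to linear sRGB (IEC 61966-2-1).
M_XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """Convert an XYZ triplet (Y normalized to 1) to encoded sRGB."""
    rgb = M_XYZ_TO_RGB @ np.asarray(xyz)
    rgb = np.clip(rgb, 0.0, 1.0)
    # sRGB encoding: linear segment near black, power law elsewhere
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * rgb ** (1 / 2.4) - 0.055)

white = xyz_to_srgb([0.9505, 1.0000, 1.0890])   # D65 white point
# white is close to [1, 1, 1]
```

Mapping the D65 white point to (1, 1, 1) is a quick check that the matrix and encoding are consistent; out-of-gamut values are simply clipped here, which is a common pragmatic choice for visualization.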
However, in the case of measurements on samples with a priori knowledge (roughness range, convexity, and index of refraction), the setup can be slightly adapted.56 This is not investigated here and will be considered in future work.

Some drawbacks can come from the disparity computation, especially when imaging thick objects. From the stereo point of view, typical matching problems occur when (1) there are occlusions (no corresponding data in one of the two images), (2) there are specular highlights, or (3) a pixel belongs to a more or less flat region. To be successful, the matching algorithm needs textures (but not periodic ones); if there is nothing to really match inside an image pair, the algorithm cannot establish correspondences efficiently. Another potential factor comes from the use of an image pair taken with different spectral sensitivities.57 This produces a noisy disparity map and thus noisy Stokes/spectral/color representations of the data. We believe that an improved disparity map computation method that is spectrally invariant is still to be developed.

To our knowledge, computing the reflectance spectra from polarimetric Stokes data (S1 and S2) has never been investigated in the literature. We verified that polarization information can help the reflectance estimation, whereas previous work assumed that all of the surfaces measured by a spectral camera are mainly diffuse. Some other works used a rotated linear polarizer in front of the spectral camera to globally filter the specular reflection. In our work, this is done per pixel because we measure the state of polarization at each pixel. Nevertheless, this method is limited by the amount of polarized intensity that can be filtered, which varies with the object and lighting configuration, i.e., the angle of incidence/reflection of the light.
We believe that a dedicated separation of the diffuse/specular components using both spectral and polarization information, such as in prior work,8 can further improve the reflectance estimation for specular surfaces. This is not included in this work but can be investigated as an additional processing block in the framework of Fig. 3. This work opens perspectives in several application fields, from the stereo acquisition of spectropolarimetric images to new image processing of the data. The output of our framework, which is the transformed data (Stokes, reflectance, and color), can be extended to specific surfaces and serve as an input to computer vision tasks such as feature extraction using principal component analysis, semantic segmentation, or defect detection.

6. Conclusion

We developed a snapshot spectropolarimetric acquisition system. It has the advantages of a high spatial resolution and a relatively good spectral reconstruction precision. We designed a full processing pipeline that consists of preprocessing, geometric calibration, spectral calibration, and data transformation. We also provide a method to better estimate the reflectance of specular surfaces using a filtering process based on the Stokes components. A database of 28 high-resolution spectropolarimetric images is made available online, along with the code to transform the data. The configuration of the proposed system makes it possible to reconstruct 3D spectral information using two complementary strategies: (1) the polarization information (the so-called "shape from polarization" method) or (2) the stereo depth computation. Image processing using both polarization and stereo information is to be investigated in future work.

Acknowledgments

The authors want to thank Joël Lambert for the manufacturing of the plastic charts. This work was supported by the ANR JCJC SPIASI project, Grant No. ANR-18-CE10-0005 of the French Agence Nationale de la Recherche. The authors declare no conflicts of interest.

References
Shrestha, J. Y. Hardeberg and A. Mansouri,
“One-shot multispectral color imaging with a stereo camera,”
Proc. SPIE, 7876 787609
(2011). https://doi.org/10.1117/12.872428 PSISDG 0277-786X Google Scholar
R. S. Berns et al.,
“Multispectral-based color reproduction research at the Munsell Color Science Laboratory,”
Proc. SPIE, 3409 14
–25
(1998). https://doi.org/10.1117/12.324139 PSISDG 0277-786X Google Scholar
S. Sattar, H. A. Khan and K. Khurshid,
“Optimized class-separability in hyperspectral images,”
in IEEE Int. Geosci. and Remote Sens. Symp.,
2711
–2714
(2016). https://doi.org/10.1109/IGARSS.2016.7729700 Google Scholar
P.-J. Lapray et al.,
“Multispectral filter arrays: recent advances and practical implementation,”
Sensors, 14
(11), 21626
–21659
(2014). https://doi.org/10.3390/s141121626 SNSRES 0746-9462 Google Scholar
J.-B. Thomas et al.,
“Spectral characterization of a prototype SFA camera for joint visible and nir acquisition,”
Sensors, 16
(7), 993
(2016). https://doi.org/10.3390/s16070993 SNSRES 0746-9462 Google Scholar
M. Ewerlöf, M. Larsson and E. G. Salerud,
“Spatial and temporal skin blood volume and saturation estimation using a multispectral snapshot imaging camera,”
Proc. SPIE, 10068 1006814
(2017). https://doi.org/10.1117/12.2251928 PSISDG 0277-786X Google Scholar
J. Qin et al.,
“Hyperspectral and multispectral imaging for evaluating food safety and quality,”
J. Food Eng., 118
(2), 157
–171
(2013). https://doi.org/10.1016/j.jfoodeng.2013.04.001 JFOEDH 0260-8774 Google Scholar
S. K. Nayar, X.-S. Fang and T. Boult,
“Separation of reflection components using color and polarization,”
Int. J. Comput. Vis., 21
(3), 163
–186
(1997). https://doi.org/10.1023/A:1007937815113 IJCVEQ 0920-5691 Google Scholar
S. Tominaga and B. A. Wandell,
“Standard surface-reflectance model and illuminant estimation,”
J. Opt. Soc. Am. A, 6
(4), 576
–584
(1989). https://doi.org/10.1364/JOSAA.6.000576 Google Scholar
J. A. Martin and K. C. Gross,
“Estimating index of refraction from polarimetric hyperspectral imaging measurements,”
Opt. Express, 24
(16), 17928
–17940
(2016). https://doi.org/10.1364/OE.24.017928 Google Scholar
V. Thilak, D. G. Voelz and C. D. Creusere,
“Polarization-based index of refraction and reflection angle estimation for remote sensing applications,”
Appl. Opt., 46
(30), 7527
–7536
(2007). https://doi.org/10.1364/AO.46.007527 APOPAI 0003-6935 Google Scholar
C. P. Huynh, A. Robles-Kelly and E. Hancock,
“Shape and refractive index recovery from single-view polarisation images,”
in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit.,
1229
–1236
(2010). https://doi.org/10.1109/CVPR.2010.5539828 Google Scholar
G. A. Atkinson and E. R. Hancock,
“Recovery of surface orientation from diffuse polarization,”
IEEE Trans. Image Process., 15
(6), 1653
–1664
(2006). https://doi.org/10.1109/TIP.2006.871114 IIPRE4 1057-7149 Google Scholar
W.-C. Ma et al.,
“Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination,”
in Proc. 18th Eurogr. Conf. Rendering Tech.,
183
–194
(2007). Google Scholar
G. Horváth et al., Polarized Light in Animal Vision: Polarization Patterns in Nature, Springer Science & Business Media, Berlin, Heidelberg, New York
(2004). Google Scholar
K. Homma et al.,
“Application of an imaging spectropolarimeter to agro-environmental sciences,”
Proc. SPIE, 5234 638
–647
(2004). https://doi.org/10.1117/12.510676 PSISDG 0277-786X Google Scholar
H. Kurosaki et al.,
“Development of tunable imaging spectro-polarimeter for remote sensing,”
Adv. Space Res., 32
(11), 2141
–2146
(2003). https://doi.org/10.1016/S0273-1177(03)90535-7 ASRSDW 0273-1177 Google Scholar
A. Pierangelo et al.,
“Multispectral mueller polarimetric imaging detecting residual cancer and cancer regression after neoadjuvant treatment for colorectal carcinomas,”
J. Biomed. Opt., 18
(4), 046014
(2013). https://doi.org/10.1117/1.JBO.18.4.046014 Google Scholar
Y.-Q. Zhao et al.,
“Object separation by polarimetric and spectral imagery fusion,”
Comput. Vis. Image Understanding, 113
(8), 855
–866
(2009). https://doi.org/10.1016/j.cviu.2009.03.002 CVIUF4 1077-3142 Google Scholar
Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar,
“Instant dehazing of images using polarization,”
in Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit.,
(2001). https://doi.org/10.1109/CVPR.2001.990493 Google Scholar
D. M. Kocak et al.,
“A focus on recent developments and trends in underwater imaging,”
Mar. Technol. Soc. J., 42
(1), 52
–67
(2008). https://doi.org/10.4031/002533208786861209 MTSJBB 0025-3324 Google Scholar
S. Sattar et al.,
“Review of spectral and polarization imaging systems,”
Proc. SPIE, 11351 113511Q
(2020). https://doi.org/10.1117/12.2555745 PSISDG 0277-786X Google Scholar
X. Tu et al.,
“Division of amplitude RGB full-stokes camera using micro-polarizer arrays,”
Opt. Express, 25
(26), 33160
–33175
(2017). https://doi.org/10.1364/OE.25.033160 Google Scholar
T. Mu et al.,
“Snapshot linear-stokes imaging spectropolarimeter using division-of-focal-plane polarimetry and integral field spectroscopy,”
Sci. Rep., 7 42115
(2017). https://doi.org/10.1038/srep42115 Google Scholar
C. Zhang, N. Quan and T. Mu,
“Stokes imaging spectropolarimeter based on channeled polarimetry with full-resolution spectra and aliasing reduction,”
Appl. Opt., 57
(21), 6128
–6134
(2018). https://doi.org/10.1364/AO.57.006128 APOPAI 0003-6935 Google Scholar
J. Kim and M. J. Escuti,
“Demonstration of a polarization grating imaging spectropolarimeter (PGIS),”
Proc. SPIE, 7672 80
–88
(2010). https://doi.org/10.1117/12.849758 PSISDG 0277-786X Google Scholar
J. Kim and M. J. Escuti,
“Snapshot imaging spectropolarimeter utilizing polarization gratings,”
Proc. SPIE, 7086 708603
(2008). https://doi.org/10.1117/12.795719 PSISDG 0277-786X Google Scholar
M. Alouini et al.,
“Near-infrared active polarimetric and multispectral laboratory demonstrator for target detection,”
Appl. Opt., 48
(8), 1610
–1618
(2009). https://doi.org/10.1364/AO.48.001610 APOPAI 0003-6935 Google Scholar
M. Garcia et al.,
“Bio-inspired color-polarization imager for real-time in situ imaging,”
Optica, 4
(10), 1263
–1271
(2017). https://doi.org/10.1364/OPTICA.4.001263 Google Scholar
C. Fu et al.,
“Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector,”
J. Opt. Soc. Am. A, 32
(11), 2178
–2188
(2015). https://doi.org/10.1364/JOSAA.32.002178 Google Scholar
T.-H. Tsai and D. J. Brady,
“Coded aperture snapshot spectral polarization imaging,”
Appl. Opt., 52 2153
–2161
(2013). https://doi.org/10.1364/AO.52.002153 APOPAI 0003-6935 Google Scholar
F. Soldevila et al.,
“Single-pixel polarimetric imaging spectrometer by compressive sensing,”
Appl. Phys. B, 113
(4), 551
–558
(2013). https://doi.org/10.1007/s00340-013-5506-2 Google Scholar
Sony,
“Polarization image sensor,”
(2018). Google Scholar
T. Okawa et al.,
“A 1/2inch 48M all PDAF CMOS image sensor using quad Bayer coding 2×2OCL with 1.0lux minimum AF illuminance level,”
in Int. Electron Devices Meeting,
16
–23
(2019). https://doi.org/10.1109/IEDM19573.2019.8993499 Google Scholar
C. S. Chun, D. L. Fleming and E. Torok,
“Polarization-sensitive thermal imaging,”
Proc. SPIE, 2234 275
–286
(1994). https://doi.org/10.1117/12.181025 PSISDG 0277-786X Google Scholar
A. Alsam, D. Connah and J. Hardeberg,
“Multispectral imaging: How many sensors do we need?,”
J. Imaging Sci. Technol., 50
(1), 45
–52
(2006). https://doi.org/10.2352/J.ImagingSci.Technol.(2006)50:1(45) JIMTE6 1062-3701 Google Scholar
R. S. Berns et al.,
“Practical spectral imaging using a color-filter array digital camera,”
(2006). Google Scholar
J. Klein and B. Hill,
“Multispectral stereo acquisition using two RGB cameras and color filters,”
in 18th Workshop Farbbildverarbeitung,
89
–96
(2018). Google Scholar
M. Morimatsu et al., "Monochrome and color polarization demosaicking using edge-aware residual interpolation," in IEEE Int. Conf. Image Process. (ICIP), 2571–2575 (2020). https://doi.org/10.1109/ICIP40778.2020.9191085
S. Westland, C. Ripamonti, and V. Cheung, Computational Colour Science Using MATLAB, John Wiley & Sons, Inc., Hoboken, New Jersey (2012).
J. Y. Hardeberg, "Acquisition and reproduction of color images: colorimetric and multispectral approaches" (2001).
Y. Giménez et al., "Calibration algorithms for polarization filter array camera: survey and evaluation," J. Electron. Imaging 29(4), 041011 (2020). https://doi.org/10.1117/1.JEI.29.4.041011
G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, Inc., Sebastopol, California (2008).
H. Hirschmuller, "Accurate and efficient stereo processing by semi-global matching and mutual information," in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit. (CVPR), 807–814 (2005). https://doi.org/10.1109/CVPR.2005.56
D. H. Marimont and B. A. Wandell, "Linear models of surface and illuminant spectra," J. Opt. Soc. Am. A 9, 1905–1913 (1992). https://doi.org/10.1364/josaa.9.001905
D. C. Day, "Filter selection for spectral estimation using a trichromatic camera" (2003).
G. G. Stokes, On the Composition and Resolution of Streams of Polarized Light from Different Sources, Vol. 3, 233–258, Cambridge University Press, Cambridge (2009).
G. Myhre et al., "Liquid crystal polymer full-Stokes division of focal plane polarimeter," Opt. Express 20, 27393–27409 (2012). https://doi.org/10.1364/OE.20.027393
L. B. Wolff and T. E. Boult, "Constraining object features using a polarization reflectance model," IEEE Trans. Pattern Anal. Mach. Intell. 13(7), 635–657 (1991). https://doi.org/10.1109/34.85655
J. Riviere et al., "Polarization imaging reflectometry in the wild," ACM Trans. Graph. 36(6), 1–14 (2017). https://doi.org/10.1145/3130800.3130894
J. Guild and J. E. Petavel, "The colorimetric properties of the spectrum," Philos. Trans. R. Soc. London Ser. A 230(681–693), 149–187 (1931). https://doi.org/10.1098/rsta.1932.0005
W. D. Wright, "A re-determination of the trichromatic coefficients of the spectral colours," Trans. Opt. Soc. 30, 141–164 (1929). https://doi.org/10.1088/1475-4878/30/4/301
"Multimedia systems and equipment—colour measurement and management—Part 2-1: Colour management—default RGB colour space—sRGB," Geneva, CH (1999).
"Snapshot spectropolarimetric imaging using a pair of filter array cameras," https://www.ensisa.uha.fr/foti-image-databases-polarisation-open#OE2022
S. Nayar, K. Ikeuchi, and T. Kanade, "Surface reflection: physical and geometrical perspectives," IEEE Trans. Pattern Anal. Mach. Intell. 13(7), 611–634 (1991). https://doi.org/10.1109/34.85654
R. Shrestha, "Multispectral imaging: fast acquisition, capability extension, and quality evaluation," University of Oslo (2014).
Biography

Sumera Sattar is a PhD student at the University of Haute-Alsace, France. She received her MS degree in electrical engineering from the Institute of Space and Technology, Pakistan, in 2016. Her current research interests include polarization, multispectral imaging, and machine learning.

Laurent Bigué received his engineering degree in physics from the Université de Strasbourg in 1992 and his PhD in optical and electrical engineering from the Université de Haute-Alsace in 1996. He was appointed a professor at ENSISA (ECE Department, Université de Haute-Alsace) in 2005 and has been the dean of ENSISA since 2012. His major research interests include optical signal processing, polarimetry, and optical metrology. He is a member of SFO, EOS, OSA, and SPIE.