Snapshot spectropolarimetric imaging using a pair of filter array cameras
Abstract

Spectral and polarization imaging (SPI) is an emerging sensing method that permits the analysis of both the spectral and polarization information of a scene. Existing acquisition systems are limited by several factors, such as their space requirements, their inability to capture information quickly, and their high cost. We propose an SPI acquisition system with a high spatial resolution that combines six spectral channels and four polarization channels. The optical setup employs two color-polarization filter array cameras and a pair of bandpass filters. We define the processing pipeline, consisting of preprocessing, geometric calibration, spectral calibration, and data transformation. We show that, around specular highlights, the spectral reconstruction can be improved by filtering the polarized intensity. We provide a database of 28 spectropolarimetric scenes with different materials for future simulation and analysis by the research community.

1. Introduction

Conventional color image sensors, either charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS), acquire images in three different channels: red, green, and blue (RGB). In colorimetry or spectrophotometry, these devices can be inaccurate due to their low spectral resolution.1 Multispectral imaging (MSI) and hyperspectral imaging (HSI) aim at capturing images with more than three channels across the electromagnetic spectrum. These techniques can enhance the accuracy of the spectral and color estimation of a surface.2 The main difference between MSI and HSI is the number of sensing channels in a given spectral range. HSI typically has a higher spectral resolution but is an expensive technique that suffers from low spatial resolution, and it is mainly encountered in remote sensing applications.3 MSI is a trade-off4,5 that is more appropriate for a broader range of applications, such as medical imaging6 or quality inspection.7

The polarization property is another light attribute that complements intensity and frequency. Human eyes cannot differentiate polarization states without extra devices. Polarization provides information about the direction of the transverse electric fields that comprise the electromagnetic radiation. Light can be fully polarized, partially polarized, or unpolarized, and the predominant type of polarization can be categorized as linear, circular, or elliptical. Polarization analysis can model the relationship between the polarization state of the light and its physical interaction with matter after reflection, refraction, or transmission. Polarization imaging offers the possibility of capturing such data in 2D and is useful in computer vision, e.g., to separate the diffuse and specular components,8,9 analyze the spatial variation of the index of refraction,10–12 or estimate the surface normals.13,14

Some insects, such as crickets or butterflies, have developed a vision system that perceives both spectral and polarization signals. This capacity helps them catch prey or navigate.15 The combination of both modalities has gained interest in various applications, such as canopy and ice detection,16,17 medical imaging,18 or object tracking.19 SPI also plays an effective role in imaging under critical weather or underwater conditions.20,21

Based on a recent review that we conducted,22 snapshot SPI acquisition systems can be categorized as division-of-amplitude23–25 or polarization-grating devices.26–28 Table 1 summarizes recent practical devices from the state of the art. Many methods have significant complexity and require modulation processes, high-cost optics, or moving parts that reduce the sampling resolution. Some imaging setups are also bulky in order to accommodate the optical paths. We believe that a high-resolution, simpler spectropolarimetric imaging system remains to be investigated.

Table 1

Comparison of the existing and proposed snapshot SPI systems.

| Reference | Optical instruments | Polarization mode | Spectral range | Spectral bands | Spatial resolution |
| --- | --- | --- | --- | --- | --- |
| Tu et al.23 | 2 CPFA, QWP | Full-Stokes | Visible | 3 | 1.9 MP |
| Garcia et al.29 | PFA, Foveon's vertically stacked photodetectors | Linear-Stokes | Visible | 3 | 0.9 MP |
| Fu et al.30 | PFA, rotated prism | Linear-Stokes | 505 to 650 nm | 8 | 0.07 MP |
| Kim and Escuti26,27 | Polarization gratings, QWP | Full-Stokes | 500 to 700 nm | 51 | 0.01 MP |
| Tsai and Brady31 | Two birefringent crystals, coded aperture, relay optics | Two Stokes components | 400 to 680 nm | 19 | 1.9 MP |
| Mu et al.24 | CFA, PFA, optical mask with slits, Amici prism | Linear-Stokes | 450 to 650 nm | 3 | 0.48 MP |
| Soldevila et al.32 | Digital micromirror device | Linear-Stokes | 490 to 660 nm | 8 | 1 pixel |
| This work | Two CPFA, two bandpass filters | Linear-Stokes | Visible | 6 | 2.6 MP |

Note: QWP, quarter-wave plate; WP, Wollaston prism; PFA, polarization filter array; CPFA, color polarization filter array; CFA, color filter array.

Polarization filter array (PFA) and color polarization filter array (CPFA) sensors, such as the Sony IMX250 MZR or IMX250 MYR,33 have been integrated by several camera manufacturers in affordable commercial devices. However, these sensors do not have sufficient spectral resolution for accurate spectral sensing as they capture only three spectral channels. Based on these devices, this paper proposes a compact and cost-effective setup capable of a snapshot acquisition of multispectral polarization data. Compared with other snapshot systems presented in Table 1, this work has the advantage of a high spatial resolution, with a relatively good spectral resolution in the full visible range.

The paper is organized as follows. In Sec. 2, we present the hardware and the preprocessing steps of the developed prototype. In Sec. 3, we define the procedure to calibrate the spectropolarimetric data. We then describe the transformation of the data in Sec. 4, before providing a discussion in Sec. 5 and a conclusion in Sec. 6.

2. Hardware and Preprocessing

2.1. Hardware

The proposed SPI setup consists of two Triton TRI050S-QC 5.0 MP color polarization cameras, manufactured by Lucid Vision Labs. These cameras embed the SONY IMX250 MYR sensor,33 the properties of which are shown in Fig. 1(a). This is a division-of-focal-plane sensor, in which each pixel senses one spectral channel and one polarization channel. The filter arrays are composed of three spectral channels (c ∈ {R, G, B}) spatially organized in a Quad-Bayer arrangement34 and four polarization channels (θ ∈ {0, 45, 90, 135} deg) arranged in a Chun pattern35 [see Fig. 1(b)].

Fig. 1. The color polarization filter array (CPFA) sensor: (a) sensor characteristics and (b) spatial arrangement of the SONY IMX250 MYR sensor. Each pixel is covered by one linear polarization filter and one spectral filter.

As shown in Fig. 2(d), the two cameras are in a stereo configuration. In front of each camera, a lens with a fixed focal length of 12 mm is mounted. The cameras are configured not to perform any analog/digital gain or white balance corrections, to ensure that the raw data are linear and not corrupted by any preprocessing.

MSI usually requires complex and expensive acquisition devices. One can reduce the complexity by assuming that the reflectance spectrum is smooth across the visible wavelengths and thus by limiting the number of spectral channels. Examples in the literature demonstrate that MSI in the visible can be achieved using six channels with a relatively good spectral estimation.36 A practical method is to use conventional RGB cameras combined with a set of spectral bandpass filters.1,37,38 This is usually done in multiple shots, placing one filter at a time. We use a similar technique by combining each of the two cameras with a fixed bandpass filter, so the two cameras have different spectral sensitivities. The first bandpass filter is a blue–green BG39 filter (noted bg), and the second one is a yellow GG475 filter (noted y), both manufactured by Schott Glass Technologies. We selected the same filters as those of Berns et al.37 The bandpass filters are directly mounted on the objective lenses. From these combinations, we obtain six spectral channels in the visible range. Figure 2 shows the spectral sensitivities of the global system, along with the acquisition configuration.

Fig. 2. (a) The CPFA camera spectral sensitivities (only the 0-deg polarization channel is shown because the four polarization channels are very similar). The sensitivity is obtained by characterizing the sensor with a spectral resolution of 10 nm from 370 to 750 nm. (b) Spectral transmission of the bandpass filters, provided by the manufacturer. (c) Total spectral sensitivities of the cameras after combination. (d) Setup configuration.

A uniform, unpolarized light source is positioned approximately normal to the base of the configuration.

In the next sections, we will describe the processing of the images from the setup. The steps are shown in Fig. 3.

Fig. 3. Practical imaging pipeline.

2.2. Preprocessing

First, two preprocessing steps are applied to each raw image: a demosaicking algorithm and a flat-field correction.

2.2.1. Demosaicking

CPFA images have a resolution of 2448×2048 pixels, with each sensor pixel sensing only one spectral band among three and one polarization direction among four. As with color filter array (CFA) images, the full spatial resolution of CPFA images must be reconstructed to avoid misinterpretation in channel registration during image analysis. This is especially important because CPFA images are composed of 12 channels, so the spatial distribution of channels is sparser than that of a CFA. Color and/or polarization artifacts may occur at the edges of objects. A dedicated demosaicking algorithm is therefore required to recover the missing color and polarization samples at each pixel position. To this end, we employ a recent state-of-the-art demosaicking algorithm dedicated to CPFA, edge-aware residual interpolation,39 adapted to the specific spatial arrangement of the Sony IMX250 MYR sensor. The full-resolution output is thus a 12-channel image. We call the pixel response ρi, where i = {c, θ} indexes the channel, with c ∈ {R, G, B} and θ ∈ {0, 45, 90, 135} deg.
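For illustration, the extraction of the 12 channels from the raw mosaic can be sketched as follows. This is a minimal, naive baseline (normalized averaging), not the edge-aware residual interpolation used in our pipeline, and the 4×4 superpixel layout encoded below is an assumption for illustration that may differ from the actual IMX250 MYR ordering.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Assumed 4x4 superpixel layout (Quad-Bayer colors, Chun-pattern angles);
# the true IMX250 MYR ordering should be checked against the datasheet.
COLORS = np.array([["R", "R", "G", "G"],
                   ["R", "R", "G", "G"],
                   ["G", "G", "B", "B"],
                   ["G", "G", "B", "B"]])
ANGLES = np.array([[90, 45, 90, 45],
                   [135, 0, 135, 0],
                   [90, 45, 90, 45],
                   [135, 0, 135, 0]])

def demosaic_naive(raw):
    """Recover the 12 channels rho_i, i = {c, theta}, by normalized
    averaging over each (color, angle) class. A 5x5 window always spans
    one full 4x4 period, so every pixel sees at least one sample."""
    raw = raw.astype(float)
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    channels = {}
    for c in "RGB":
        for a in (0, 45, 90, 135):
            mask = (COLORS[yy % 4, xx % 4] == c) & (ANGLES[yy % 4, xx % 4] == a)
            num = uniform_filter(raw * mask, size=5)
            den = uniform_filter(mask.astype(float), size=5)
            channels[(c, a)] = num / den
    return channels
```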

2.2.2. Flat-field correction

The linearity of the cameras was verified using the procedure described in Ref. 40, and we found that there is no need to correct for nonlinearity. Nevertheless, the fixed-pattern noise (dark noise) and the spatial nonuniformity of the illuminant have to be characterized and corrected. The dark noise is measured by capturing an image with the cap on the camera (called the dark image), producing a uniform dark field. To correct the spatial nonuniformity of the lighting, an image of the X-Rite ColorChecker white balance chart is acquired (called the white image), which produces a uniform white field. The flat-field correction method is from Hardeberg41 and is implemented in the Colour toolbox from Westland et al.40 The correction is applied to each pixel of the demosaicked image and to each channel i as follows:

Eq. (1)

$\rho_i = \frac{\left(\overline{\rho_{i,w}} - \overline{\rho_{i,d}}\right)\left(\rho_i - \rho_{i,d}\right)}{\rho_{i,w} - \rho_{i,d}},$

where $\rho_{i,w}$ and $\rho_{i,d}$ are the pixel responses of channel i in the white and dark images, respectively, and $\overline{\rho_{i,w}}$ and $\overline{\rho_{i,d}}$ are the mean values over all pixels in the white and dark images, respectively.
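As a minimal sketch (the helper name is ours, not from the released MATLAB scripts), the correction of Eq. (1) can be written as:

```python
import numpy as np

def flat_field_correction(img, white, dark):
    """Per-pixel flat-field correction of Eq. (1) for one channel i.
    img, white, dark: float arrays of shape (H, W) holding rho_i,
    rho_i,w, and rho_i,d, respectively."""
    gain = white.mean() - dark.mean()        # mean(rho_i,w) - mean(rho_i,d)
    denom = np.maximum(white - dark, 1e-9)   # guard against division by zero
    return gain * (img - dark) / denom
```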

3. Calibration

Two calibrations are done after preprocessing: (1) a geometric stereo calibration to establish the correspondence between the image pairs from the two cameras and (2) a spectral calibration to estimate the reflectance of the scene. (Using the polarimetric characterization in Ref. 42, we found that the polarization measurement is near ideal for these sensors, with very high extinction ratios of the micropolarizers, so there is no need for a polarimetric calibration.)

3.1. Geometric Stereo Calibration

The stereo camera calibration aims at finding a geometric transform between the two cameras in a world coordinate system, which is mathematically modeled by a projection matrix. The projection matrix consists of a pair of intrinsic and extrinsic matrices, which encode the camera position, orientation, focal length, and lens distortion coefficients. These parameters are determined by taking multiple image pairs of a checkerboard with known dimensions and oriented at different angles. Then, a rectification step projects the images onto a common image plane in such a way that corresponding points are located on the same row.43 This is followed by the disparity map computation through semiglobal matching44 to find the displacement between conjugate pixels in the stereo image pair. The channel i = {G, 0 deg} is used for stereo matching and the disparity map computation.
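As an illustration, the rectification and disparity steps map directly to OpenCV; a minimal sketch follows, assuming the intrinsics (K1, D1, K2, D2) and extrinsics (R, T) come from a prior cv2.stereoCalibrate run on the checkerboard views. The SGM parameters below are illustrative, not the values used for our setup, and matching would be run on the i = {G, 0 deg} channel.

```python
import cv2
import numpy as np

def rectify_and_match(left, right, K1, D1, K2, D2, R, T):
    """Rectify an 8-bit stereo pair and compute a disparity map with
    semi-global matching (Hirschmuller)."""
    size = (left.shape[1], left.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR)
    right_r = cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    # OpenCV returns fixed-point disparities scaled by 16
    disp = sgbm.compute(left_r, right_r).astype(np.float32) / 16.0
    return left_r, right_r, disp
```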

After matching, we recombine the 12 channels of the left and right cameras (with the bg and y filters, respectively) to construct an image with a total of 24 channels. With this optical configuration (baseline, working distance, and focal length), the usable spatial resolution is 1330×1920 pixels (2.6 MP). We denote the pixel response ρj, where j = {s, θ} indexes the spectropolarimetric channel, with s ∈ {Rbg, Ry, Gbg, Gy, Bbg, By} and θ ∈ {0, 45, 90, 135} deg.

3.2. Spectral Calibration

3.2.1. Spectral reconstruction method

The spectral properties of surfaces are approximated by a few basis vectors and described by a low-dimensional linear model.45 Here, we describe the spectral calibration to estimate the reflectance data r of a scene from the vector ρ containing the digital values ρj. We use the method based on linear regression that links the spectral reflectance r to the camera responses I directly46 as follows:

Eq. (2)

$r = M I,$

where the matrix M is a reconstruction operator and I is a vector containing the intensities measured by the spectral system. In our case, I is of size [6×1] and contains the camera intensities Is, where s ∈ {Rbg, Ry, Gbg, Gy, Bbg, By}. This is equivalent to calculating the first Stokes component47 from ρj as follows:

Eq. (3)

$I_s = \frac{\rho_{s,0} + \rho_{s,45} + \rho_{s,90} + \rho_{s,135}}{2}.$

Here, M is estimated using a set of reference spectra previously measured by a spectrometer. We selected the X-Rite Macbeth ColorChecker Passport (MCCPP), which is composed of two main charts with known spectra: (1) the ColorChecker classic chart, used for training [24 patches, see Fig. 4(a)], and (2) the creative enhancement chart, used for testing [26 patches, see Fig. 4(c)]. The reflectance factors of all patches are shown in Figs. 4(b) and 4(d). We assume that the training set is representative enough of the surface reflectances, which will be estimated a posteriori by the system. Another assumption is that the reflection from the charts is purely diffuse, so they do not reflect any specular component. We call Rtrain and Rtest the sets of known training and testing reflectance data, respectively, which are obtained from the Spectral Library of the Chromaxion website.48

Fig. 4. (a) ColorChecker classic chart from the X-Rite ColorChecker Passport, used for spectral calibration. (b) Spectral reflectance of the classic chart from the Chromaxion website.48 (c) Creative enhancement chart from the X-Rite ColorChecker Passport, used for testing. (d) Spectral reflectance of the creative enhancement chart. (e) Comparison of the spectral estimations of the reflectance from the creative enhancement chart. GFC values are shown for each recovered spectrum r̂test.

The calibration of M is done using the pseudoinversion as follows:

Eq. (4)

$\hat{M} = R_{train} \, I_{train}^{+},$

where + denotes the right Moore–Penrose pseudoinverse operator, Itrain is a [6×24] matrix formed by column vectors containing the averaged camera signals (over a 10×10 pixel area) of the 24 patches, Rtrain is a [36×24] matrix with the reference reflectance factors of the patches from 380 to 730 nm with a step of 10 nm, and M̂ is a [36×6] matrix that is an estimate of M.
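In code, this one-line calibration can be sketched as follows (shapes follow the text; the function name is ours):

```python
import numpy as np

def calibrate_reconstruction_matrix(R_train, I_train):
    """Eq. (4): M_hat = R_train @ pinv(I_train).
    R_train: (36, 24) reference reflectances, 380-730 nm in 10 nm steps.
    I_train: (6, 24) averaged camera signals of the 24 patches."""
    return R_train @ np.linalg.pinv(I_train)   # (36, 24) @ (24, 6) -> (36, 6)
```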

3.2.2. Spectral estimation evaluation

To evaluate the performance of the spectral estimation, we apply the trained linear model to a set of camera responses from the creative enhancement chart [see Fig. 4(c)]. Thus, the spectral estimation is done on a set of 26 patches, different from the set used for training. The computation is performed on Itest, which is a [6×26] matrix containing the averaged camera responses, as

Eq. (5)

$\hat{R}_{test} = \hat{M} \, I_{test}.$

To quantify the reconstruction error, we compute the goodness-of-fit coefficient (GFC) for each estimated patch with reflectance vector r̂test. This gives a scalar value, computed as follows:

Eq. (6)

$\mathrm{GFC} = 1 - \frac{1}{N_w} \left\lVert r_{test} - \hat{r}_{test} \right\rVert,$

where Nw = 36 is the total number of samples in the considered wavelength range and ‖·‖ is the two-norm of a vector. A GFC of 0 indicates a bad fit, whereas 1 is a perfect fit.
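A minimal sketch of the evaluation [Eqs. (5) and (6)], with names carried over from the calibration step:

```python
import numpy as np

def gfc(r_ref, r_est, n_w=36):
    """Goodness-of-fit coefficient of Eq. (6): 1 - ||r - r_hat|| / N_w."""
    return 1.0 - np.linalg.norm(r_ref - r_est) / n_w

def evaluate(M_hat, I_test, R_test):
    """Apply Eq. (5), then score each test patch (one per column)."""
    R_hat = M_hat @ I_test                      # (36, 6) @ (6, 26) -> (36, 26)
    return [gfc(R_test[:, k], R_hat[:, k]) for k in range(R_test.shape[1])]
```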

Figure 4(e) shows the comparison between the estimated and measured (reference) reflectance spectra of the 26 patches of the creative enhancement chart. We can see that the estimated spectra are close to the reference data. The worst GFC values occur mainly when high reflectances are encountered (patches 1, 7, and 8 in the first line of patches). This can be due to the relatively low sensitivities (and thus low SNR) of our system in this range of wavelengths.

As a comparison, we computed the GFC on the test chart using either three channels (s ∈ {Rbg, Gbg, Bbg}) or six channels. We found that our six-channel system increases the GFC by 0.16 on average over the 26 patches.

4. Data Transform and Database

4.1. Data Transform

Here, we transform the calibrated data using three selected representations: (1) Stokes formalism, (2) reflectance computed from polarization filtered intensities, and (3) color image.

4.1.1. Stokes

The Stokes parameters are determined from an average of intensity measurements over area, wavelength, and solid angle. If we consider a multispectral system with relatively narrow bands, the spectral dependence of the polarization can be considered additional useful information, so that each channel s senses a different Stokes vector.49 Thus, from ρj, we estimate the Stokes vector S at each pixel position and for each spectral channel s as

Eq. (7)

$S_s = \begin{bmatrix} S_{0,s} \\ S_{1,s} \\ S_{2,s} \\ S_{3,s} \end{bmatrix} = \begin{bmatrix} I_s \\ \rho_{s,0} - \rho_{s,90} \\ \rho_{s,45} - \rho_{s,135} \\ 0 \end{bmatrix},$

where Is = S0,s is the intensity component computed from Eq. (3), S1,s is the difference between the intensities measured through the 0-deg and 90-deg polarizers, and S2,s is the difference between the intensities measured through the 45-deg and 135-deg polarizers. S3,s is not considered in this work because we can sense only linear polarization.
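Per spectral channel s, Eqs. (3) and (7) reduce to a few array operations; a minimal sketch:

```python
def linear_stokes(p0, p45, p90, p135):
    """Linear Stokes components from the four demosaicked polarization
    channels of one spectral band s (arrays of identical shape)."""
    s0 = (p0 + p45 + p90 + p135) / 2.0   # intensity I_s, Eq. (3)
    s1 = p0 - p90                        # S_1,s
    s2 = p45 - p135                      # S_2,s
    return s0, s1, s2
```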

The degree of linear polarization (DOLP) represents the amount of linear polarization in the light beam. It takes a value between zero for nonpolarized light and one for totally linearly polarized light, with intermediate values referring to partial polarization:

Eq. (8)

$\mathrm{DOLP}_s = \frac{\sqrt{S_{1,s}^2 + S_{2,s}^2}}{I_s}.$

The azimuth angle of linear polarization (AOLP) is also computed from the Stokes components. It represents the angular orientation of the main axis of the polarization ellipse with respect to the angular reference chosen for the system:

Eq. (9)

$\mathrm{AOLP}_s = \frac{1}{2} \arctan\!\left(\frac{S_{2,s}}{S_{1,s}}\right).$
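Both quantities follow directly from the Stokes components; the sketch below uses arctan2 instead of the plain ratio of Eq. (9), a common implementation choice that keeps the angle defined when S1,s = 0:

```python
import numpy as np

def dolp_aolp(s0, s1, s2, eps=1e-12):
    """DOLP and AOLP from the linear Stokes components, Eqs. (8) and (9)."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)
    aolp = 0.5 * np.arctan2(s2, s1)   # radians, in (-pi/2, pi/2]
    return dolp, aolp
```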

In Fig. 5, we plot the luminance and the DOLPGy of the 24 MCCPP patches used for spectral calibration. First, we can see that there is a low but significant polarization signature for the patches. Second, there is an inverse relationship between luminance and DOLP, which is especially visible in the values of the last six neutral patches (19 to 24). It can be explained by the fact that darker patches have a much smaller diffuse component (and thus a relatively higher specular component, which is more polarized) due to a greater absorption of light. This effect is also shown in Fig. 6, in which darker balls exhibit a higher degree of polarization.

Fig. 5. (a) Luminance and (b) DOLPGy plots of the 24 MCCPP training samples.

Fig. 6. (a)–(c) Stokes components S0,s, S1,s, and S2,s for spectral band s = Gbg. (d) and (e) DOLPs and AOLPs for spectral channel s = Gbg.

A visualization of the Stokes transform on the scene called “resin balls” is shown in Fig. 6.

4.1.2. Reflectance

The reflection of light from a surface can be analyzed with an additive model, in which the total intensity is a diffuse component (resulting from subsurface reflection) plus a specular component (resulting from surface reflection). Existing methods for separating reflection components use a polarization filter rotated in front of the image sensor.8,50 It is often assumed that the diffuse component tends to be weakly polarized compared with the specular component. In our case, we believe that filtering the polarized intensity can benefit the reflectance estimation when significant specular reflection occurs.

To this end, we compute the polarization-filtered intensities,51 called Us, with the following equation:

Eq. (10)

$U_s = I_s - \sqrt{S_{1,s}^2 + S_{2,s}^2}.$

This has the effect of removing the polarization component from the total intensity Is. Thus, we can compute the spectral reflectance for any pixel position from vector U instead of vector I [as in Eq. (5)]:

Eq. (11)

$\hat{r} = \hat{M} \, U.$
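A sketch of this per-pixel filtering and reconstruction, assuming the Stokes components are stacked per spectral channel:

```python
import numpy as np

def unpolarized_reflectance(M_hat, S0, S1, S2):
    """Eqs. (10) and (11): subtract the linearly polarized part of the
    intensity, then reconstruct reflectance from the filtered vector U.
    S0, S1, S2: arrays of shape (6, H, W); M_hat: (36, 6)."""
    U = S0 - np.sqrt(S1**2 + S2**2)          # Eq. (10), per channel
    h, w = S0.shape[1:]
    r_hat = M_hat @ U.reshape(6, -1)         # Eq. (11) applied to every pixel
    return r_hat.reshape(36, h, w)
```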

To verify the effect of removing the polarized intensity component, we compute the reflectance using either I or U. We selected 5×5 pixel regions near the occluding boundaries of the objects, where specular reflection is assumed to have a greater influence on the DOLP of the reflected light. Results are shown in Fig. 7 for four regions of interest. It appears that we get a better fit when using U in all four cases.

Fig. 7. Measured and estimated reflectance of four samples, using the spectral images computed from S0 and U [Eqs. (2) and (11)], respectively. Each region of interest (RoI) is a 5×5 pixel patch over which the spectral estimation curves are averaged. (a) RoI of the controller scene, (b) spectral estimation of the controller RoI, (c) RoI of the plier scene, (d) spectral estimation of the plier RoI, (e) RoI of the chart_b scene, (f) spectral estimation of the chart_b RoI, (g) RoI of the toy_2 scene, and (h) spectral estimation of the toy_2 RoI.

4.1.3. Color

From the per-pixel reflectance data r̂, we compute the corresponding International Commission on Illumination (CIE) XYZ values as

Eq. (12)

$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = C^{t} \, \hat{r},$
where C (36×3) contains the CIE 1931 color matching functions (CMFs).52,53 Then, we convert the CIE XYZ values to sRGB54 for visualization. The XYZ to sRGB transform is typically achieved in two steps: a linear transform of the XYZ values to linear RGB values, followed by a gamma correction of 2.2 to account for the nonlinear behavior of monitors.
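A sketch of the per-pixel conversion; the matrix below is the standard XYZ-to-linear-sRGB (D65) matrix, and the power law is the simple 1/2.2 gamma mentioned above rather than the exact piecewise sRGB transfer function:

```python
import numpy as np

XYZ_TO_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def reflectance_to_srgb(r_hat, cmf):
    """r_hat: (36,) reflectance of one pixel; cmf: (36, 3) CIE 1931 CMFs
    (scaled so that Y falls in [0, 1])."""
    xyz = cmf.T @ r_hat                          # Eq. (12)
    rgb = np.clip(XYZ_TO_RGB @ xyz, 0.0, 1.0)    # linear RGB
    return rgb ** (1.0 / 2.2)                    # display gamma
```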

4.2. Database

We provide a database of 28 scenes with different materials, including plastic, wood, and metal. The scene called "chart" consists of custom plastic patches manufactured with a 3D printer, with a varying layer height parameter. Figure 8 shows the database in sRGB representation, after the color transform from Sec. 4. The brand logos are removed from the database images. The images are cropped to select the relevant content of the objects.

Fig. 8. sRGB visualization of the database after the color data transform. The 28 scenes were captured by the hardware system depicted in Sec. 2. (a) Mini balls, (b) plastic bottle, (c) metallic bottle, (d) candies, (e) plastic chart blue, (f) plastic chart white, (g) plastic chart red, (h) scissors, (i) ColorChecker, (j) cutter, (k) electronics, (l) inkwell, (m) painting_1, (n) painting_2, (o) pens, (p) plastic, (q) pliers, (r) polarizers, (s) resin balls, (t) screw driver, (u) tape, (v) controller, (w) toys_1, (x) toys_2, (y) vernier scale, (z) wood_1, (aa) wood_2, and (ab) wood_3.

The code and database folder is organized as follows:

1. Data folder: pairs of preprocessed images for each scene, after demosaicking, flat-field correction, and geometric correction. They are multi-page .tif files (one page per polarization angle).

2. Misc folder: (a) the spectral calibration matrix and (b) the CIE XYZ values.

Three MATLAB scripts are provided to transform the data into Stokes, reflectance, and color images, respectively. The dataset and code are available in a GitHub repository via this link55 or can be sent on request to the corresponding author.
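For example, loading one scene could look like the following sketch; the file names and the page ordering are hypothetical and should be checked against the repository layout:

```python
import tifffile

# One multi-page .tif per camera and scene, one page per polarization
# angle; the angle order (0, 45, 90, 135 deg) is an assumption here.
left = tifffile.imread("Data/resin_balls_bg.tif")    # camera with BG39 filter
right = tifffile.imread("Data/resin_balls_y.tif")    # camera with GG475 filter
p0, p45, p90, p135 = left                            # unpack the four pages
```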

5. Discussion

Our stereo setup offers the possibility of performing reflectometry by fusing observations of the polarization state of light reflected off a surface from two different views.51 In a binocular stereo configuration, the specular intensity can differ between the two sensors, depending on the shape of the object under consideration and the lighting configuration. As an example, our stereo setup can be configured to optimize roughness measurement by modifying the vergence, the working distance, or the focal length of the camera lenses. In the database, this is unconstrained because the surfaces in the scenes are multiple and unknown. But in the case of measurements on samples with a priori knowledge (roughness range, convexity, and index of refraction), the setup can be slightly adapted.56 This is not investigated in this work and will be considered in future work.

Some drawbacks can come from the disparity computation, especially when imaging thick objects. From the stereo point of view, typical matching problems occur when (1) there are occlusions (no corresponding data in one of the two images), (2) there are specular highlights, or (3) a pixel pertains to a more or less uniform region. To be successful, the matching algorithm needs texture (but not periodic patterns); if there is nothing to really match inside an image pair, the algorithm cannot establish correspondences efficiently. Another potential factor comes from the use of an image pair taken with different spectral sensitivities.57 This produces a noisy disparity map and thus noisy Stokes/spectral/color representations of the data. We believe that a more evolved, spectrally invariant disparity computation method is still to be developed.

To our knowledge, computing the reflectance spectra from polarimetric Stokes data (S1 and S2) has not been investigated in the literature. We verified that polarization information can help the reflectance estimation, whereas previous work assumed that all of the surfaces measured by a spectral camera are mainly diffuse. Some other works used a rotated linear polarizer in front of the spectral camera to filter the specular reflection globally. In our work, this is done per pixel because we have a measurement of the state of polarization at each pixel. Nevertheless, this method is limited by the amount of polarized intensity that can be filtered, which varies with the object and lighting configuration, i.e., the angle of incidence/reflection of the light. We believe that a dedicated separation of the diffuse/specular components using spectral and polarization information, such as in prior work,8 can further improve the reflectance estimation for specular surfaces. This is not included in this work but can be investigated as an additional processing block in the framework of Fig. 3.

This work opens perspectives in several application fields, from the stereo acquisition of spectropolarimetric images to new image processing of the data. The output of our framework, which is transformed data (Stokes, reflectance, and color), can be extended to specific surfaces and be an input to computer vision tasks such as feature extraction using principal component analysis, semantic segmentation, or defect detection.

6. Conclusion

We developed a snapshot spectropolarimetric acquisition system. It has the advantages of a high spatial resolution and a relatively good spectral reconstruction precision. We designed a full processing pipeline that consists of preprocessing, geometric calibration, spectral calibration, and data transform. We also provide a method to better estimate the reflectance of specular surfaces using a filtering process based on the Stokes components. A database of 28 high-resolution spectropolarimetric images is made available online, along with the code to transform the data.

The configuration of the proposed system makes it possible to reconstruct the 3D spectral information using two complementary strategies: (1) the polarization information (the so-called “shape from polarization” method) or (2) the stereo depth computation. Image processing using both polarization and stereo information is to be investigated in future work.

Acknowledgments

The authors want to thank Joël Lambert for the manufacturing of the plastic charts. This work was supported by the ANR JCJC SPIASI project, Grant No. ANR-18-CE10-0005 of the French Agence Nationale de la Recherche. The authors declare no conflicts of interest.

References

1. R. Shrestha, J. Y. Hardeberg, and A. Mansouri, "One-shot multispectral color imaging with a stereo camera," Proc. SPIE 7876, 787609 (2011). https://doi.org/10.1117/12.872428

2. R. S. Berns et al., "Multispectral-based color reproduction research at the Munsell Color Science Laboratory," Proc. SPIE 3409, 14–25 (1998). https://doi.org/10.1117/12.324139

3. S. Sattar, H. A. Khan, and K. Khurshid, "Optimized class-separability in hyperspectral images," in IEEE Int. Geosci. and Remote Sens. Symp., 2711–2714 (2016). https://doi.org/10.1109/IGARSS.2016.7729700

4. P.-J. Lapray et al., "Multispectral filter arrays: recent advances and practical implementation," Sensors 14(11), 21626–21659 (2014). https://doi.org/10.3390/s141121626

5. J.-B. Thomas et al., "Spectral characterization of a prototype SFA camera for joint visible and NIR acquisition," Sensors 16(7), 993 (2016). https://doi.org/10.3390/s16070993

6. M. Ewerlöf, M. Larsson, and E. G. Salerud, "Spatial and temporal skin blood volume and saturation estimation using a multispectral snapshot imaging camera," Proc. SPIE 10068, 1006814 (2017). https://doi.org/10.1117/12.2251928

7. J. Qin et al., "Hyperspectral and multispectral imaging for evaluating food safety and quality," J. Food Eng. 118(2), 157–171 (2013). https://doi.org/10.1016/j.jfoodeng.2013.04.001

8. S. K. Nayar, X.-S. Fang, and T. Boult, "Separation of reflection components using color and polarization," Int. J. Comput. Vis. 21(3), 163–186 (1997). https://doi.org/10.1023/A:1007937815113

9. S. Tominaga and B. A. Wandell, "Standard surface-reflectance model and illuminant estimation," J. Opt. Soc. Am. A 6(4), 576–584 (1989). https://doi.org/10.1364/JOSAA.6.000576

10. J. A. Martin and K. C. Gross, "Estimating index of refraction from polarimetric hyperspectral imaging measurements," Opt. Express 24(16), 17928–17940 (2016). https://doi.org/10.1364/OE.24.017928

11. V. Thilak, D. G. Voelz, and C. D. Creusere, "Polarization-based index of refraction and reflection angle estimation for remote sensing applications," Appl. Opt. 46(30), 7527–7536 (2007). https://doi.org/10.1364/AO.46.007527

12. C. P. Huynh, A. Robles-Kelly, and E. Hancock, "Shape and refractive index recovery from single-view polarisation images," in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit., 1229–1236 (2010). https://doi.org/10.1109/CVPR.2010.5539828

13. G. A. Atkinson and E. R. Hancock, "Recovery of surface orientation from diffuse polarization," IEEE Trans. Image Process. 15(6), 1653–1664 (2006). https://doi.org/10.1109/TIP.2006.871114

14. W.-C. Ma et al., "Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination," in Proc. 18th Eurogr. Conf. Rendering Tech., 183–194 (2007).

15. G. Horváth et al., Polarized Light in Animal Vision: Polarization Patterns in Nature, Springer Science & Business Media, Berlin, Heidelberg, New York (2004).

16. K. Homma et al., "Application of an imaging spectropolarimeter to agro-environmental sciences," Proc. SPIE 5234, 638–647 (2004). https://doi.org/10.1117/12.510676

17. H. Kurosaki et al., "Development of tunable imaging spectro-polarimeter for remote sensing," Adv. Space Res. 32(11), 2141–2146 (2003). https://doi.org/10.1016/S0273-1177(03)90535-7

18. A. Pierangelo et al., "Multispectral Mueller polarimetric imaging detecting residual cancer and cancer regression after neoadjuvant treatment for colorectal carcinomas," J. Biomed. Opt. 18(4), 046014 (2013). https://doi.org/10.1117/1.JBO.18.4.046014

19. Y.-Q. Zhao et al., "Object separation by polarimetric and spectral imagery fusion," Comput. Vis. Image Understanding 113(8), 855–866 (2009). https://doi.org/10.1016/j.cviu.2009.03.002

20. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," in Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit. (2001). https://doi.org/10.1109/CVPR.2001.990493

21. D. M. Kocak et al., "A focus on recent developments and trends in underwater imaging," Mar. Technol. Soc. J. 42(1), 52–67 (2008). https://doi.org/10.4031/002533208786861209

22. S. Sattar et al., "Review of spectral and polarization imaging systems," Proc. SPIE 11351, 113511Q (2020). https://doi.org/10.1117/12.2555745

23. X. Tu et al., "Division of amplitude RGB full-Stokes camera using micro-polarizer arrays," Opt. Express 25(26), 33160–33175 (2017). https://doi.org/10.1364/OE.25.033160

24. T. Mu et al., "Snapshot linear-Stokes imaging spectropolarimeter using division-of-focal-plane polarimetry and integral field spectroscopy," Sci. Rep. 7, 42115 (2017). https://doi.org/10.1038/srep42115

25. C. Zhang, N. Quan, and T. Mu, "Stokes imaging spectropolarimeter based on channeled polarimetry with full-resolution spectra and aliasing reduction," Appl. Opt. 57(21), 6128–6134 (2018). https://doi.org/10.1364/AO.57.006128

26. J. Kim and M. J. Escuti, "Demonstration of a polarization grating imaging spectropolarimeter (PGIS)," Proc. SPIE 7672, 80–88 (2010). https://doi.org/10.1117/12.849758

27. J. Kim and M. J. Escuti, "Snapshot imaging spectropolarimeter utilizing polarization gratings," Proc. SPIE 7086, 708603 (2008). https://doi.org/10.1117/12.795719

28. M. Alouini et al., "Near-infrared active polarimetric and multispectral laboratory demonstrator for target detection," Appl. Opt. 48(8), 1610–1618 (2009). https://doi.org/10.1364/AO.48.001610

29. M. Garcia et al., "Bio-inspired color-polarization imager for real-time in situ imaging," Optica 4(10), 1263–1271 (2017). https://doi.org/10.1364/OPTICA.4.001263

30. C. Fu et al., "Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector," J. Opt. Soc. Am. A 32(11), 2178–2188 (2015). https://doi.org/10.1364/JOSAA.32.002178

31. T.-H. Tsai and D. J. Brady, "Coded aperture snapshot spectral polarization imaging," Appl. Opt. 52, 2153–2161 (2013). https://doi.org/10.1364/AO.52.002153

32. F. Soldevila et al., "Single-pixel polarimetric imaging spectrometer by compressive sensing," Appl. Phys. B 113(4), 551–558 (2013). https://doi.org/10.1007/s00340-013-5506-2

33. Sony, "Polarization image sensor" (2018).

34. T. Okawa et al., "A 1/2-inch 48M all PDAF CMOS image sensor using 0.8 μm quad Bayer coding 2×2 OCL with 1.0 lux minimum AF illuminance level," in Int. Electron Devices Meeting, 16–23 (2019). https://doi.org/10.1109/IEDM19573.2019.8993499

35. C. S. Chun, D. L. Fleming, and E. Torok, "Polarization-sensitive thermal imaging," Proc. SPIE 2234, 275–286 (1994). https://doi.org/10.1117/12.181025

36. A. Alsam, D. Connah, and J. Hardeberg, "Multispectral imaging: how many sensors do we need?," J. Imaging Sci. Technol. 50(1), 45–52 (2006). https://doi.org/10.2352/J.ImagingSci.Technol.(2006)50:1(45)

37. R. S. Berns et al., "Practical spectral imaging using a color-filter array digital camera" (2006).

38. J. Klein and B. Hill, "Multispectral stereo acquisition using two RGB cameras and color filters," in 18th Workshop Farbbildverarbeitung, 89–96 (2018).

39. M. Morimatsu et al., "Monochrome and color polarization demosaicking using edge-aware residual interpolation," in IEEE Int. Conf. Image Process., 2571–2575 (2020). https://doi.org/10.1109/ICIP40778.2020.9191085

40. S. Westland, C. Ripamonti, and V. Cheung, Computational Colour Science Using MATLAB, John Wiley & Sons, Hoboken, New Jersey (2012).

41. J. Y. Hardeberg, "Acquisition and reproduction of color images: colorimetric and multispectral approaches" (2001).

42. Y. Giménez et al., "Calibration algorithms for polarization filter array camera: survey and evaluation," J. Electron. Imaging 29(4), 041011 (2020). https://doi.org/10.1117/1.JEI.29.4.041011

43. G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, Inc., Sebastopol, California (2008).

44. H. Hirschmuller, "Accurate and efficient stereo processing by semi-global matching and mutual information," in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit., 807–814 (2005). https://doi.org/10.1109/CVPR.2005.56

45. D. H. Marimont and B. A. Wandell, "Linear models of surface and illuminant spectra," J. Opt. Soc. Am. A 9, 1905–1913 (1992). https://doi.org/10.1364/josaa.9.001905

46. D. C. Day, "Filter selection for spectral estimation using a trichromatic camera" (2003).

47. G. G. Stokes, On the Composition and Resolution of Streams of Polarized Light from Different Sources, Vol. 3, pp. 233–258, Cambridge University Press, Cambridge (2009).

48. Chromaxion, "Spectral library."

49. G. Myhre et al., "Liquid crystal polymer full-Stokes division of focal plane polarimeter," Opt. Express 20, 27393–27409 (2012). https://doi.org/10.1364/OE.20.027393

50. L. B. Wolff and T. E. Boult, "Constraining object features using a polarization reflectance model," IEEE Trans. Pattern Anal. Mach. Intell. 13(7), 635–657 (1991). https://doi.org/10.1109/34.85655

51. J. Riviere et al., "Polarization imaging reflectometry in the wild," ACM Trans. Graph. 36(6), 1–14 (2017). https://doi.org/10.1145/3130800.3130894

52. J. Guild and J. E. Petavel, "The colorimetric properties of the spectrum," Philos. Trans. R. Soc. London Ser. A 230(681–693), 149–187 (1931). https://doi.org/10.1098/rsta.1932.0005

53. W. D. Wright, "A re-determination of the trichromatic coefficients of the spectral colours," Trans. Opt. Soc. 30, 141–164 (1929). https://doi.org/10.1088/1475-4878/30/4/301

54. IEC, "Multimedia systems and equipment—colour measurement and management—Part 2-1: colour management—default RGB colour space—sRGB," IEC 61966-2-1, Geneva (1999).

55. "Snapshot spectropolarimetric imaging using a pair of filter array cameras," https://www.ensisa.uha.fr/foti-image-databases-polarisation-open#OE2022

56. S. Nayar, K. Ikeuchi, and T. Kanade, "Surface reflection: physical and geometrical perspectives," IEEE Trans. Pattern Anal. Mach. Intell. 13(7), 611–634 (1991). https://doi.org/10.1109/34.85654

57. R. Shrestha, "Multispectral imaging: fast acquisition, capability extension, and quality evaluation," University of Oslo (2014).

Biography

Sumera Sattar is a PhD student at the University of Haute-Alsace, France. She received her MS degree in electrical engineering from the Institute of Space and Technology, Pakistan, in 2016. Her current research interests include polarization, multispectral imaging, and machine learning.

Laurent Bigué received his engineering degree in physics from the Université de Strasbourg in 1992. He received his PhD in optical and electrical engineering from the Université de Haute-Alsace in 1996. He was appointed a professor at ENSISA (ECE Department of Université de Haute Alsace) in 2005 and has been the dean of ENSISA since 2012. His major research interests include optical signal processing, polarimetry, and optical metrology. He is a member of SFO, EOS, OSA, and SPIE.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Sumera Sattar, Pierre-Jean Lapray, Lyes Aksas, Alban Foulonneau, and Laurent Bigué "Snapshot spectropolarimetric imaging using a pair of filter array cameras," Optical Engineering 61(4), 043104 (22 April 2022). https://doi.org/10.1117/1.OE.61.4.043104
Received: 18 January 2022; Accepted: 30 March 2022; Published: 22 April 2022