Open Access
3 April 2015
Model of thermal infrared image texture generation based on the scenery space frequency
Abstract
Infrared texture is an important feature in identifying scenery. To simulate infrared image texture effectively at different distances, we propose a model of infrared image texture generation based on scenery space frequency and the image pyramid degradation principle. First, we build a spatial frequency filter model based on imaging distance, taking into account the detector’s maximum spatial frequency, and use the filter to process a “zero” distance infrared image texture. Second, taking into consideration the actual temperature difference of the scenery’s details due to variation of the imaging distance and the effect of atmospheric transmission, we compare the actual temperature difference with the minimum resolvable temperature difference of the thermal imaging system at a specific frequency and produce a new image texture. The results show that the simulated multiresolution infrared image textures produced by the proposed model are very similar (lowest mean square error=0.51 and highest peak signal-to-noise ratio=117.59) to the images captured by the thermal imager. Therefore, the proposed model can effectively simulate infrared image textures at different distances.

1.

Introduction

Infrared texture is an important feature in identifying scenery and has been used in various applications such as target detection, precision guidance, and three-dimensional scene simulation.1–3 Infrared texture generation has been studied for decades, but because of security considerations, progress on the topic has seldom been reported in the public literature.

The few published papers on infrared texture reveal two methods used to generate infrared image texture: simulation based on visible light texture4–6 and simulation based on a random field model.7–10 The former uses Planck’s equation to calculate the infrared radiation energy of each object in the scene and then maps the energy value to a specific gray level. The deviation of that gray level is computed from the gray variations in the visible image, and the final infrared image texture is obtained from the specific gray level and its deviation. This simulation method can be adapted to a large-scale scene that needs only a low amount of detail, but it is not suitable for a scene that requires a high amount of detail, because infrared and visible textures have different principles of formation. The second method, based on a random field model, e.g., long correlation models7 and Markov random field models,8–10 can also generate infrared image texture. However, it requires a large number of tests to determine proper model parameters, is highly complex, and has low fidelity.

To simulate infrared image texture at different distances, the simulated image must be transformed by zooming in and out. The two simulation methods mentioned above take into account neither the attenuation of high-frequency components nor the variation in the temperature difference of the scenery detail caused by atmospheric transmission and changing distance. Consequently, the transformation of the simulated images obtained by these methods is not reliable when the distance changes. Based on the multiresolution image pyramid principle, we propose an infrared image texture generation model based on scenery spatial frequency to generate infrared image texture at different distances. First, we calculate the scenery spatial frequency at a specific distance using the Nyquist frequency of the detector, and then we use the calculated scenery spatial frequency as the cut-off frequency to build a filter model based on distance. We use the filter to process the “zero”-distance infrared image texture captured by the thermal imager and downsample the filtered image. Second, given that the actual temperature difference corresponding to different scenery texture details changes with distance because of the atmospheric transmission effect, we compare the changed temperature difference with the minimum resolvable temperature difference (MRTD) of the thermal imaging system. The comparative results are used to build a filter based on MRTD that decides whether each frequency can be resolved. Finally, after performing these two filtering steps, we obtain the final image texture.

Section 2 introduces the infrared image texture model based on scenery spatial frequency, Sec. 3 presents the experimental results and discussion, and Sec. 4 gives the conclusion of the paper.

2.

Thermal Infrared Image Texture Generation Model Based on Scenery Spatial Frequency

2.1.

Frequency Pyramid Principle of Imaging

An image pyramid is a series of images arranged in a pyramidal structure, which is effective for multiresolution image representation (Fig. 1). The size and resolution of the images gradually decrease from the bottom image to the top image of the pyramid. The size of the base layer J (the original image) is N×N, or 2^J×2^J, where J = log2 N. The size of peak layer 0 is 1×1, i.e., a single pixel. The size of layer j is 2^j×2^j, where 0 ≤ j ≤ J. Therefore, a multiresolution pyramid is formed by starting with the N×N original image and halving the image size at each successively smaller layer.

Fig. 1

Image pyramid.

OE_54_4_043102_f001.png
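The pyramid construction described above can be sketched in a few lines of Python; the 2×2 block averaging used here is one simple choice for the low-pass-plus-downsample step:

```python
import numpy as np

def build_pyramid(img, levels):
    """Build a multiresolution pyramid as in Fig. 1 by 2x2 block averaging.

    Each layer halves the previous one, so layer j of a 2^J x 2^J base
    image has size 2^j x 2^j.
    """
    pyramid = [img]
    for _ in range(levels):
        a = pyramid[-1]
        # Average each 2x2 block -> half-size image (low-pass + downsample).
        a = a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid

base = np.arange(64, dtype=float).reshape(8, 8)   # J = 3, i.e., 2^3 x 2^3
pyr = build_pyramid(base, 3)
print([p.shape for p in pyr])   # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

The single pixel at the top is the mean of the whole base image, mirroring how resolution collapses toward the peak layer.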

In a photoelectric imaging system with a fixed number of pixels, when the distances change, the imaging process becomes a series of multiresolution displays. Therefore, the generation of infrared image textures at different distances is equivalent to the formation of an image pyramid: the “zero”-distance infrared image is the bottom image in the pyramid and has the highest resolution. The effects of distance and atmospheric transmission on the scenery infrared textures are equivalent to low-pass filtering and the process of downsampling in the image pyramid. A series of infrared image textures of different sizes and resolutions can be obtained by repeated filtering and downsampling. The filters are based on distance and MRTD. All of the filtering processes act on the “zero”-distance infrared image.

2.2.

Spatial Frequency Filter Based on Distance

The results of the scenery imaging on the detector are shown in Fig. 2, where h and w are the height and width of the scenery, O is the optic center, f0 is the focal length of the infrared imaging system, and ph×pw and p′h×p′w are the image sizes at distances L0 and L, respectively.

Fig. 2

Results of scenery imaging on the detector.

OE_54_4_043102_f002.png

2.2.1.

Frequency filter model based on distance

For the infrared imaging system with a fixed number of pixels, the ability to distinguish scenery details decreases with increasing distance. The cut-off frequency DL is the frequency that the detector can distinguish at distance L. The cut-off frequency determines the level of detail of the scenery at L and is calculated using L. Then a filter model based on the cut-off frequency is built and is used to process the “zero”-distance image. We call the filter a spatial frequency filter, denoted by HS and it is defined as

Eq. (1)

H_S = \begin{cases} 1, & f_L \le D_L \\ 0, & f_L > D_L, \end{cases}
where fL is the spatial frequency of the “zero”-distance image and DL is the cut-off frequency of the image at distance L.
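As a sketch, Eq. (1) is an ideal low-pass mask over the centred spectrum. The radial frequency measure used here is an assumption; the separate horizontal and vertical cut-offs D_Lh and D_Lw of Sec. 2.2.2 could be applied per axis instead:

```python
import numpy as np

def spatial_frequency_filter(M, N, D_L):
    """Ideal low-pass mask H_S of Eq. (1): 1 where f_L <= D_L, else 0.

    Frequencies are measured from the centre of the fft-shifted spectrum;
    D_L is the distance-dependent cut-off in the same pixel-frequency units.
    """
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    # Radial spatial frequency of every (u, v) sample (assumed isotropic).
    f_L = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return (f_L <= D_L).astype(float)

H = spatial_frequency_filter(240, 320, 30.0)
print(H[120, 160])   # 1.0: the zero-frequency centre always passes
```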

We apply the Fourier transform F(u,v) to the “zero”-distance image of size M×N:

Eq. (2)

F(u,v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi (ux/M + vy/N)},
where f(x,y) is the gray value at (x,y) on the “zero”-distance image. Then, the filtered image G(u,v) in the frequency domain is calculated using the following equation:

Eq. (3)

G(u,v)=F(u,v)HS(u,v).

The spatial-domain image gp(x,y) is obtained by using the inverse Fourier transform of G(u,v) in the frequency domain:

Eq. (4)

g_p(x,y) = \left\{ \mathrm{real}\left[ \zeta^{-1}[G(u,v)] \right] \right\} (-1)^{x+y},
where \zeta^{-1} is the inverse Fourier transform.

The image size (p′h×p′w) of the scenery at distance L is determined by the relationship between the location of the scenery and the detector, as shown in Fig. 2. The image g_p(x,y) is filtered again using the downsampled window

w = \left[ \frac{p'_h}{p_h} \times col \right] \times \left[ \frac{p'_w}{p_w} \times row \right],
where col and row are the numbers of columns and rows of the detector, respectively. We use g′_p(x,y) to denote the result of filtering g_p(x,y). This filtered image is the simulated image when the detector is located at L and the atmospheric transmission effect is not taken into consideration.
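Equations (2)–(4) plus the downsampling step can be sketched as follows. Standard fftshift/ifftshift calls replace the (−1)^{x+y} modulation of Eq. (4), and the block-averaging downsample to a target `new_shape` is an assumption about how the window w is applied:

```python
import numpy as np

def filter_and_downsample(img, D_L, new_shape):
    """Sketch of Eqs. (2)-(4) plus downsampling: FFT the "zero"-distance
    image, zero out frequencies above the cut-off D_L, invert, then resample
    to the smaller size the scenery occupies at distance L."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))             # Eq. (2), centred
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    f_L = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    G = F * (f_L <= D_L)                              # Eq. (3): apply H_S
    g = np.real(np.fft.ifft2(np.fft.ifftshift(G)))    # Eq. (4)
    # Downsample by block averaging to the window size (rows, cols).
    r, c = new_shape
    g = g[: (M // r) * r, : (N // c) * c]
    g = g.reshape(r, M // r, c, N // c).mean(axis=(1, 3))
    return g

img = np.random.default_rng(0).random((64, 64))
g_p = filter_and_downsample(img, 10.0, (16, 16))
print(g_p.shape)   # (16, 16)
```

With a cut-off above all image frequencies, the routine reduces to pure block-average downsampling, which is the expected limiting behaviour.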

2.2.2.

Image cut-off spatial frequency based on distance

The horizontal and vertical sample frequencies, fw and fh, respectively, of the detector are expressed as

Eq. (5)

fw=1/(2×dw),

Eq. (6)

fh=1/(2×dh),
where dw and dh are the width and height of a detector pixel. The imaging height (p′h) and width (p′w) on the detector at distance L are expressed as

Eq. (7)

p'_h = h \times f_0 / L,

Eq. (8)

p'_w = w \times f_0 / L,
where f0 is the focal length of the infrared imaging system and h and w are the height and width of the scenery. The cut-off spatial frequencies of the image at L are determined by the relationship between the scenery and the detector and are defined as follows:

Eq. (9)

D_{Lh} = p'_h / h_{dect} \times f_h,

Eq. (10)

D_{Lw} = p'_w / w_{dect} \times f_w,
where DLh and DLw are the vertical and horizontal cut-off spatial frequencies of the image at L and hdect and wdect are the height and width of the image on the detector plane.
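A minimal sketch of Eqs. (5)–(10). The numeric example values (scenery size, focal length, pixel pitch, detector dimensions) are hypothetical, and the cap at the detector sampling frequency follows the abstract's note that the detector's maximum spatial frequency bounds the cut-off:

```python
def cutoff_frequencies(h, w, f0, L, dh, dw, h_dect, w_dect):
    """Cut-off spatial frequencies of Eqs. (5)-(10), all lengths in metres."""
    f_h = 1.0 / (2.0 * dh)                  # Eq. (6): vertical sampling freq.
    f_w = 1.0 / (2.0 * dw)                  # Eq. (5): horizontal sampling freq.
    p_h = h * f0 / L                        # Eq. (7): image height p'_h at L
    p_w = w * f0 / L                        # Eq. (8): image width p'_w at L
    D_Lh = min(p_h / h_dect * f_h, f_h)     # Eq. (9), capped at detector max
    D_Lw = min(p_w / w_dect * f_w, f_w)     # Eq. (10), capped at detector max
    return D_Lh, D_Lw

# Hypothetical setup: 1.7 m x 0.6 m scenery, 25 mm lens, 25 um pixels,
# 240 x 320 detector (so 6 mm x 8 mm detector plane).
D_Lh, D_Lw = cutoff_frequencies(h=1.7, w=0.6, f0=0.025, L=100.0,
                                dh=25e-6, dw=25e-6,
                                h_dect=0.006, w_dect=0.008)
```

Note that outside the capped region the cut-off scales as 1/L: doubling the distance halves the resolvable scenery frequency.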

2.3.

Thermal Infrared Image Texture Filter Based on MRTD

2.3.1.

Infrared image texture filter model based on MRTD

For scenery with a single spatial frequency f, such as a bar target, the atmospheric transmission affects the temperature difference between the target and the background. If the actual temperature difference is still greater than the MRTD(f) of the thermal imaging system after considering the atmospheric transmission, the thermal imaging system can distinguish the details of the frequency f. Otherwise, the details of f will not be distinguished and the image will become blurry. This yields the following formula:11

Eq. (11)

\Delta T_0 \cdot \tau(L) \ge \mathrm{MRTD}(f),
where ΔT0 is the “zero”-distance temperature difference between the blackbody target and the background and τ(L) is the mean atmospheric transmittance along the direction from the detector to the target at distance L in the wave band of the thermal imaging system.

In reality, the scenery contains different levels of detail and the spatial frequency of the infrared image is a frequency range, not one fixed value. Therefore, it is necessary to calculate the actual temperature differences for the different spatial frequencies of the image at distance L. Comparing the actual temperature differences of different spatial frequencies and MRTD(f) is important to discriminate the details of the image with frequency f. If the thermal imaging system can distinguish the scenery details with a frequency f at distance L, it needs to meet the following condition:

Eq. (12)

\Delta T(f) \cdot \tau(L) \ge \mathrm{MRTD}(f),
where ΔT(f) is the mean temperature difference for frequency f in the image, and τ(L) is as defined above and can be calculated using the program MODTRAN.11 A temperature filter model Ht based on the MRTD, according to Eq. (12), is defined as

Eq. (13)

H_t = \begin{cases} 1, & \Delta T(f)\cdot\tau(L) \ge \mathrm{MRTD}(f) \\ 0, & \Delta T(f)\cdot\tau(L) < \mathrm{MRTD}(f). \end{cases}

We denote the Fourier transform of the filtered result g′_p(x,y) as G′(u,v) and use the filter based on MRTD to process it to obtain the final filtered image R(u,v) in the frequency domain:

Eq. (14)

R(u,v) = G'(u,v)\, H_t(u,v).

To obtain the filtered image in the spatial domain, the inverse Fourier transform is applied to R(u,v):

Eq. (15)

R_p(x,y) = \left\{ \mathrm{real}\left[ \zeta^{-1}[R(u,v)] \right] \right\} (-1)^{x+y},
where Rp(x,y) is the final simulated image of the thermal infrared texture at distance L.
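Equations (13)–(15) can be sketched as below; sampling ΔT(f) and MRTD(f) on the same frequency grid as the spectrum is an assumed discretization, and ifftshift again stands in for the (−1)^{x+y} factor:

```python
import numpy as np

def mrtd_filter(G, delta_T, tau_L, mrtd):
    """Apply the temperature filter H_t of Eq. (13) as in Eqs. (14)-(15).

    G       : centred spectrum of the distance-filtered image g'_p
    delta_T : mean temperature difference dT(f), sampled per frequency bin
    tau_L   : mean atmospheric transmittance tau(L), a scalar
    mrtd    : MRTD(f), sampled on the same frequency grid as G
    Returns the final spatial-domain texture R_p(x, y).
    """
    H_t = (delta_T * tau_L >= mrtd).astype(float)        # Eq. (13)
    R = G * H_t                                          # Eq. (14)
    return np.real(np.fft.ifft2(np.fft.ifftshift(R)))    # Eq. (15)

# Toy usage: with dT(f)*tau(L) above MRTD everywhere, the image is unchanged.
img = np.arange(64, dtype=float).reshape(8, 8)
G = np.fft.fftshift(np.fft.fft2(img))
out = mrtd_filter(G, np.ones((8, 8)), 0.9, np.zeros((8, 8)))
```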

2.3.2.

Model of relationship between frequency distribution and temperature difference of scenery

For the “zero”-distance infrared image (L=L0), we can determine the temperature range (Tmin, Tmax) and can calculate the gray level range (Gmin, Gmax). The relationship between temperature and the gray values can be approximated by a linear relationship in a particular temperature range.12 Therefore, the temperature T in the “zero”-distance infrared image is defined as

Eq. (16)

T = \frac{T_{\max} - T_{\min}}{G_{\max} - G_{\min}}\,(G - G_{\min}) + T_{\min},
where G is the pixel gray level. The temperature difference ΔTij of a given pixel (i,j) is defined as the temperature difference between the given point and its neighboring points:

Eq. (17)

\Delta T_{ij} = \frac{1}{9} \sum_{p=-1}^{1} \sum_{q=-1}^{1} \left| T(i,j) - T(i+p,\,j+q) \right|,
where T(i,j) is the temperature at pixel (i,j). The mean temperature difference ΔTavg of the whole image at distance L0 is given by

Eq. (18)

\Delta T_{avg}(f_1) = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \Delta T_{ij},
where m and n are the numbers of pixels of the “zero”-distance infrared image in the horizontal and vertical directions, respectively, and f1 is the highest spatial frequency of the infrared image at L0. The temperature difference between neighboring pixels has the highest frequency at L0; therefore, the average temperature difference of the image at L0 corresponds to the highest frequency f1. Sceneries at different distances have different highest spatial frequencies, each of which is less than f1. For example, at the distance L=2L0, the highest spatial frequency for the scenery on the detector is f2=(1/4)f1, indicating that each pixel on the detector represents the average temperature of a 2×2 block of four pixels in the “zero”-distance infrared image. Therefore, the temperature of a pixel in the infrared image at distance L=2L0 is

Eq. (19)

T'(i,j) = \left[ T(2i-1,\,2j-1) + T(2i-1,\,2j) + T(2i,\,2j-1) + T(2i,\,2j) \right] / 4.

The average temperature difference of the image corresponding to the highest spatial frequency f2 is

Eq. (20)

\Delta T_{avg}(f_2) = \frac{1}{mn/4} \sum_{i=1}^{m/2} \sum_{j=1}^{n/2} \Delta T'_{ij}.

Similarly, we can calculate all the average temperature differences that correspond to the different highest spatial frequencies f3, f4, …, and then draw the fitting curve of ΔTavg(fi) versus fi using the discrete values of the highest spatial frequencies and the average temperature differences. In this work, we used an exponential function to model the relationship between ΔTavg(fi) and fi:

Eq. (21)

\Delta T(f) = a e^{bf} + c e^{df},
where a, b, c, and d are coefficients which are obtained by fitting curves of the relationship between the frequency distribution and the temperature difference of the scenery in the experimental step. Different “zero”-distance images have different coefficients.
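The chain of Eqs. (16)–(20) can be sketched as follows, assuming the absolute-difference reading of Eq. (17) and wrap-around image edges; the resulting (f_i, ΔT_avg(f_i)) pairs would then be fitted with the two-term exponential of Eq. (21), e.g., with MATLAB's `exp2` fit type or `scipy.optimize.curve_fit`:

```python
import numpy as np

def gray_to_temperature(G, T_min, T_max, G_min, G_max):
    """Eq. (16): linear map from gray level G to temperature T."""
    return (T_max - T_min) / (G_max - G_min) * (G - G_min) + T_min

def mean_temp_difference(T):
    """Eqs. (17)-(18): mean absolute 3x3-neighbourhood temperature
    difference over the whole image (edges wrap here for simplicity)."""
    total = np.zeros_like(T, dtype=float)
    for p in (-1, 0, 1):
        for q in (-1, 0, 1):
            total += np.abs(T - np.roll(np.roll(T, p, axis=0), q, axis=1))
    return (total / 9.0).mean()

def halve(T):
    """Eq. (19): each pixel of the next level averages a 2x2 block."""
    m, n = T.shape
    return T.reshape(m // 2, 2, n // 2, 2).mean(axis=(1, 3))

# Hypothetical gray image; successive levels yield dT_avg(f1), dT_avg(f2), ...
T = gray_to_temperature(np.indices((16, 16)).sum(0) % 2 * 255.0,
                        10.0, 30.0, 0.0, 255.0)
levels = [mean_temp_difference(T)]
while T.shape[0] > 2:
    T = halve(T)
    levels.append(mean_temp_difference(T))
print(levels)
```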

2.3.3.

MRTD of the thermal imaging system

The MRTD13 of the thermal imaging system is expressed as

Eq. (22)

\mathrm{MRTD}(f) = \frac{\pi^2}{4}\sqrt{14}\; \frac{\mathrm{NETD}\cdot \mathrm{SNR}_T}{\mathrm{MTF}(f)} \left( \frac{\alpha\beta}{\tau_d\, t_e\, f_p\, \Delta f} \right)^{1/2},
where NETD is the noise equivalent temperature difference, SNR_T is the threshold signal-to-noise ratio, α×β is the instantaneous field of view of the optical system, τd is the dwell time, fp is the frame frequency, te is the integration time of the eye, Δf is the noise equivalent bandwidth, and MTF(f) is the modulation transfer function of the thermal imaging system,13 defined as

Eq. (23)

\mathrm{MTF}(f) = \mathrm{MTF}_o(f) \times \mathrm{MTF}_e(f) \times \mathrm{MTF}_d(f),
where MTFo, MTFe, and MTFd are the modulation transfer functions of the optical system, the electronic circuit, and the detector in the thermal imaging system, respectively. More details about the modulation transfer functions are given in Ref. 13.
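A hedged sketch of Eq. (22) as reconstructed here. The placement of the √14 factor and the α·β value (given below as a placeholder in steradians) are assumptions; the remaining defaults mirror the parameters listed in Sec. 3:

```python
import math

def mrtd(mtf_f, netd=0.08, snr_t=2.8, alpha_beta=1e-6,
         tau_d=0.02, t_e=0.2, f_p=50.0):
    """Evaluate the reconstructed Eq. (22) for a given MTF(f) value.

    Defaults follow Sec. 3: NETD = 0.08 C, SNR_T = 2.8, f_p = 50 Hz,
    t_e = 0.2 s, tau_d = 1/f_p; alpha_beta is a hypothetical placeholder.
    """
    delta_f = math.pi / (4.0 * tau_d)            # noise-equivalent bandwidth
    prefactor = (math.pi ** 2 / 4.0) * math.sqrt(14.0)
    return (prefactor * netd * snr_t / mtf_f
            * math.sqrt(alpha_beta / (tau_d * t_e * f_p * delta_f)))
```

As expected physically, MRTD grows as MTF(f) falls off at high frequency, which is what ultimately blocks fine texture detail in Eq. (13).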

3.

Experimental Results and Discussion

We simulated the infrared image texture of scenery at different distances based on the “zero”-distance image, which was captured by a VarioCAM long-wave thermal imaging system (InfraTec GmbH, Dresden, Germany). The parameters were as follows: resolution = 240×320 pixels, wave band = 7.5 to 14 μm, temperature detection range = −40 to 1200°C, NETD = 0.08°C, SNR_T = 2.8, f_p = 50 Hz, t_e = 0.2 s, τ_d = 1/f_p, and Δf = π/(4τ_d).

Two “zero”-distance images, collected on October 18, 2013, are shown in Figs. 3(a) and 3(b). They were taken at 40 deg north latitude under a cloudy sky; haze limited visibility to 0.5 km, and the atmospheric transmissivity was <0.7. Using the model of the relationship between the frequency distribution and the temperature difference of the scenery, we calculated five typical frequency points and their corresponding average temperature differences. The fitting curves are shown in Figs. 3(c) and 3(d). The coefficients of Eq. (21) for the curves in Figs. 3(c) and 3(d) are, respectively, a1=0.1766, b1=0.8286, c1=0.176, d1=0.03884 and a2=1.816, b2=0.6038, c2=1.118, d2=0.03606.

Fig. 3

(a) and (b) Two “zero”-distance infrared images and (c) and (d) the fitting curves of the relationship between the frequency and the average temperature difference for the images in (a) and (b), respectively.

OE_54_4_043102_f003.png

The infrared image textures shown in Fig. 4 were simulated as follows. First, we set the distance of the simulated image; we assumed it to be 5 m. Second, we applied the spatial frequency filter based on distance to the “zero”-distance infrared image [Fig. 4(a)] using Eq. (3) and downsampled the result; the outcomes are shown in Figs. 4(b) and 4(c) in the frequency and spatial domains, respectively. Finally, we used the infrared texture image filter based on MRTD from Eq. (14) to process the filtered image shown in Fig. 4(c); the results are shown in Figs. 4(d) and 4(e).

Fig. 4

Infrared image texture generation procedure based on a “zero”-distance infrared image at 5 m. (a) “Zero”-distance infrared image, (b) result of filter in frequency domain based on distance, (c) result of filter in spatial-domain based on distance, (d) result of filter in frequency domain based on MRTD, and (e) result of filter in spatial-domain based on MRTD.

OE_54_4_043102_f004.png

We found that the image in Fig. 4(c) is fuzzier and smaller than that in Fig. 4(a), and the image in Fig. 4(e) is fuzzier than that in Fig. 4(c). Some details are attenuated because of the atmospheric transmission effect.

Figure 5 compares the simulated image with the infrared image captured by the thermal imager (the real infrared image) when the subject was 5 m from the imager. To compare the two images directly and analyze the simulation, the simulated image was extended to the whole field of view. The two images [Figs. 5(a) and 5(b)] are subjectively quite similar. The slight discrepancy between them [Fig. 5(c)] is caused mainly by the mismatch of scenery locations in the two images: the object in the captured infrared image is not always exactly centered in the field of view, so its position does not coincide precisely with that in the simulated image.

Fig. 5

Comparison of (a) the image at 5 m captured by the thermal imager, (b) the simulated image, and (c) image showing the discrepancy between (a) and (b).

OE_54_4_043102_f005.png

Figure 6 shows the histograms14 of the infrared image captured by the thermal imager [Fig. 5(a)] and the simulated image [Fig. 5(b)]. Figures 6(a) and 6(b) are the whole histograms and Figs. 6(c) and 6(d) are the histograms in the gray-level range of 0 to 100 for the infrared image and the simulated image, respectively. The histograms in Figs. 6(a) and 6(b) have a peak value between 0 and 255 gray levels. The histograms in Figs. 6(c) and 6(d) show that the infrared image and the simulated image have similar distributions of gray levels.

Fig. 6

(a) Whole histogram of image captured by the thermal imager. (b) Whole histogram of simulated image. (c) Histogram in the gray-level range of 0 to 100 for image captured by the thermal imager. (d) Histogram in the gray-level range of 0 to 100 for the simulated image.

OE_54_4_043102_f006.png

The simulated images and real infrared images at 10, 15, and 20 m are presented in Fig. 7. The details of the simulated images and the real infrared images decrease with increasing imaging distance. The simulated image has a texture similar to that of the real infrared image when the imaging distances of the two images are the same.

Fig. 7

Comparison of real (top panels) and simulated (bottom panels) images at distance (a) and (d) 10 m, (b) and (e) 15 m, and (c) and (f) 20 m.

OE_54_4_043102_f007.png

Figure 8 presents the real infrared image [Fig. 3(b)] and the simulated images of the grass at different distances. The “zero”-distance captured image [Fig. 8(a)] is of a patch of grass 0.6 m wide and 0.45 m high. We used the proposed filter model to process the “zero”-distance infrared image at different distances to obtain the simulated infrared texture images. Because the simulated images must fill the entire field of view, texture matching based on the sample plot was applied to each simulated image; the results are shown in Figs. 8(b)–8(f). As the distance increases, the details gradually become blurrier, reflecting the variation in the scenery's infrared texture detail at different distances.

Fig. 8

(a) The “zero”-distance infrared image captured by the thermal imager. (b)–(f) The simulated images of the grass scenery at 1, 2, 3, 5, and 10 m.

OE_54_4_043102_f008.png

Mean square error (MSE) and peak signal-to-noise ratio (PSNR) are often used as the evaluation indices15 to compare the similarity of two images. In general, if the PSNR>20, there is a strong similarity between the two images.15 The similarity indices of the captured images and simulated images at different distances are presented in Table 1.
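For reference, a common definition of the two indices is sketched below; the assumed peak gray value affects the absolute PSNR numbers:

```python
import numpy as np

def mse_psnr(a, b, peak=255.0):
    """MSE and PSNR similarity indices as used in Table 1 (PSNR in dB).

    `peak` is the assumed maximum gray value; identical images give
    MSE = 0 and an infinite PSNR.
    """
    mse = float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))
    psnr = float('inf') if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```

Smaller MSE and larger PSNR mean closer agreement, which is the trend Table 1 reports with increasing distance.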

Table 1

Indices used to evaluate the similarity of the captured images and the simulated images at different distances.

Distance (m)    Mean square error (MSE)    Peak signal-to-noise ratio (PSNR)
5               53.0952                    71.1044
10              9.8621                     87.9383
15              3.3287                     98.7994
20              0.5087                     117.5848

The results in Table 1 show that when the distance increases, the MSE decreases and the PSNR increases, indicating that the similarities increase as the distance increases. All the PSNR values in this study were greater than 20, so the captured image and simulated images are very similar when the distance is between 5 and 20 m. The small MSE values and the large PSNR values in Table 1 suggest that the proposed filter model has high fidelity and is valid.

This study has one limitation: the proposed model was tested on only two thermal images, those of the person and the grass. We limited the number of images for three reasons. First, the performance of the model in simulating scenery depends on the imaging distance and viewing direction, not on the particular object in the scenery. Second, the experimental images of the person and the grass already show the degradation of the image and the variation in texture detail that occur when the imaging distance changes. They verify that the proposed model is valid when the “zero”-distance infrared image is obtained by shooting perpendicular to the scenery (the grass was shot from above, whereas the person was shot horizontally), but not for sceneries simulated from other viewing directions, so we did not test the model on additional images taken perpendicular to the scenery. Third, capturing additional thermal infrared images at different viewing directions and distances would have required more complex experimental conditions and more equipment; e.g., we may have had to use unmanned drones. When the experimental conditions permit, we will consider capturing more images in future work.

4.

Conclusion

Based on the principle of the multiresolution image pyramid, we proposed a new thermal infrared image texture generation model based on scenery spatial frequency. The model operates on a “zero”-distance infrared image. Two typical sceneries were simulated using the model, and the simulations were compared with the infrared image textures captured by a thermal imager. The experimental results validated the proposed model by showing that it can reflect the features of infrared image texture and the imaging principle at different distances. In conclusion, the proposed model can effectively simulate infrared image textures against a large-scale background and can meet some of the requirements of qualitative analysis. In the future, we will capture and simulate sceneries from different directions and distances and use them to improve the robustness of the proposed model.

Acknowledgments

This research was supported by the National Ministries Pre-research Project under grant No. 110010202.

References

1. M. S. Allili, N. Baaziz, and M. Mejri, “Texture modeling using contourlets and finite mixtures of generalized Gaussian distributions and applications,” IEEE Trans. Multimedia 16(3), 772–784 (2014). http://dx.doi.org/10.1109/TMM.2014.2298832

2. A. Klein et al., “Incorporation of thermal shadows into real-time infrared three-dimensional image generation,” Opt. Eng. 53(5), 053113 (2014). http://dx.doi.org/10.1117/1.OE.53.5.053113

3. X. Zhang, T. Z. Bai, and F. Shang, “Scene classification of infrared images based on texture feature,” Proc. SPIE 7156, 715626 (2009). http://dx.doi.org/10.1117/12.806945

4. X. P. Shao, J. Q. Zhang, and J. Xu, “Study of modeling natural infrared textures,” J. Xi’an Univ. 30(5), 612–617 (2003). http://dx.doi.org/10.3969/j.issn.1001-2400.2003.05.010

5. S. Chen and J. Y. Sun, “IR scene simulation based on visual image,” Infrared Laser Eng. 38(1), 23–30 (2009). http://dx.doi.org/10.3969/j.issn.1007-2276.2009.01.005

6. S. Chen et al., “A new infrared texture generation method,” J. Dalian Marit. Univ. 36(4), 103–106 (2010).

7. J. Bennett and A. Khotanzad, “Modeling texture images using generalized long correlation models,” IEEE Trans. Pattern Anal. Mach. Intell. 20(12), 1365–1370 (1998). http://dx.doi.org/10.1109/34.735810

8. R. Chellappa, S. Chatterjee, and R. Bagdazian, “Texture synthesis and compression using Gaussian-Markov random field models,” IEEE Trans. Syst. Man Cybern. SMC-15(2), 298–303 (1985). http://dx.doi.org/10.1109/TSMC.1985.6313361

9. X. P. Shao et al., “Infrared texture simulation using Gaussian-Markov random fields,” Int. J. Infrared Millimeter Waves 25(11), 1699–1710 (2004). http://dx.doi.org/10.1023/B:IJIM.0000047459.74083.fd

10. X. P. Shao, C. M. Gong, and J. Xu, “Infrared texture simulation using non-parametric random field model,” Proc. SPIE 6787, 67871C (2007). http://dx.doi.org/10.1117/12.749501

11. T. Z. Bai and W. Q. Jin, Principle and Technology of Optoelectronic Imaging System, pp. 509–518, Beijing Institute of Technology Press, Beijing (2006).

12. L. S. Zhang et al., “A radiometric calibration method of low temperature measurement about thermal infrared imager,” Chinese Patent No. 102818636 (2012).

13. T. Z. Bai, “Study of simulation and analogy of electro-optical imaging systems,” pp. 57–75, Beijing Institute of Technology, Beijing (2001).

14. J. H. Chang, K. C. Fan, and Y. L. Chang, “Multi-modal gray-level histogram modelling and decomposition,” Image Vision Comput. 20, 203–216 (2002). http://dx.doi.org/10.1016/S0262-8856(01)00095-6

15. H. N. Li, “The study of digital 3D scene infrared imaging modeling and realization technology,” pp. 93–130, Beijing Institute of Technology, Beijing (2010).

Biography

Hai-He Hu received her BS and MS degrees at the Electronic & Information Engineering School of the Henan University of Science and Technology in 2004 and 2007, respectively. She is now a PhD candidate in optical engineering at Beijing Institute of Technology. Her technical interests include infrared scene simulation, computer graphics, and image processing.

Ting-Zhu Bai received his PhD in 2001 and is currently a professor at the School of Optoelectronics at Beijing Institute of Technology. His major research interests include infrared scene simulation and thermal imaging technology. He is a fellow of SPIE.

Xiao-Xia Qu received her BS degree in optical information science and technology from Wuhan University of Technology in 2009. She is a PhD candidate in optical engineering at Beijing Institute of Technology. From September 2012 to September 2014, she studied at Ghent University (Belgium) as a participant in the joint PhD program between Beijing Institute of Technology and Ghent University. Her research interests include infrared image processing and medical image processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hai-He Hu, Ting-Zhu Bai, and Xiao-Xia Qu "Model of thermal infrared image texture generation based on the scenery space frequency," Optical Engineering 54(4), 043102 (3 April 2015). https://doi.org/10.1117/1.OE.54.4.043102
KEYWORDS: Thermography, Infrared imaging, Infrared radiation, Image filtering, Thermal modeling, Spatial frequencies, Image processing