Imaging Components, Systems, and Processing

Transmission map estimation of weather-degraded images using a hybrid of recurrent fuzzy cerebellar model articulation controller and weighted strategy

Author Affiliations
Jyun-Guo Wang, Shen-Chuan Tai

National Cheng Kung University, Institute of Computer and Communication Engineering, No. 1, University Road, Tainan City 701, Taiwan

Cheng-Jian Lin

National Chin-Yi University of Technology, Department of Computer Science and Information Engineering, No. 57, Sec. 2, Zhongshan Road, Taiping District, Taichung City 41170, Taiwan

Opt. Eng. 55(8), 083104 (Aug 12, 2016). doi:10.1117/1.OE.55.8.083104
History: Received April 19, 2016; Accepted July 20, 2016

Open Access

Abstract.  This study proposes a hybrid of a recurrent fuzzy cerebellar model articulation controller (RFCMAC) and a weighted strategy for restoring visibility in a single degraded image. The proposed RFCMAC model is used to estimate the transmission map. The average value of the brightest 1% of pixels in the hazy image is calculated to estimate the atmospheric light. A new adaptive weighted estimation is then used to refine the transmission map and remove halo artifacts from sharp edges. Experimental results show that the proposed method has better dehazing capability than state-of-the-art techniques and is suitable for real-world applications.


Weather conditions can severely limit visibility in outdoor scenes. Atmospheric phenomena such as fog and haze significantly degrade visibility in the captured scene. Because visibility depends on the atmosphere, the amount of particles suspended in the air, generally water droplets or solid particles, directly affects image visibility and cannot be ignored. Both absorption and scattering of light by particles and gases in the atmosphere reduce visibility, with scattering by particulates causing more serious damage than absorption. As a result, distant objects and parts of the scene are not visible; the image loses contrast and color fidelity, and the visual quality of the scene is reduced. In a visual sense, the quality of the degraded image is unacceptable, so a simple and effective scene recovery method is essential. Image dehazing is a challenging problem, and image recovery technology has attracted the attention of many researchers. The low visibility of hazy images affects the accuracy of computer vision techniques, such as object detection, face tracking, license plate recognition, and satellite imaging, as well as multimedia devices, such as surveillance systems and advanced driver assistance systems. Hence, haze removal techniques are important for improving the visibility of images. Restoring hazy images is a particularly challenging case that requires specific strategies, and widely varying methods have emerged to solve this problem. In recent years, enhancing degraded images has become a fundamental task in many image processing and vision applications. Proposed strategies for enhancing the visibility of a degraded image include the following.

The first type comprises non-model-based methods, such as histogram equalization,1 Retinex theory,2 the wavelet transform,3 and the gamma correction curve.4 The shortcoming of these methods is that they can seriously affect regions that are already clear and preserve color fidelity less effectively.

The second type comprises model-based methods, which depend on a physical model. Compared to non-model-based methods, these methods achieve better dehazing results by modeling scattering and absorption and by using additional information or multiple atmospheric conditions for the input images, such as scene depth,5,6 multiple images,7–9 polarization angles,10,11 and geometry models.12,13 Narasimhan and Nayar7,8 developed an interactive depth map for removing weather effects, but their method had limited effectiveness. Kopf et al.13 presented a novel deep photo system that uses prior knowledge of the scene geometry when browsing and enhancing photos. However, the method required multiple images or additional information to better estimate scattering and absorption, which limited its applications. Hautière et al.12 designed a method that uses weather conditions and the a priori structure of a scene to restore image contrast for vehicle vision systems.

A novel technique developed in Refs. 10, 14, and 15 exploited the partially polarized properties of airlight. The haze effect was estimated by analyzing images of the same scene taken through polarizing filters at different angles. In other words, calculating the difference among these images enabled the magnitude of polarization to be used to estimate the haze light components. Because polarized light is not the major degradation factor, these methods are less robust for scenes with dense haze.

Another recently developed strategy uses a model and a single hazy image as the only input. This approach has recently become a popular way of eliminating image haze through different strategies.16–20 Roughly, these methods can be categorized as contrast-based and statistical approaches. An example of a contrast-based approach is the method of Tan.17 In this case, image restoration maximizes the local contrast while constraining the image intensity to be less than the global atmospheric light value. Tarel and Hautière19 combined a computationally efficient technique with a contrast-based technique; their method assumed that the depth map must be smooth except along edges with large depth jumps. The second category, statistical approaches, includes the technique presented by Fattal,16 which employs a graphical model to resolve the ambiguous atmospheric light color and assumes that the image shading and scene transmission are partially uncorrelated. Based on this assumption, mathematical statistics were used to estimate the albedo of the scene and infer the transmission of the medium. The method provides a physically consistent estimation. However, because the variation of the two functions in Ref. 16 is not always pronounced, the method requires substantial fluctuation of color information and luminance in the hazy scene. He et al.18 developed a statistical approach that observes the dark channel to roughly estimate the transmission map and then refines the final depth map by using a relatively computationally expensive matting strategy.21 In this approach, pixels must be searched through the entire image, which requires a long computation time. Nishino et al.20 used a Bayesian probabilistic formulation that fully leverages the latent statistical structures of the scene to estimate the scene albedo and depth from a single degraded image. A recent study by Gibson and Nguyen22 proposed a new image dehazing method based on the dark channel concept; unlike the previous dark channel method, their method finds the average of the darkest pixels in each ellipsoid. However, the assumption in Ref. 22 may select several inaccurate pixels corresponding to bright objects. Fattal23 derived a local formation model that explains color lines in the context of hazy scenes and used the offsets of these lines to recover the scene transmission. In addition, Ancuti and Ancuti24 proposed a fusion-based strategy in which a white-balanced and a contrast-enhanced version of the original hazy image serve as the two inputs; to keep the most significant detected features, the inputs in the fusion process are weighted by specific computed maps.

Recently, artificial neural networks (ANNs) have been widely used in many different fields. ANN-related research has proved suitable for many areas, such as control,25,26 identification,27,28 pattern recognition,29,30 equalization,31,32 and image processing.33,34 The cerebellar model articulation controller (CMAC) proposed by Albus35,36 is a widely used ANN model. The CMAC imitates the structure and function of the human cerebellum and behaves as a local network. It can be viewed as a basis function network that uses plateau basis functions to compute the output for a given input data point; therefore, only the basis functions assigned to the hypercubes covering the input data are needed. In other words, for a given input vector, only a few of the network nodes (hypercube cells) are active and effectively contribute to the corresponding network output. Thus, the CMAC has good learning and generalization capabilities. However, the CMAC requires a large amount of memory for high-dimensional problems,37,38 is ineffective for online learning systems,39 and has relatively poor function approximation ability.40,41 Another problem is that it is difficult to determine the memory structure, e.g., to adaptively select structural parameters, in the CMAC model.42,43 Recently, several researchers have proposed various solutions for the above problems, including fuzzy membership functions,44 selection of learning parameters,45 topology structure,46 spline functions,47 and fuzzy C-means.48 Embedding fuzzy theory in the CMAC model has been widely discussed, and a fuzzy CMAC called FCMAC49 was proposed; it takes full advantage of fuzzy theory and combines it with the local generalization feature of the CMAC model.49,50 A recurrent network can be embedded in the CMAC model by adding feedback connections with a receptive field cell,51 which provides dynamic characteristics (consideration of past network output information). However, the above-mentioned methods have several drawbacks; for example, the mapping capability of local approximation by hyperplanes is not good enough, and more hypercube cells (rules) are required.

Therefore, this study developed a recurrent fuzzy cerebellar model articulation controller (RFCMAC) model to solve the above problems and to enable applications in a wide variety of fields. A hybrid of the recurrent fuzzy CMAC and a weighted strategy is used to address the image dehazing problem. The proposed method provides high-quality images and effectively suppresses halo artifacts. The advantages of the proposed method are as follows:

  1. The recurrent structure combines the advantages of local and global feedback.
  2. Many studies52,53 have considered only past states in the recurrent structure, which is insufficient without reference to the current states; the proposed method considers the correlation between past and current states.
  3. The proposed method determines the transmission map values and selects the average of the brightest 1% of pixels as the atmospheric light, which increases the accuracy of the atmospheric light estimation.
  4. The proposed method applies a weighted strategy to generate a refining transmission map, thereby removing the halo effect.

The rest of this paper is structured as follows. Section 2 discusses the theoretical background of light propagation in such environments. In Sec. 3, we introduce the proposed RFCMAC and weighted strategy for image dehazing. Section 4 presents the experimental results and compares the proposed approach with other state-of-the-art methods. Finally, conclusions are drawn in Sec. 5.

Generally, a camera used to take outdoor photographs forms an image from the light of the surrounding environment, such as sunlight illuminating a surface and being reflected toward the camera, as shown in Fig. 1. Due to absorption and scattering, light crossing the atmosphere is attenuated and dispersed. In physical terms, the number of suspended particles is low in sunny weather, so the captured image is clear. In contrast, dust and water particles in the air during volatile weather scatter light, which severely degrades image quality. In such degraded circumstances, only 1% of the reflected light reaches the observer, causing poor visibility.54 McCartney55 also noted that haze is an atmospheric phenomenon in which the clear sky is obscured by dust, smoke, and other dry particles. In the captured image, haze generates a distinctive gray hue that reduces visibility. Based on the above, the physical model of a hazy image can be expressed as

I(x) = J(x)t(x) + A[1 − t(x)],  (1)
where I is the observed hazy image and x = (x_1, x_2) denotes the pixel coordinates of the observed RGB colors. In Eq. (1), the hazy model consists of two main components: a direct attenuation term and a veiling light (airlight) term. J(x) is the light reflected from the surfaces, i.e., the haze-free image; t(x) ∈ [0, 1] represents the transmission of the reflected light; and A is the atmospheric light. The first component, J(x)t(x), represents the direct attenuation, or direct transmission, of the scene radiance. That is, the attenuation results from the interaction between the scene radiance and particles during transmission; it corresponds to the light reflected from surfaces in the scene that reaches the camera directly without being scattered. The other component, A[1 − t(x)], expresses the color cast over the scene due to the scattering of atmospheric light. Here t denotes the amount of light transmitted between the surface and the observer. Assuming a homogeneous medium, the transmission is t(x) = e^(−βd(x)), where β is the medium attenuation coefficient and d is the distance between the observer and the considered surface. Since transmission decreases exponentially with depth, this relation yields image depth information without additional sensing devices. Therefore, only the transmission map and the color vector of the atmospheric light are needed to eliminate the haze effect in the image.
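To make the formation model concrete, the following short sketch (an illustration only, not the authors' implementation) synthesizes a hazy observation I from a clear image J, a depth map d, an attenuation coefficient β, and atmospheric light A according to Eq. (1), assuming all image values are normalized to [0, 1]; the function name and the synthetic data are hypothetical.

```python
import numpy as np

def synthesize_haze(J, depth, beta=1.0, A=(0.9, 0.9, 0.9)):
    """Apply the hazy-image model I = J*t + A*(1 - t) with t = exp(-beta * d).

    J     : (H, W, 3) clear image, values in [0, 1]
    depth : (H, W) scene depth map (arbitrary units)
    beta  : medium attenuation coefficient
    A     : atmospheric light per RGB channel
    """
    t = np.exp(-beta * depth)            # transmission t(x) = e^(-beta d(x))
    t = t[..., None]                     # broadcast over the color channels
    A = np.asarray(A).reshape(1, 1, 3)
    return J * t + A * (1.0 - t)         # Eq. (1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = rng.random((4, 4, 3))                          # dummy clear image
    depth = np.linspace(0.0, 3.0, 16).reshape(4, 4)    # dummy depth map
    I = synthesize_haze(J, depth)
    print(I.shape, float(I.min()), float(I.max()))
```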

This section presents the proposed method in detail; it uses the RFCMAC model and a weighted strategy to recover the scene from a hazy image. Figure 2 shows the flowchart of the proposed method, and the details are presented in the following sections.

Fig. 2: Flow diagram of the proposed dehazing algorithm.

Estimation of Transmission Map Features Using RFCMAC Model

The transmission map and the atmospheric light both play important roles in haze removal; a good dehazing method must estimate both appropriately to recover a hazy image. Haze, which is generated by light attenuation, depends on the distribution of particles in the air. According to Eq. (1), both the transmission map and the atmospheric light are important factors, so both must be estimated accurately. This study proposes an RFCMAC model for estimating the transmission map more accurately. The RFCMAC model combines the traditional CMAC model, an interactive feedback mechanism, and a Takagi-Sugeno-Kang (TSK)-type linear function to obtain better solutions. The proposed model adopts an interactive feedback mechanism, which can capture critical information from other hypercube cells. The structure of the RFCMAC and the associated learning algorithm are presented as follows.

Structure of the RFCMAC model

The performance of the proposed RFCMAC model is enhanced by using an interactive feedback mechanism in the temporal layer and a TSK-type linear function in the consequent layer. Figure 3 shows the six-layered structure of the RFCMAC model. The structure realizes fuzzy IF-THEN rules (hypercube cells) of the following form.

Fig. 3: Structure of the RFCMAC model.

Rule j:

IF x_1 is A_1j and x_2 is A_2j and … and x_i is A_ij and … and x_{N_D} is A_{N_D j}
THEN y_j = O_j^(4) (α_0j + Σ_{i=1}^{N_D} α_ij x_i),

where x_i represents the i'th input variable, y_j denotes the local output variable, A_ij is the linguistic term represented by a Gaussian membership function in the antecedent part, O_j^(4) is the output of the interactive feedback, and α_0j + Σ_{i=1}^{N_D} α_ij x_i is the TSK-type linear function of the input variables. The operation of the nodes in each layer of the RFCMAC model is described as follows; in this description, O^(l) represents the output of a node in the l'th layer.

Layer 1 (input layer): This layer receives the input feature vector x = (x_1, x_2, …, x_{N_D}), whose components are crisp values. The layer requires no weight adjustment; each node simply transmits its input value to the next layer. The corresponding outputs are calculated as

O_i^(1) = u_i^(1), and u_i^(1) = x_i.  (2)

Layer 2 (fuzzification layer): This layer performs the fuzzification operation, using a Gaussian membership function to calculate the firing degree in each dimension. The Gaussian membership function is defined as

O_ij^(2) = exp[−(u_i^(2) − m_ij)² / σ_ij²], and u_i^(2) = O_i^(1),  (3)

where m_ij and σ_ij denote the mean and variance of the Gaussian membership function, respectively.

Layer 3 (spatial firing layer): Each node of this layer receives the firing degrees of the associated fuzzy sets from layer 2; all layer-2 outputs are collected here. Each node performs an algebraic product over its inputs to generate the spatial firing strength α_j. This layer also determines the number of hypercube cells in the current iteration. For each inference node, the output is computed as

O_j^(3) = Π_{i=1}^{N_D} u_ij^(3), and u_ij^(3) = O_ij^(2),  (4)

where Π denotes the product operation.

Layer 4 (temporal firing layer): Each node is a recurrent hypercube cell node with an internal feedback (self-loop) and an external interactive feedback loop. The output of a recurrent hypercube cell node depends on both the current spatial firing strength and the previous temporal firing strengths; that is, each node uses information from itself and from the other nodes. Because self-feedback alone is not sufficient to represent all of the necessary information, the proposed model draws on relative information not only from the local source (the node's own feedback) but also from the global source (feedback from other nodes). The linear combination defining the temporal firing strength is

O_j^(4) = Σ_{k=1}^{N_A} [λ_kj^q · O_k^(4)(t−1)] + (1 − γ_j^q) · u_j^(4), and u_j^(4) = O_j^(3),  (5)

where λ_kj^q represents the recurrent weights, which determine the compromise ratio between the current and previous inputs in the network outputs; γ_j^q = Σ_{k=1}^{N_A} λ_kj^q; and λ_kj^q = R_kj^q / N_A (0 ≤ R_kj^q ≤ 1) denotes the interactive weight of a hypercube cell from itself and from other nodes. R_kj^q is a connection weight from the k'th node to the j'th node and is a random value between 0 and 1, and N_A is the number of hypercube cells. Therefore, the compromise ratio between the current and previous inputs lies between 0 and 1.

Layer 5 (consequent layer): Each node in this layer is a function of a linear combination of the input variables. The equation is expressed as

O_j^(5) = O_j^(4) (a_0j + Σ_{i=1}^{N_D} a_ij x_i).  (6)

Layer 6 (output layer): This layer uses the centroid of area (COA) approach to defuzzify the fuzzy output into a scalar output. The actual output y is derived as

y = [Σ_{j=1}^{N_A} O_j^(4) (a_0j + Σ_{i=1}^{N_D} a_ij x_i)] / [Σ_{j=1}^{N_A} O_j^(4)].  (7)
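The data flow through the six layers can be summarized in a short forward-pass sketch of Eqs. (2)–(7). The array shapes, the handling of the previous temporal firing strengths as an explicit argument, and the helper name rfcmac_forward are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def rfcmac_forward(x, m, sigma, lam, a, prev_O4):
    """One forward pass of the RFCMAC model, Eqs. (2)-(7) (illustrative sketch).

    x       : (ND,) crisp input vector (layer 1)
    m, sigma: (ND, NA) means / variances of the Gaussian receptive fields (layer 2)
    lam     : (NA, NA) interactive recurrent weights lambda_kj (layer 4)
    a       : (NA, ND + 1) TSK coefficients [a_0j, a_1j, ..., a_NDj] (layer 5)
    prev_O4 : (NA,) temporal firing strengths O_j^(4) from the previous time step
    """
    # Layer 2: Gaussian membership degrees O_ij^(2), Eq. (3).
    O2 = np.exp(-((x[:, None] - m) ** 2) / (sigma ** 2))
    # Layer 3: spatial firing strength alpha_j (product over input dimensions), Eq. (4).
    O3 = np.prod(O2, axis=0)
    # Layer 4: temporal firing strength, Eq. (5).
    gamma = lam.sum(axis=0)                 # gamma_j = sum_k lambda_kj
    O4 = lam.T @ prev_O4 + (1.0 - gamma) * O3
    # Layer 5: TSK consequent, Eq. (6).
    f = a[:, 0] + a[:, 1:] @ x              # a_0j + sum_i a_ij x_i
    O5 = O4 * f
    # Layer 6: defuzzified output, Eq. (7).
    y = O5.sum() / (O4.sum() + 1e-12)
    return y, O4

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ND, NA = 3, 4
    y, O4 = rfcmac_forward(
        x=rng.random(ND),
        m=rng.random((ND, NA)), sigma=np.full((ND, NA), 0.5),
        lam=rng.random((NA, NA)) / NA, a=rng.random((NA, ND + 1)),
        prev_O4=np.zeros(NA))
    print(y, O4)
```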

Learning algorithm of the RFCMAC model

The proposed learning algorithm combines structure learning and parameter learning when constructing the RFCMAC model. Figure 4 shows a flowchart of the proposed learning algorithm. First, the self-constructing partition of the input space in structure learning is based on a degree measure that appropriately captures the distribution of the input training data; that is, the firing strength is used to decide whether a new fuzzy hypercube cell (rule) should be added to satisfy the fuzzy partitioning of the input variables. Second, the parameter learning procedure uses the backpropagation algorithm, minimizing a given cost function to adjust the parameters. The RFCMAC model initially has no hypercube cell nodes other than the input and output nodes; as online training data arrive, the structure and parameter learning processes automatically create the nodes from layer 2 to layer 5. The parameters R_kj^q and a_ij of the initial model are randomly generated between 0 and 1.

Fig. 4: Flowchart of the proposed structure and parameter learning.

Structure learning algorithm

Generally, the main purpose of structure learning is to determine whether a new hypercube cell should be extracted from the training data. For each incoming pattern x_i, the firing strength in the spatial firing layer can be interpreted as the degree to which the incoming pattern belongs to the corresponding cluster. An entropy measure is used to estimate the distance between each data point and each membership function; the entropy values between the data point and the current membership functions are computed to decide whether to add a new hypercube cell. The entropy measure is calculated from the firing strengths u_ij^(3) as

EM_j = Σ_{i=1}^{N_D} −D_ij log₂ D_ij,  (8)

where D_ij = exp(u_ij^(2) − 1) and EM_j ∈ [0, 1]. Based on Eq. (9), this degree measure is used as the criterion for generating a new hypercube cell for new incoming data x = (x_1, x_2, …, x_{N_D}). The maximum entropy measure is calculated as

EM_max = max_{1≤j≤N_L} EM_j,  (9)

where N_L is the number of hypercube cells and EM̄ ∈ [0, 1] is a prespecified threshold. To limit the number of hypercube cells in the proposed RFCMAC model, the threshold value decays during the learning process. A low threshold leads to the learning of coarse clusters (i.e., few hypercube cells are generated), whereas a high threshold leads to the learning of fine clusters (i.e., many hypercube cells are generated). Therefore, the selection of the threshold EM̄ critically affects the simulation results; EM̄ determines whether a proper new hypercube cell is generated. If EM_max ≤ EM̄, a new hypercube cell is generated; otherwise, no hypercube cell is added.
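A minimal sketch of this rule-generation test, written directly from Eqs. (8) and (9) and the threshold criterion above, is shown below; the function name and array layout are assumptions, and the inequality direction follows the textual description (a higher threshold generates more cells).

```python
import numpy as np

def maybe_add_hypercube(O2, em_threshold):
    """Entropy-based rule-generation check, Eqs. (8)-(9) (illustrative sketch).

    O2           : (ND, NL) membership degrees u_ij^(2) of the current data point
                   for the NL existing hypercube cells
    em_threshold : prespecified threshold EM_bar in [0, 1]
    Returns True when a new hypercube cell should be generated.
    """
    D = np.exp(O2 - 1.0)                    # D_ij = exp(u_ij^(2) - 1)
    EM = -(D * np.log2(D)).sum(axis=0)      # EM_j, Eq. (8)
    EM_max = EM.max() if EM.size else 0.0   # Eq. (9); no cells yet -> always add
    return EM_max <= em_threshold
```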

Parameter learning algorithm

Five parameters of the model enter the learning algorithm and are optimized based on the training data. Parameter learning occurs concurrently with structure learning. For each piece of incoming data, the five parameters (m_ij, σ_ij, a_0j, a_ij, and λ_kj^q) of the RFCMAC model are tuned, whether the hypercube cells are newly generated or already exist. The gradient descent method is used to adjust the parameters of the receptive field functions and the TSK-type function. To clarify, consider the single-output case. The cost function E to be minimized is

E(t) = (1/2)[y_d(t) − y(t)]²,  (10)

where y_d(t) denotes the desired output and y(t) is the model output at each discrete time t. In each training cycle, y(t) is calculated by a feed-forward pass from the input variables through the model. According to Eq. (10), the error is used to adjust the weight vector of the proposed RFCMAC model over a given number of training cycles. The well-known backpropagation learning rule can be written as

W(t+1) = W(t) + ΔW(t) = W(t) + [−η ∂E(t)/∂W(t)],  (11)

where η and W represent the learning rate and the free parameters, respectively. η sets the pace of the search: a low value may lead to a local optimum, whereas a high value leads to premature convergence and may miss a better solution. Therefore, the initial settings of the threshold and η are based on empirical estimation. According to Eq. (10), the gradient with respect to an arbitrary weight W is

∂E(t)/∂W = −e(t) ∂y(t)/∂W.  (12)
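The following sketch illustrates the mechanics of the update rule in Eqs. (10) and (11) only; it approximates ∂E/∂W by central finite differences as a stand-in for the analytic chain-rule gradients derived in Eqs. (14)–(20), and the toy linear model in the usage example is purely hypothetical.

```python
import numpy as np

def gradient_step(params, model_output, y_desired, x, eta=0.05, eps=1e-6):
    """One step of W(t+1) = W(t) - eta * dE/dW for a flat parameter vector.

    params       : (P,) current free parameters W(t)
    model_output : callable(params, x) -> scalar model output y(t)
    y_desired    : scalar desired output y_d(t)
    A central finite difference stands in for the analytic gradients of
    Eqs. (14)-(20).
    """
    def cost(p):
        return 0.5 * (y_desired - model_output(p, x)) ** 2   # Eq. (10)

    grad = np.zeros_like(params)
    for k in range(params.size):
        d = np.zeros_like(params)
        d[k] = eps
        grad[k] = (cost(params + d) - cost(params - d)) / (2 * eps)
    return params - eta * grad                                # Eq. (11)

if __name__ == "__main__":
    model = lambda p, x: p @ x                # toy linear model, not the RFCMAC
    p, x = np.zeros(3), np.array([1.0, 2.0, 3.0])
    for _ in range(200):
        p = gradient_step(p, model, y_desired=1.0, x=x)
    print(p, model(p, x))                     # output approaches the target 1.0
```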

The corresponding antecedent and consequent parameters of the RFCMAC model are then adjusted by applying the chain rule recursively to the error term. With the RFCMAC model and the cost function defined in Eq. (10), the update rule for a_ij is

a_ij(t+1) = a_ij(t) + Δa_ij(t),  (13)

where

Δa_ij(t) = −η ∂E/∂a_ij = −η (∂E/∂y)(∂y/∂O_j^(5))(∂O_j^(5)/∂a_ij).  (14)

The equations used to update the recurrent weight parameter λ_kj^q of a cell are

λ_kj^q(t+1) = λ_kj^q(t) + Δλ_kj^q(t),  (15)

where

Δλ_kj^q(t) = −η ∂E/∂λ_kj^q = −η (∂E/∂y)(∂y/∂O_j^(5))(∂O_j^(5)/∂O_j^(4))(∂O_j^(4)/∂λ_kj^q)
= η · e · {[(a_0j + Σ_{i=1}^{N_D} a_ij x_i) Σ_{j=1}^{N_L} O_j^(4) − Σ_{j=1}^{N_L} O_j^(4)(a_0j + Σ_{i=1}^{N_D} a_ij x_i)] / (Σ_{j=1}^{N_L} O_j^(4))²} · [O_j^(4)(t−1) − α_j],  (16)

where η represents the learning rate of the recurrent weights λ for the fuzzy weight functions and is set between 0 and 1, and e denotes the error between the desired output and the actual output, i.e., e = y_d − y.

m_ij and σ_ij represent the mean and variance of the receptive field functions, respectively. These adjustable parameters are updated by

m_ij(t+1) = m_ij(t) + Δm_ij(t),  (17)

and

σ_ij(t+1) = σ_ij(t) + Δσ_ij(t),  (18)

where

Δm_ij = −η (∂E/∂y)(∂y/∂O_j^(5))(∂O_j^(5)/∂O_j^(4))(∂O_j^(4)/∂O_j^(3))(∂O_j^(3)/∂O_ij^(2))(∂O_ij^(2)/∂m_ij)
= η · e · {[(a_0j + Σ_{i=1}^{N_D} a_ij x_i) Σ_{j=1}^{N_L} O_j^(4) − Σ_{j=1}^{N_L} O_j^(4)(a_0j + Σ_{i=1}^{N_D} a_ij x_i)] / (Σ_{j=1}^{N_L} O_j^(4))²} · (1 − γ_j^q) · α_j · 2(u_i^(1) − m_ij) / σ_ij²,  (19)

and

Δσ_ij = −η (∂E/∂y)(∂y/∂O_j^(5))(∂O_j^(5)/∂O_j^(4))(∂O_j^(4)/∂O_j^(3))(∂O_j^(3)/∂O_ij^(2))(∂O_ij^(2)/∂σ_ij)
= η · e · {[(a_0j + Σ_{i=1}^{N_D} a_ij x_i) Σ_{j=1}^{N_L} O_j^(4) − Σ_{j=1}^{N_L} O_j^(4)(a_0j + Σ_{i=1}^{N_D} a_ij x_i)] / (Σ_{j=1}^{N_L} O_j^(4))²} · (1 − γ_j^q) · α_j · 2(u_i^(1) − m_ij)² / σ_ij³,  (20)

where i denotes the i'th input dimension (i = 1, 2, …, N_D) and j denotes the j'th hypercube cell.

Weighted Strategy for Adaptively Refining the Transmission Map

In the real world, the transmission is not always constant within a window, especially around the contour of an object. In these nonconstant regions, the recovered scene exhibits halos and block artifacts. The proposed solution is to use a pixel-window ratio (PWR) to detect the regions of the recovered scene where halo artifacts are likely and to use an adaptive weighting technique to mitigate them. The PWR is defined as the ratio between the transmission estimated at the pixel itself and the transmission estimated over a 7×7 window:

PWR = PTM / WTM,  (21)

where the numerator is the pixel transmission map (PTM), obtained from the minimum RGB color channel with a 1×1 mask, and the denominator is the window transmission map (WTM), obtained with a 7×7 mask. A PWR value very close to 1 means that the transmission within the window is nearly constant; in that case no halo occurs, but the relative color saturation of the image is very high. In contrast, a PWR value far greater than 1 means that the transmission within the window is not constant and a halo artifact will occur, although excessive color saturation is then not a problem. Although the halo artifact regions can be found from the PWR value, the main problem is how to mitigate the artifacts in these regions. The proposed solution is a weighted strategy that refines the transmission map and mitigates the halo artifact. The weighted strategy is defined as follows:
t = { ω × [(1 − α^PWR) × PTM + α^PWR × WTM],   if PWR > T_upper;
      ω × [(1 − β^PWR) × PTM + β^PWR × WTM],   if T_lower < PWR ≤ T_upper;
      ω × WTM,                                 otherwise,   (22)

where α and β are the weighting factors for mitigating the artifacts, with 0 < α < β < 1. In Eq. (22), if the PWR value is greater than T_upper, the pixel transmission differs greatly from the WTM, so the weight given to the WTM is decreased and the weight given to the PTM is increased; this situation requires a very small weighting factor α to adjust the transmission rapidly so that the halo artifact is eliminated. If the PWR value lies between T_lower and T_upper, the transmission differs only slightly from the WTM; in this situation the weighting factor β, which is greater than α, is applied to adjust the transmission smoothly. Otherwise, the WTM value is used directly as the estimate. The values of α and β are based on a computational analysis of the intensity values associated with the halos. Figure 5(a) shows the original hazy image, and Figs. 5(b)–5(j) show the results for different values of α and β. Based on this analysis, the weighting factors α and β are set appropriately to improve the quality of the dehazing.

Fig. 5: (a) Original haze image; (b)–(j) the results using different α and β values, where (b) α=0.1, β=0.1; (c) α=0.1, β=0.5; (d) α=0.1, β=0.9; (e) α=0.5, β=0.1; (f) α=0.5, β=0.5; (g) α=0.5, β=0.9; (h) α=0.9, β=0.1; (i) α=0.9, β=0.5; and (j) α=0.9, β=0.9.
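A compact sketch of the refinement step is given below. It transcribes Eqs. (21) and (22) literally; the interpretation of PTM as the per-pixel minimum color channel and WTM as its 7×7 windowed minimum follows the textual description, while the threshold values T_upper and T_lower, the factor ω, and the default α and β are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def weighted_refinement(I, alpha=0.1, beta=0.5, t_upper=1.5, t_lower=1.1,
                        omega=0.95, win=7):
    """Per-pixel PWR test and weighted blending, Eqs. (21)-(22) (sketch).

    I is an (H, W, 3) hazy image in [0, 1]. PTM is taken as the per-pixel
    minimum color channel (1x1 mask) and WTM as its win x win windowed
    minimum; thresholds, omega, and the min-channel reading are assumptions.
    """
    PTM = I.min(axis=2)                          # 1x1 minimum channel
    WTM = minimum_filter(PTM, size=win)          # 7x7 window transmission map
    PWR = PTM / np.maximum(WTM, 1e-6)            # Eq. (21)

    w_hi = alpha ** PWR                          # weight on WTM when PWR is large
    w_mid = beta ** PWR
    t = np.where(PWR > t_upper,
                 omega * ((1 - w_hi) * PTM + w_hi * WTM),
                 np.where(PWR > t_lower,
                          omega * ((1 - w_mid) * PTM + w_mid * WTM),
                          omega * WTM))          # Eq. (22)
    return t, PWR
```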

Atmospheric Light Estimation

The atmospheric light factor must be selected carefully for effective image dehazing; an incorrectly selected atmospheric light yields very poor dehazing results. In some situations, bright objects are mistaken for atmospheric light, which results in erroneous image restoration. To solve this problem, the average value of the brightest 1% of pixels, identified through the transmission t, is used to refine the atmospheric light level. The average value is calculated as

A^c = (Σ_{x∈P} I^c(x)) / |P|,  (23)

where P denotes the set of selected brightest pixels, |P| is the number of selected pixels, A is the atmospheric light, and c is the color channel. Figure 6 shows the results of scene radiance recovery.
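The averaging in Eq. (23) can be sketched as follows. Ranking pixels by gray-level intensity and taking the mean color of the top 1% is one plausible reading of the description above (and of the abstract); the function name and the ranking rule are assumptions, not the authors' exact procedure.

```python
import numpy as np

def estimate_atmospheric_light(I, top_fraction=0.01):
    """Average color of the brightest 1% of pixels (illustrative reading of Eq. (23)).

    I : (H, W, 3) hazy image in [0, 1]. Pixels are ranked by gray-level
    intensity; the mean RGB value of the top 1% is returned as A^c.
    """
    intensity = I.mean(axis=2)
    n = max(1, int(top_fraction * intensity.size))
    idx = np.argsort(intensity.ravel())[-n:]     # indices of the brightest pixels
    return I.reshape(-1, 3)[idx].mean(axis=0)    # one atmospheric light value per channel
```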

Fig. 6: Estimation using an average value: (a) original image; (b) estimate of the transmission map; (c) image of atmospheric light; and (d) scene radiance recovery.

Image Recovery

This section describes how the atmospheric light and the transmission features obtained in Secs. 3.2 and 3.3 are used as inputs for scene recovery. The scene radiance recovery step inverts Eq. (1), giving Eq. (24), to obtain the dehazed image. The scene J is recovered as

J(x) = [I(x) − A] / max[t_0, t(x)] + A,  (24)

where t_0 is the lower bound on the transmission and is set to 0.15. Keeping a small amount of haze in the recovered image makes it look more natural.
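The recovery step of Eq. (24) is a direct per-pixel operation; a minimal sketch is shown below. Clipping the result to [0, 1] is only for display and is not part of Eq. (24).

```python
import numpy as np

def recover_scene(I, t, A, t0=0.15):
    """Scene radiance recovery, Eq. (24).

    I  : (H, W, 3) hazy image in [0, 1]
    t  : (H, W) refined transmission map
    A  : (3,) atmospheric light per channel
    t0 : lower bound on the transmission (0.15 in the paper)
    """
    t_clipped = np.maximum(t, t0)[..., None]
    J = (I - A) / t_clipped + A              # Eq. (24)
    return np.clip(J, 0.0, 1.0)              # keep the result displayable
```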

The experiments were performed in the C language on an Intel i7-3770 CPU at 3.20 GHz. The effectiveness and robustness of the proposed method were verified by testing several hazy images, namely, "New York," "ny12," "ny17," "y01," and "y16." The proposed approach was also compared with other well-known haze removal methods.13,16,17–20,24 Performance testing was divided into three parts: (1) results of removing the halo, (2) assessment of the visual images, and (3) analysis of the quantitative measurements.

Results of Removing the Halo

Figure 7 shows the results of removing the halo for different images. In Fig. 7(a), the transmission map is estimated from the input hazy image using a 7×7 patch. Although the dehazing results are good, some block effects (halo artifacts) are visible in the blue boxes in Fig. 7(a); this phenomenon occurs because the transmission is not always constant within a patch. In Fig. 7(b), the halo artifacts in the red boxes are suppressed; that is, the halo artifacts are removed by the proposed method.

Fig. 7: Removal of halo artifacts for different images. (a) Halo artifacts and (b) removal of the halo artifacts.

Estimation of the Visual Image

Figure 8 shows the comparison results. The dehazing results obtained by the proposed method are better than those of Fattal,16 Tarel and Hautière,19 and Ancuti and Ancuti.24 Additionally, Schechner and Averbuch14 adopted a multi-image polarization-based dehazing method that employs the worst and best polarization states among the available image versions. For comparison with the method of Schechner and Averbuch,14 we processed only one of the inputs used in that study.14 The dehazing results obtained by the proposed method are superior to those of Schechner and Averbuch.14

Fig. 8: Comparison of dehazing results using various methods.

Figures 9 and 10 also show comparisons between the proposed approach and other state-of-the-art methods. Figure 9 shows that, compared with the techniques developed by Tan17 and by Tarel and Hautière,19 the proposed method preserves the fine transitions in the hazy regions and does not generate unpleasant artifacts. Moreover, the techniques of Tan17 and of Tarel and Hautière19 produce oversaturated colors. Although the technique developed by Fattal16 obtains good dehazing results, its applicability is limited in dense haze; the poor performance mainly results from its statistical analysis, which needs to estimate the variance of the depth map. The technique of Kopf et al.13 obtains good color contrast, but only a little detailed texture is presented in the image. The technique of He et al.18 shows obvious color differences in some regions. The technique developed by Nishino et al.20 yields aesthetically pleasing results, but some artifacts are introduced in regions considered to be at infinite depth. The method developed by Ancuti and Ancuti24 obtains a natural image, but color differences are visible in some regions, such as around objects. The proposed method effectively removes haze, halation, and color cast.

Fig. 9: Comparison of dehazing techniques for city scene images: (a) ny12 and (b) ny17.

Fig. 10: Comparison of dehazing techniques for mountain scene images: (a) y01 and (b) y16.

An image of a mountain was also used for comparison with other state-of-the-art methods. Figure 10 shows the dehazing results of the various methods. The comparisons show that the Tan method17 produces oversaturation, which causes color differences and halo artifacts. Good color contrast is obtained by the Fattal16 method, but some differences in detailed texture and color are visible. The results of Kopf et al.13 are similar to those of Fattal.16 Although the method of Tarel and Hautière19 recovers detailed texture well, it generates color differences. Because of color differences caused by oversaturation, the results obtained by the He et al.18 method look unnatural. The technique developed by Nishino et al.20 produces a good overall image, but an unnatural appearance is visible in the clouds in the sky. The technique of Ancuti and Ancuti24 performs well in terms of true color contrast; however, a slightly unnatural appearance still occurs around the sky. Overall, the results obtained by the proposed method are superior to those of the other methods.

Quantitative Measurement Results

A real-world quantitative analysis of image restoration is difficult to implement because no validated standard reference image is available. Therefore, to demonstrate the effectiveness of the proposed algorithm relative to other image dehazing methods, namely those of Tan,17 Fattal,16 Kopf et al.,13 Tarel and Hautière,19 He et al.,18 Nishino et al.,20 and Ancuti and Ancuti,24 this study employs two well-known quantitative metrics: the S-CIELAB assessment of Zhang and Wandell56 and the blind measure of Hautière et al.57

The S-CIELAB metric56 is used to estimate color fidelity in visual images because it incorporates the spatial color sensitivity of the eye and evaluates the color contrast between the restored image and the original image; it therefore yields accurate predictions. The color contrast is proportional to the S-CIELAB value: a small S-CIELAB value indicates a small color contrast, and a large value indicates a large color contrast. Table 1 shows the color contrast obtained with the various methods.

Table 1: Estimation results of color contrast using various methods.

The blind measure57 calculates ratios between the image gradients before and after restoration. This calculation is based on the concept of visibility, which is commonly used in lighting engineering. This study considers four images, named ny12, ny17, y01, and y16, for this discussion. The indicator e represents the edges newly visible after restoration, and the indicator r̄ represents the mean ratio of the gradients at visible edges. The blind measure is calculated as follows:

e = (n_r − n_o) / n_o,  (25)

where n_r and n_o are the numbers of visible edges in the restored image and the original image, respectively, and

r̄ = exp[(1/n_r) Σ_{P_i∈r} log r_i],  (26)

where r is the set of visible edges in the restored image, P_i is the i'th element of the set r, and r_i denotes the corresponding ratio between the gradients of the original image and the restored image.
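For reference, a rough numerical sketch of the two indicators is given below. A fixed gradient-magnitude threshold stands in for the visible-edge criterion of Ref. 57, so the resulting numbers are only indicative of the definitions in Eqs. (25) and (26), not the metric as published.

```python
import numpy as np

def blind_measure(original, restored, edge_thresh=0.05):
    """Rough sketch of the indicators e and r_bar, Eqs. (25)-(26).

    Both inputs are (H, W) gray-level images in [0, 1]. A fixed gradient
    threshold replaces the visible-edge criterion of Ref. 57, so the values
    are only illustrative.
    """
    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    g_o, g_r = grad_mag(original), grad_mag(restored)
    vis_o, vis_r = g_o > edge_thresh, g_r > edge_thresh
    n_o, n_r = int(vis_o.sum()), int(vis_r.sum())
    e = (n_r - n_o) / max(n_o, 1)                            # Eq. (25)
    ratios = g_r[vis_r] / np.maximum(g_o[vis_r], 1e-6)       # gradient ratios at visible edges
    r_bar = float(np.exp(np.log(ratios).mean())) if ratios.size else 0.0   # Eq. (26)
    return e, r_bar
```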

Table 2 shows the performance of the different algorithms in terms of e and r̄. In Table 2, the number of edges newly visible after restoration (i.e., the e value) of the proposed method is larger than those of the other methods,13,16–18,20 whereas the r̄ value of the proposed method is smaller than those of the Tan17 and Tarel and Hautière19 methods. However, the visual comparisons show that both of those methods (Refs. 17 and 19) exhibit oversaturation and excessive color contrast.

Table 2: Performance of different algorithms with e and r̄.

The computation time of the proposed method was also compared with that of other state-of-the-art techniques. For this comparison, test images with an average size of 600×800 were used. The proposed method requires 4.5 s, the method of Tan17 needs more than 45 s, Fattal16 requires 35 s, the technique of Tarel and Hautière19 needs 8 s, and He et al.18 requires 20 s. Therefore, the proposed method has the shortest computation time.

Based on the analysis and comparisons in Secs. 4.1–4.3, the proposed hybrid of the RFCMAC model and the weighted strategy is an efficient solution that removes halos, enhances color contrast, and reduces computation time.

The hybrid of the RFCMAC model and the weighted strategy developed in this study effectively restores hazy and foggy images. The proposed RFCMAC model estimates the transmission map, and the average of the brightest 1% of pixels is selected as the atmospheric light. An adaptive weighted strategy is then applied to generate a refined transmission map that removes the halo effect. Experimental results demonstrate the superiority of the proposed method in enhancing color contrast, balancing color saturation, removing halo artifacts, and reducing computation time.

References

Stark  J. A., “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. Image Process.. 9, (5 ), 889 –896 (2000). 1057-7149 CrossRef
Rahman  Z., , Jobson  D. J., and Woodell  G. A., “Retinex processing for automatic image enhancement,” J. Electron. Imaging. 13, (1 ), 100 –110 (2004).CrossRef
Scheunders  P., “A multivalued image wavelet representation based on multiscale fundamental forms,” IEEE Trans. Image Process.. 11, (5 ), 568 –575 (2002). 1057-7149 CrossRef
Ancuti  C. O.  et al., “A fast semi-inverse approach to detect and remove the haze from a single image,” in  Proc. of the Asian Conf. on Computer Vision , pp. 501 –514 (2010).
Oakley  J. P., and Satherley  B. L., “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process.. 7, (2 ), 167 –179 (1998). 1057-7149 CrossRef
Tan  K. K., and Oakley  J. P., “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A. 18, (10 ), 2460 –2467 (2001).CrossRef
Narasimhan  S. G., and Nayar  S. K., “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell.. 25, (6 ), 713 –724 (2003). 0162-8828 CrossRef
Schechner  Y. Y., , Narasimhan  S. G., and Nayar  S. K., “Polarization based vision through haze,” Appl. Opt.. 42, (3 ), 511 –525 (2003). 0003-6935 CrossRef
Pandian  P. S., , Kumaravel  M., and Singh  M., “Multilayer imaging and compositional analysis of human male breast by laser reflectometry and Monte Carlo simulation,” Med. Biol. Eng. Comput.. 47, (11 ), 1197 –1206 (2009). 0140-0118 CrossRef
Shwartz  S., , Namer  E., and Schechner  Y., “Blind haze separation,” in  2006 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’06) , pp. 1984 –1991 (2006).CrossRef
Schechner  Y., , Narasimhan  S., and Nayar  S., “Instant dehazing of images using polarization,” in  Proc. of the 2001 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’01) , pp. 325 –332 (2001).CrossRef
Hautière  N., , Tarel  J. P., and Aubert  D., “Towards fog-free in-vehicle vision systems through contrast restoration,” in  IEEE Conf. on Computer Vision and Pattern Recognition , pp. 1 –8 (2007).CrossRef
Kopf  J.  et al., “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph.. 27, (5 ), 1 –10 (2008). 0730-0301 CrossRef
Schechner  Y., and Averbuch  Y., “Regularized image recovery in scattering media,” IEEE Trans. Pattern Anal. Mach. Intell.. 29, (9 ), 1655 –1660 (2007). 0162-8828 CrossRef
Namer  E., , Shwartz  S., and Schechner  Y., “Skyless polarimetric calibration and visibility enhancement,” Opt. Express. 17, (2 ), 472 –493 (2009). 1094-4087 CrossRef
Fattal  R., “Single image dehazing,” ACM Trans. Graph.. 27, (3 ) (2008).CrossRef
Tan  R. T., “Visibility in bad weather from a single image,” in  Proc. IEEE Conf. Computer Vision and Pattern Recognition , pp. 1 –8 (2008).CrossRef
He  K., , Sun  J., and Tang  X., “Single image haze removal using dark channel prior,” in  Proc. IEEE Conf. Computer Vision and Pattern Recognition , pp. 1956 –1963 (2009).CrossRef
Tarel  J. P., and Hautiere  N., “Fast visibility restoration from a single color or gray level image,” in  Proc. IEEE Int. Conf. Computer Vision , pp. 2201 –2208 (2009).CrossRef
Nishino  K., , Kratz  L., and Lombardi  S., “Bayesian defogging,” Int. J. Comput. Vision. 98, (3 ), 263 –278 (2012). 0920-5691 CrossRef
Levin  A., , Lischinski  D., and Weiss  Y., “A closed form solution to natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell.. 30, (2 ), 228 –242 (2008). 0162-8828 CrossRef
Gibson  K., and Nguyen  T., “An analysis of single image defogging methods using a color ellipsoid framework,” EURASIP J. Image Video Process.. 2013, (37 ) (2013).CrossRef
Fattal  R., “Dehazing using color-lines,” ACM Trans. Graph.. 34, (1 ), 13  (2014). 0730-0301 CrossRef
Ancuti  C. O., and Ancuti  C., “Single image dehazing by multi-scale fusion,” IEEE Trans. Image Process.. 22, (8 ), 3271 –3282 (2013). 1057-7149 CrossRef
Xianzhong  C., and Shin  K. G., “Direct control and coordination using neural networks,” IEEE Trans. Syst., Man, Cybern.. 23, (3 ), 686 –697 (1993). 0018-9472 CrossRef
Wu  S., and Wong  K. Y. M., “Dynamic overload control for distributed call processors using the neural network method,” IEEE Trans. Neural Networks. 9, (6 ), 1377 –1387 (1998). 1045-9227 CrossRef
Yamada  T., and Yabuta  T., “Dynamic system identification using neural networks,” IEEE Trans. Syst., Man, Cybern.. 23, (1 ), 204 –211 (1993).CrossRef
Lu  S., and Basar  T., “Robust nonlinear system identification using neural-network models,” IEEE Trans. Neural Networks. 9, (3 ), 407 –429 (1998). 1045-9227 CrossRef
Perez  C.A.  et al., “Linear versus nonlinear neural modeling for 2-D pattern recognition,” IEEE Trans. Syst., Man, Cybern. A. 35, (6 ), 955 –964 (2005).CrossRef
Oong  T. H., and Isa  N. A. M., “Adaptive evolutionary artificial neural networks for pattern classification,” IEEE Trans. Neural Networks. 22, (11 ), 1823 –1836 (2011). 1045-9227 CrossRef
Nair  S. K., and Moon  J., “Data storage channel equalization using neural networks,” IEEE Trans. Neural Networks. 8, (5 ), 1037 –1048 (1997). 1045-9227 CrossRef
You  C., and Hong  D., “Nonlinear blind equalization schemes using complex-valued multilayer feedforward neural networks,” IEEE Trans. Neural Networks. 9, (6 ), 1442 –1455 (1998). 1045-9227 CrossRef
Yang  Y. S.  et al., “Automatic identification of human helminth eggs on microscopic fecal specimens using digital image processing and an artificial neural network,” IEEE Trans. Biomed. Eng.. 48, (6 ), 718 –730 (2001). 0018-9294 CrossRef
Ma  L., and Khorasani  K., “Facial expression recognition using constructive feedforward neural networks,” IEEE Trans. Syst., Man, Cybern. B. 34, (3 ), 1588 –1595 (2004).CrossRef
Albus  J. S., “A new approach to manipulator control: the cerebellar model articulation controller (CMAC),” J. Dyn. Syst., Meas., Contr.. 97, (3 ), 220 –227 (1975).CrossRef
Albus  J. S., “Data storage in the cerebellar model articulation controller (CMAC),” J. Dyn. Syst., Meas., Contr.. 97, (3), 228 –233 (1975).CrossRef
Lee  Z. J., , Wang  Y. P., and Su  S. F., “A genetic algorithm based robust learning credit assignment cerebellar model articulation controller,” Appl. Soft Comput.. 4, (4 ), 357 –367 (2004).CrossRef
Leu  Y. G.  et al., “Compact cerebellar model articulation controller for ultrasonic motors,” Int. J. Innovative Comput., Inf. Control. 6, (12 ), 5539 –5552 (2010).
Su  S. F., , Ted  T., and Huang  T. H., “Credit assigned CMAC and its application to online learning robust controllers,” IEEE Trans. Syst., Man, Cybern., B. 33, (2 ), 202 –213 (2003).CrossRef
Wu  J., and Pratt  F., “Self-organizing CMAC neural networks and adaptive dynamic control,” in  Proc. of the 1999 IEEE Int. Symp. on Intelligent Control/Intelligent Systems and Semiotics , pp. 259 –265 (1999).CrossRef
Commuri  S., and Lewis  F. L., “CMAC neural networks for control of nonlinear dynamical systems: structure, stability, and passivity,” Automatica. 33, (4 ), 635 –641 (1997).CrossRef
Hwang  K. S., and Lin  C. S., “Smooth trajectory tracking of three-link robot: a self-organizing CMAC approach,” IEEE Trans. Syst., Man, Cybern. B. 28, (5 ), 680 –692 (1998).CrossRef
Lee  H. M., , Chen  C. M., and Lu  Y. F., “A self-organizing HCMAC neural-network classifier,” IEEE Trans. Neural Networks. 14, (1 ), 15 –27 (2003). 1045-9227 CrossRef
Jou  C. C., “A fuzzy cerebellar model articulation controller,” in  Proc. IEEE Int. Conf. Fuzzy System , pp. 1171 –1178 (1992).CrossRef
Lane  S. H., and Militzer  J., “A comparison of five algorithms for the training of CMAC memories for learning control systems,” Automatica. 28, (5 ), 1027 –1035 (1992). 0005-1098 CrossRef
Lin  C. S., and Li  C. K., “A new neural network structure composed of small CMACs,” in  Proc. IEEE Conf. Neural Systems , pp. 1777 –1783 (1996).CrossRef
Reay  D. S., “CMAC and B-spline neural networks applied to switched reluctance motor torque estimation and control,” in  The 29th Annual Conf. of the IEEE Industrial Electronics Society , Vol. 1, , pp. 323 –328 (2003).CrossRef
Chen  S., and Zhang  D., “Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure,” IEEE Trans. Syst., Man, Cybern. B. 34, (4 ), 1907 –1916 (2004).CrossRef
Su  S. F., , Lee  Z. J., and Wang  Y. P., “Robust and fast learning for fuzzy cerebellar model articulation controllers,” IEEE Trans. Syst., Man, Cybern. B. 36, (1 ), 203 –208 (2006).CrossRef
Wu  T. F., , Tsai  P. S., and Wang  L. S., “Adaptive fuzzy CMAC control for a class of nonlinear systems with smooth compensation,” IEE Proc. Control Theory Appl.. 153, (6 ), 647 –657 (2006).CrossRef
Peng  Y. F., and Lin  C. M., “Intelligent hybrid control for uncertain nonlinear systems using a recurrent cerebellar model articulation controller,” IEE Proc. Control Theory Appl.. 151, (5 ), 589 –600 (2004).CrossRef
Theocharis  J. B., “A high-order recurrent neuro-fuzzy system with internal dynamics: application to the adaptive noise cancellation,” Fuzzy Sets Syst.. 157, (4 ), 471 –500 (2006). 0165-0114 CrossRef
Stavrakoudis  D. G., and Theocharis  J. B., “A recurrent fuzzy neural network for adaptive speech prediction,” in  Proc. IEEE Int. Conf. on Systems, Man and Cybernetics , pp. 2056 –2061 (2007).CrossRef
Koschmieder  H., “Theorie der horizontalen sichtweite,” in Beitrage zur Physik der Freien Atmosphare. ,  Keim & Nemnich ,  Munich, Germany  (1924).
McCartney  E. J., Optics of the Atmosphere: Scattering by Molecules and Particles. ,  Wiley ,  New York, NY  (1976).
Zhang  X., and Wandell  B. A., “Color image fidelity metrics evaluated using image distortion maps,” Signal Process.. 70, (3 ), 201 –214 (1998). 0165-1684 CrossRef
Hautiere  N.  et al., “Blind contrast restoration assessment by gradient ratioing at visible edges,” Image Anal. Stereol.. 27, (2 ), 87 –95 (2008).CrossRef

Jyun-Guo Wang received his MS degree in computer science and information engineering from Chaoyang University of Technology, Taichung, Taiwan, in 2007. He is currently a PhD candidate in the Institute of Computer and Communication Engineering, Department of Electrical Engineering in National Cheng Kung University. His research interests are in the areas of neural networks, fuzzy systems, and image processing.

Shen-Chuan Tai received his BS and MS degrees in electrical engineering from the National Taiwan University, Taipei, Taiwan, in 1982 and 1986, respectively, and his PhD in computer science from the National Tsing Hua University, Hsinchu, Taiwan, in 1989. He is currently a professor in the Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan. His teaching and research interests include data compression, DSP, VLSI array processors, computerized electrocardiogram processing, and multimedia systems.

Cheng-Jian Lin received the PhD in electrical and control engineering from the National Chiao-Tung University, Hsinchu, Taiwan, in 1996. Currently, he is a distinguished professor in the Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung, Taiwan. His current research interests include soft computing, pattern recognition, intelligent control, image processing, bioinformatics, and Android/iPhone program design.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
