An improved algorithm using weighted guided coefficient and union self-adaptive image enhancement for single image haze removal

The visibility of outdoor images is usually significantly degraded by haze. Existing dehazing algorithms, such as dark channel prior (DCP) and colour attenuation prior (CAP), have made great progress and are highly effective. However, they all suffer from the problems of dark distortion and detailed information loss. This paper proposes an improved algorithm for single-image haze removal based on dark channel prior with weighted guided coefficient and union self-adaptive image enhancement. First, a weighted guided coefficient method with sampling based on guided image filtering is proposed to refine the transmission map efficiently. Second, the k-means clustering method is adopted to calibrate the original image into bright and non-bright colour areas and form a transmission constraint matrix. The constraint matrix is then marked by connected-component labelling, and small bright regions are eliminated to form an atmospheric light constraint matrix, which can suppress the halo effect and optimize the atmospheric light. Finally, an adaptive linear contrast enhancement algorithm with a union score is proposed to optimize restored images. Experimental results demonstrate that the proposed algorithm can overcome the problems of image distortion and detailed information loss and is more efficient than conventional dehazing algorithms.


INTRODUCTION
In recent years, computer vision has become popular in many scenarios, such as object detection, remote sensing imaging, and traffic monitoring. To achieve good performance in computer vision tasks, high-quality images are usually a prerequisite. Low-quality images collected in poor conditions, such as haze, mist, or air pollution, may cause computer vision systems to work inefficiently [1]. Therefore, it is highly desirable to enhance the quality of images or videos by reducing or eliminating the impact of haze on computer vision systems.
Early approaches for haze removal can be divided into the following four types, of which the first three are model-based and the last one is learning-based.
The first type is haze removal by image enhancement. For example, Jun et al. [2] proposed a global histogram equalization (GHE) algorithm to enhance the contrast by expanding the pixel dynamic distribution range. Because the global transformation could not guarantee the desired local enhancement, it could not handle the contrast in local regions well. Kim et al. [3] proposed local histogram equalization (LHE) to overcome the shortcomings of the global transformation used by GHE. However, LHE usually suffers from the problems of block effect and high computational complexity. Retinex [4,5] was a popular algorithm because it handles low-brightness blurred images effectively and describes the invariance of colour. However, Retinex is a very complex algorithm [6].
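As a quick illustration of the GHE idea described above, the following is a minimal Python/NumPy sketch (our own illustrative code, not the implementation of [2]): each grey level is remapped through the normalized cumulative histogram, which expands the dynamic distribution range of an 8-bit image. It assumes the image has more than one occupied grey level.

```python
import numpy as np

def global_histogram_equalization(gray):
    """GHE sketch: remap grey levels via the normalized cumulative
    histogram of an 8-bit grayscale image (np.uint8 array)."""
    hist = np.bincount(gray.reshape(-1), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[np.nonzero(cdf)][0]   # first occupied grey level
    n = gray.size
    # Build a lookup table mapping each grey level to its equalized value
    lut = np.clip(np.round((cdf - cdf_min) / (n - cdf_min) * 255), 0, 255).astype(np.uint8)
    return lut[gray]
```

Because the mapping is a single global lookup table, local contrast is not guaranteed, which is exactly the limitation that motivated LHE.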
The second type is haze removal based on prior knowledge [7][8][9][10][11]. For example, Tan et al. [12] proposed a method for maximizing local contrast, based on the Markov random field, under the assumption that the local contrast of haze-free images is much higher than that of hazy images. Although the approach achieved impressive results, the restored images tended to be oversaturated. He et al. [13] presented a dehazing algorithm named dark channel prior (DCP), which is based on the physical model. They found that, for outdoor images, at least one colour channel had some pixels whose intensity was very low and close to 0. Therefore, DCP reduced the complexity of haze removal by using linear transformation filtering. However, DCP cannot correctly handle images containing large white regions, and it suffers from a colour distortion problem. Meng et al. [14] improved the DCP algorithm by exploring the inherent boundary constraint. The results showed that the edge-preserving ability of this improved algorithm is significantly better than that of DCP. However, it suffers from colour distortion problems. Berman et al. [15] proposed a dehazing algorithm named nonlocal image dehazing. Although this method achieves superior performance when compared with other typical methods, it fails to estimate the transmission in heavily hazy conditions. Zhu et al. [16] developed a colour attenuation prior (CAP) and created a linear model of scene depth for hazy images, and then learned the model parameters with numerous image patches. A disadvantage of the CAP-based method is that it is prone to underestimating the transmission of distant regions.
The third type of approach is based on image fusion technology [17]. Zhao et al. [18] proposed an algorithm called multi-scale optimal fusion (MOF) that uses transmission with a strategy for distinguishing and eliminating misestimated edges. It achieves a better performance than most state-of-the-art algorithms. However, it suffers from oversaturation and unrealistic colour because of its exposure-enhancing stage.
The fourth type is based on deep learning. These methods use nonlinear neural network models to learn feature parameters from a large number of sets of images with and without haze, and then apply the neural network to restore other hazy images. For example, Cai et al. [19] proposed the first model based on convolutional neural networks (CNNs). This two-step hybrid model, named DehazeNet, estimates the transmission map from hazy inputs, with the atmospheric light obtained using empirical rules. Although DehazeNet achieves good results, it requires an accurate transmission map and limits the capability of CNNs. To reduce the error caused by the processing of the intermediate hybrid model, several end-to-end neural networks for haze removal have been proposed [20,21]. One representative network is AOD-Net [22], which integrates the intermediate processing into one pipeline to generate a much clearer image. However, because end-to-end architectures treat all spatial features and channels equally, the original data distribution is disrupted and the network has a weak expressive ability.
To address the shortcomings of recent algorithms for haze removal, namely dark distortion and detail information loss, this paper presents an improved algorithm based on DCP [13] with weighted guided coefficient and union self-adaptive image enhancement. Our algorithm consists of the following three steps. First, the weighted guided coefficient and bilinear interpolation [23] are introduced into the guided image filtering algorithm to refine the transmission map more efficiently. Second, the k-means clustering method and connected-component labelling [24][25][26] are adopted; these calibrate bright and non-bright areas in images to constrain the transmission and perform atmospheric light optimization, respectively. Finally, the restored images are optimized by the union self-adaptive image enhancement algorithm.
The three main contributions of the paper are as follows:
1. The weighted guided coefficient is an enhanced version of guided image filtering [27], which can significantly improve the details of dehazed images.
2. A constraint on the transmission map is adopted to ensure that the transmission lies within a valid range. To obtain a global optimum of the atmospheric light, connected-component labelling is adopted to select the largest region as the candidate point.
3. To improve the visual effect, an image-enhancement module with a union self-adaptive enhancement mechanism is proposed.
The rest of this paper is organized as follows. We introduce related work in the next section and explain the details of our proposed algorithm in Section 3. We present experimental results in Section 4. Finally, we provide our concluding remarks in Section 5.

RELATED WORK
In this section, we briefly summarize the previous work on single-image dehazing algorithms based on DCP [13].

Dark channel
In colour images, He et al. [13] found that most local patches in a haze-free outdoor image J(x) contain some pixels that have very low intensity in at least one colour channel. The dark channel of J(x), denoted by J^dark(x), is obtained by computing the local minimum intensity of a hazy image over all RGB channels:

J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y)    (1)

where Ω(x) is a local patch centred at x, and J^c(y) is the intensity of colour channel c.
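The dark channel defined above can be computed directly: take the per-pixel minimum over the colour channels, then a local minimum filter over a square patch. The following is a hedged Python/NumPy sketch (a naive sliding window for clarity, not the paper's MATLAB implementation; function name and float-in-[0, 1] assumption are ours):

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Per-pixel minimum over RGB, then a local minimum filter over a
    patch_size x patch_size window."""
    min_rgb = image.min(axis=2)                 # min over colour channels
    r = patch_size // 2
    padded = np.pad(min_rgb, r, mode="edge")    # edge-pad so borders keep full patches
    h, w = min_rgb.shape
    dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark
```

A production version would replace the inner loops with an O(1) sliding minimum (e.g. an erosion filter), but the result is identical.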

Atmospheric scattering model
In a homogeneous medium, the atmospheric scattering model, which has been widely used in previous studies on haze removal, is defined as follows:

I(x) = J(x)t(x) + A(1 − t(x))    (2)

where x denotes the pixel coordinates in a hazy image, I(x) and J(x) represent a hazy image and the corresponding haze-free image, respectively, A is the global atmospheric light, and t(x) is the transmission map. The global atmospheric light A is calculated in the following manner [13,16,17]. The values of pixels in the dark channel are sorted in descending order, and then, among the first 0.1% of pixels, those with the highest intensity in the image I(x) are taken as the final atmospheric value. The transmission map t(x) can be expressed as t(x) = e^(−βd(x)), where β is the scattering coefficient of the atmosphere and d(x) is the scene depth. In this paper, we call the above process for solving A the candidate point method.
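The candidate point method can be sketched as follows (a Python/NumPy illustration with our own function name; it assumes float RGB images in [0, 1] and a precomputed dark channel): the brightest 0.1% of dark-channel pixels are taken as candidates, and the candidate with the highest overall intensity in I(x) supplies A.

```python
import numpy as np

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Candidate point method: among the brightest `top_fraction` of
    dark-channel pixels, return the input pixel with highest intensity."""
    h, w, _ = image.shape
    n = max(1, int(h * w * top_fraction))       # at least one candidate
    flat_dark = dark.reshape(-1)
    flat_img = image.reshape(-1, 3)
    candidates = np.argsort(flat_dark)[-n:]     # brightest dark-channel pixels
    brightness = flat_img[candidates].sum(axis=1)
    return flat_img[candidates[np.argmax(brightness)]]
```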

Transmission map estimation
According to the atmospheric scattering model and DCP, the medium transmission map t(x) can be defined as follows:

t(x) = 1 − ω min_{y∈Ω(x)} min_c (I^c(y)/A^c)    (3)

where ω is the retention factor (ω ∈ [0, 1]). To reduce the dark distortion caused by spatial perspective, ω usually takes the value 0.95 [13], and A^c is the atmospheric light value in channel c.
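Equation (3) is the dark channel of the input image normalized by the atmospheric light. A self-contained Python/NumPy sketch (our own naive sliding-window version, assuming float images in [0, 1]):

```python
import numpy as np

def estimate_transmission(image, A, omega=0.95, patch_size=3):
    """t(x) = 1 - omega * local minimum of min_c I^c(y) / A^c."""
    norm_min = (image / A).min(axis=2)          # normalize per channel, min over channels
    r = patch_size // 2
    padded = np.pad(norm_min, r, mode="edge")
    h, w = norm_min.shape
    t = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            t[i, j] = 1.0 - omega * padded[i:i + patch_size, j:j + patch_size].min()
    return t
```

For a haze-filled region whose colour equals A, the normalized minimum is 1 and t drops to 1 − ω = 0.05, which is the intended behaviour: dense haze means low transmission.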

Image restoration
After the above two parameters, t(x) and A, are obtained, a hazy image can be restored by the atmospheric scattering model. The restored model is defined as follows:

J(x) = (I(x) − A)/max(t(x), t₀) + A    (4)

where t₀ represents the threshold of the medium transmission map, used to avoid the negative effect of the noise produced by the dehazing algorithm, and usually takes the value 0.1 [16].
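The restored model in (4) translates directly into code. A hedged Python/NumPy sketch (names and the [0, 1] clipping convention are ours):

```python
import numpy as np

def restore(image, t, A, t0=0.1):
    """J(x) = (I(x) - A) / max(t(x), t0) + A, clipped to [0, 1]."""
    t_clamped = np.maximum(t, t0)[..., None]    # broadcast over colour channels
    J = (image - A) / t_clamped + A
    return np.clip(J, 0.0, 1.0)
```

With t(x) = 1 everywhere the output equals the input, and pixels whose colour equals A are mapped back to A regardless of t, which matches the model.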

Guided image filtering
After obtaining the transmission map by (3), the guided filtering algorithm [27] is used to refine the image. Guided image filtering is, in essence, a local linear model. The output image of the filter is obtained from the input hazy image by a local linear transformation. The filter has an edge-preserving smoothing property and does not suffer from gradient reversal artefacts. Its definition is as follows:

q_i = a_k I_i + b_k, ∀i ∈ ω_k    (5)

where I and q denote the guidance image and filter output image, respectively, and a_k and b_k are the guided coefficients of the linear transformation, which are constant in a square window ω_k centred at pixel k. The difference between the filter output image q and the filter input image p is minimized by a cost function constraint in the window, which is defined as follows:

E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + εa_k²)    (6)

where ε is the regularization parameter, which prevents a_k from being too large. Moreover, the linear transformation coefficients a_k and b_k are defined as follows:

a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k)/(σ_k² + ε)    (7)

b_k = p̄_k − a_k μ_k    (8)

where |ω| represents the number of pixels in ω_k; μ_k and σ_k² represent the mean and variance of I in ω_k, respectively; and p̄_k is the mean value of p in ω_k. Through the above constraints and linear coefficients, the filter output image can be computed by

q_i = ā_i I_i + b̄_i    (9)

where i ∈ ω_k, and ā_i and b̄_i represent the average values of a_k and b_k over all windows containing pixel i.

The DCP dehazing algorithm has achieved satisfactory results in practice. However, there are still two main problems. The first is that, because the difference between pixels around the window is ignored, texture blur appears at the edges of objects in the image, which reduces the detail information in the guided image filtering step. The second problem is that, if the value of the candidate point of A is estimated unreasonably, dark distortion and halo artefacts occur [14,15,19,28]. For example, for the hazy images shown in Figure 1(a), the two images dehazed by DCP, shown in Figure 1(b), exhibit dark distortion, and there are halo artefacts in the sky region in the lower image.

PROPOSED ALGORITHM
To overcome the defects of the DCP dehazing algorithm described above, we propose an improvement to the algorithm. The proposed algorithm improves the original DCP algorithm in three respects. First, the transmission map is refined by the weighted guided coefficient and bilinear interpolation. Second, the k-means clustering method is adopted to calibrate bright and non-bright areas in images, to constrain the transmission map, and connected-component labelling is used for atmospheric light optimization. Finally, the restored images are enhanced by a self-adaptive enhancement algorithm with a union score. The architecture of the proposed algorithm is shown in Figure 2.

Weighted guided coefficient on transmission map estimation
As introduced in [27], the guided coefficients, which strongly affect the degree of retention of sharp edges, play an important role in transmission map estimation. Because the guided image filtering algorithm cannot preserve sharp edges and may easily produce halo artefacts [14,15,19,28], we introduce an edge-aware weighting to weight the guided coefficients. Moreover, the use of bilinear interpolation substantially reduces the time consumed by transmission map estimation. Using the local variances of windows ω_k with a radius of 3, for all pixels, the edge-aware weighting W_I(j) can be defined as follows:

W_I(j) = (1/N) Σ_{k=1}^{N} (σ²_{I,j} + ε′)/(σ²_{I,k} + ε′)    (10)

where σ²_{I,j} and σ²_{I,k} represent the neighbourhood variances centred at pixels j and k, respectively, and N is the total number of pixels in the guidance image. In addition, ε′ is a constant defined as ε′ = (0.001 × (p_max − p_min))², where p_max and p_min represent the maximal and minimal pixel values in the filter input image.
The edge weight reflects the proportion of the local window in the whole transmission map. Equation (10) shows that pixels at edges correspond to larger weights than those in smooth areas. With the edge information weighted, the cost function defined in (6) in the window ω_k can be redefined as follows:

E(a′_k, b′_k) = Σ_{i∈ω_k} ((a′_k I_i + b′_k − p_i)² + (ε/W_I(k)) a′_k²)    (11)

where a′_k and b′_k represent the weighted guided coefficients. Like a_k and b_k in (7) and (8), the solution to (11) can be obtained by linear regression [27]:

a′_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k)/(σ_k² + ε/W_I(k))    (12)

b′_k = p̄_k − a′_k μ_k    (13)

where (1/|ω|) Σ_{i∈ω_k} I_i p_i represents the mean value of the product of the filter input image p and the guidance image I, and the term ε/W_I(k) maintains the linear relationship during the filtering process. The entropy value and the average gradient are similar whether the guidance image is grayscale or colour, and grayscale images have fewer channels than colour images. Therefore, using grayscale images is faster than using the corresponding colour images during the transmission optimization stage. For the two grayscale hazy images shown in Figure 3(a), the transmission maps produced by guided image filtering and by our proposed weighted guided coefficient are shown in Figures 3(b) and 3(c), respectively. Comparing Figure 3(c) with Figure 3(b), it is apparent that the detail information has been increased significantly by our method.
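The edge-aware weighting in (10) can be sketched as follows (our illustrative Python/NumPy reading: local variances of the guidance image, with ε′ from the input image's dynamic range; function name and radius default are ours). Edge pixels, where the local variance is large, receive weights well above those of smooth regions, so the regularization ε/W_I(k) shrinks there and sharp edges are better preserved.

```python
import numpy as np

def edge_aware_weight(I, p, r=3):
    """Equation (10) sketch: each pixel's weight is the ratio of its
    local variance (plus eps') to the average over all pixels."""
    size = 2 * r + 1
    padded = np.pad(I, r, mode="edge")
    h, w = I.shape
    var = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            var[i, j] = padded[i:i + size, j:j + size].var()
    eps = (0.001 * (p.max() - p.min())) ** 2    # eps' from the input image's range
    # (var + eps) * mean(1 / (var + eps)) equals (1/N) * sum_k (var_j+eps)/(var_k+eps)
    return (var + eps) * np.mean(1.0 / (var + eps))
```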
To verify the time consumption of our proposed algorithm, we chose two images with resolutions of 600 × 450 pixels and 1024 × 763 pixels. Table 1 shows the time consumption of the two algorithms. It shows that the proposed algorithm can still obtain the transmission map quickly, even with weighting. The reason is that bilinear interpolation accelerates the whole process at the weighted guided coefficient stage.

Optimization of transmission map and atmospheric light
During the process of image restoration, upper and lower bounds on the transmission are imposed to avoid the influence of noise in the bright and non-bright areas, respectively; this noise is generated by the transmission map estimation. Generally, the restored model defined in (4) is suitable for constraining the transmission map, but it only limits the lower bound of the transmission. Because it does not consider the relation between transmission location and specific regions (i.e. k₀ and k₁), the constraint may not be appropriate, and there may be some unreasonable correction of details in the restored model. To solve this problem, the proposed algorithm limits both the upper and lower bounds of the transmission in image restoration by the use of k-means clustering. The restored model is modified as follows:

J(x) = (I(x) − A)/t̂(x) + A, where t̂(x) = max(t(x), t₀) for x ∈ k₀ and t̂(x) = min(t(x), t₁) for x ∈ k₁    (14)

where the lower bound t₀ and upper bound t₁ are taken as 0.1 and 0.9, respectively, as in [16]. Moreover, in this study, four cluster centres are set to categorize pixels by grey level. The first three categories are then combined to form the dark colour region k₀, and the last category is taken as the bright colour region k₁.
For the hazy images shown in Figure 4(a), the k-means clustering images are shown in Figure 4(b), and the transmission marker matrices generated from the k-means results are shown in Figure 4(c). As noted above, we segment each image into two regions: a dark colour region k₀ and a bright colour region k₁. Because the transmission map is constrained by the k-means marker matrix, the noise impact can be reduced effectively.
As discussed in Section 2.2, the candidate point method is usually used for selecting atmospheric light values. According to the analysis in Section 2, this method may easily generate areas of noise, such as the red frame-blocks shown in Figure 4(c). Compared with the original hazy images, although these areas are bright colour areas, they are obviously not reasonable candidate regions. To solve this problem, the proposed algorithm uses connected-component labelling to label all bright colour areas, as shown in Figure 4(d). It then chooses the largest region as the candidate point, and changes the value of all pixels in the other bright colour areas to the opposite value. Thus, the atmospheric light value constraint marking matrix is constructed, as shown in Figure 4(e). In this manner, unreasonable atmospheric light values can be effectively eliminated.
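The bright-region filtering just described can be sketched with a plain flood-fill connected-component labelling (4-connectivity; our own illustrative Python/NumPy code): all bright components are labelled, and only the largest is kept as the candidate region for the atmospheric light.

```python
import numpy as np

def largest_component(mask):
    """Label 4-connected components of a boolean mask and keep only the
    largest one, suppressing small bright areas (e.g. lamps, white cars)
    that would mislead the atmospheric light estimate."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    sizes = {}
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                labels[i, j] = current
                count = 0
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                sizes[current] = count
    if not sizes:
        return np.zeros_like(mask)
    return labels == max(sizes, key=sizes.get)
```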

Union self-adaptive image enhancement
Extensive experiments showed that images restored by the atmospheric scattering model, under DCP-based methods, suffer from problems of partial darkness and other types of contrast imbalance [11,16,17,18,30]. Most of the existing solutions to this problem use RGB-to-HSI colour space conversion enhancement [31,32]. Although the subjective effect of these methods is quite satisfactory, the original relation between saturation and intensity is lost because of the spatial transformation during the image enhancement stage. To solve this problem, we propose a linear contrast enhancement model with a union score based on the bright channel prior [31]. Generally, image contrast enhancement methods first assess the lightness and darkness of the whole image and then compensate the related light in the target image. Based on experiments and analysis of images in various scenarios, we found that the mean value of the bright channel can reflect the brightness level of the image. This follows from the property that the bright channel map is derived from the maximum value of each colour channel in the same patch. For the three hazy images with different brightness levels shown in Figure 5(a), the restored images produced by our algorithm are shown in Figure 5(b), and the bright channel maps of Figure 5(b) are shown in Figure 5(c). To measure the contrast of an image, we define J̄(x) as the mean value of the bright channel map, and T_i (i ∈ {0, …, n}) represents the threshold of grey level i. Following the image quality assessment standard, the contrast enhancement model can be defined as follows:

J′(x) = a_i J(x) + b_i, for J̄(x) ∈ (T_{i−1}, T_i]    (15)

where J′(x) represents the image optimized by the linear model, and a_i and b_i (i ∈ {1, …, n}) are the coefficients in the threshold range (T_{i−1}, T_i]. Because the pixel value of a grayscale image is always in the range [0, 255], T₀ and T_n usually take the boundary values 0 and 255, respectively.
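The enhancement step can be sketched as follows (a hedged Python/NumPy illustration; function names and the [0, 255] grey-level assumption are ours): the bright channel (per-pixel maximum over colour channels followed by a local maximum filter) is averaged, and that mean selects the linear coefficient pair. The pairs (1.2, 5) and (0.8, 6) and the threshold 128 are the values reported in this section's experiments.

```python
import numpy as np

def bright_channel_mean(J, patch_size=3):
    """Mean of the bright channel: max over colour channels, then a
    local maximum filter, then the global mean."""
    max_rgb = J.max(axis=2)
    r = patch_size // 2
    padded = np.pad(max_rgb, r, mode="edge")
    h, w = max_rgb.shape
    bright = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            bright[i, j] = padded[i:i + patch_size, j:j + patch_size].max()
    return bright.mean()

def enhance(J, T1=128, params=((1.2, 5.0), (0.8, 6.0))):
    """Piecewise linear enhancement J' = a_i * J + b_i; the pair is
    chosen by comparing the bright-channel mean with threshold T1."""
    a, b = params[0] if bright_channel_mean(J) <= T1 else params[1]
    return np.clip(a * J + b, 0.0, 255.0)
```

Dark images (mean at most T1) are brightened with a gain above 1, while bright images are compressed with a gain below 1, which counteracts the partial darkness left by the restoration step.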
Moreover, J̄(x), the mean value of the bright channel map, is defined as follows:

J̄(x) = (1/N) Σ_x J^bright(x)    (16)

where the sum runs over all pixels x, N is the total number of pixels, and J^bright(x) is the bright channel of J(x), which is defined as follows:

J^bright(x) = max_{y∈Ω(x)} max_{c∈{r,g,b}} J^c(y)    (17)

Several metrics can be used to measure the performance of a dehazing algorithm. Among them, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are the two used most frequently. PSNR measures the ability of an algorithm to remove haze from a hazy image: a higher PSNR corresponds to a better performance. SSIM measures the similarity between two images. To estimate the constant coefficients a_i and b_i, we define a metric named the union score, which is a weighted sum of the PSNR and SSIM of J(x):

score = α · PSNR + β · SSIM    (18)

where, according to our experimental results, the PSNR weight α is 0.05 and the SSIM weight β is 1. Here PSNR and SSIM are the average PSNR and average SSIM, respectively, calculated over all J(x) and I(x). Moreover, we use a three-dimensional array [score_i, a_i, b_i] to record the relationship between the score and the parameters, constrained by the specific threshold range (T_{i−1}, T_i]. Finally, we sort the three-dimensional array by the key score_i. Because a large score_i may easily lead to oversaturation of the dehazed images, we compromise and choose the median value to obtain the corresponding a_i and b_i. The pseudocode for calculating the score is presented in Algorithm 1. Here, compare_psnr(I(x), J(x)) and compare_ssim(I(x), J(x)) are the functions that calculate the PSNR and SSIM between I(x) and J(x), respectively, num[n] records the frequency in the range (T_{n−1}, T_n], and score_i is the final relation between the union score and the parameters. In this paper, we set n = 2 and T₁ = 128. Therefore, we obtain two pairs of parameters: a₁ = 1.2 and b₁ = 5 (J̄(x) ∈ (T₀, T₁]), and a₂ = 0.8 and b₂ = 6 (J̄(x) ∈ (T₁, T₂]), as shown in Figures 6(a) and 6(b), respectively.
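The union score of (18) and the median-based parameter selection described above can be sketched in a few lines of plain Python (our own illustrative code; the PSNR/SSIM values themselves would come from standard implementations such as scikit-image's metrics):

```python
def union_score(mean_psnr, mean_ssim, alpha=0.05, beta=1.0):
    """Equation (18): weighted sum of average PSNR and average SSIM,
    with alpha = 0.05 and beta = 1 from the paper's experiments."""
    return alpha * mean_psnr + beta * mean_ssim

def pick_params(records):
    """Sort [score, a, b] records by score and take the median entry,
    avoiding the oversaturation the top-scoring pair can cause."""
    ordered = sorted(records, key=lambda r: r[0])
    return ordered[len(ordered) // 2][1:]
```

Taking the median rather than the maximum is the compromise mentioned above: the highest union score tends to over-enhance, while the median keeps the visual effect natural.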

EXPERIMENTAL RESULTS
To evaluate the proposed algorithm, we conducted two groups of experiments on challenging real-world images and images from the hazy image dataset presented in [34]. All algorithms were implemented in MATLAB 2018a. The computer used in the experiment had an Intel(R) Core (TM) i7-6700 CPU, with a frequency of 3.4 GHz, and 8 GB RAM. We compared the proposed algorithm with the following four popular algorithms for single-image haze removal: DCP [13], CAP [16], MOF [18], and LAP [35].

Qualitative evaluation
All of the dehazing algorithms can obtain quite good results on general outdoor images, so it is difficult to evaluate them visually. Because almost all existing dehazing algorithms handle white regions poorly, we first executed the algorithms on some challenging images with large white or grey regions, which were available on the Internet and published in earlier work. We then adopted the most frequently used hazy image dataset to evaluate the performance of all algorithms.

Evaluation on challenging images
The results of the five algorithms on dehazing challenging real-world images are shown in Figure 7. For the hazy images shown in Figure 7(a), the dehazing results of DCP, CAP, MOF, LAP, and our algorithm are shown in Figures 7(b), 7(c), 7(d), 7(e), and 7(f), respectively. As shown in Figure 7(b), DCP worked well and removed most of the haze. However, DCP tended to darken some regions and produce halo artefacts. For example, the image of the woman under the tree in the first image is much darker than it should be, and the sky region in the third image is bluish. This is mainly because the DCP method uses a local window minimum operation. In contrast, the results of our algorithm are much more visually pleasing. Figure 7(c) shows the results of CAP, which have a very good visual effect. However, because of its inaccurate estimation of depth in the distance, CAP tended to distort the colour, for example, the colour of the tree branches and forest in the second and third images. In contrast, because our algorithm considers the relation between image contrast and the image quality assessment indicator, the visual effect seems better. From Figure 7(d), we can observe that most of the haze was removed by MOF, and the details of the scenes and objects were restored well. However, the results of MOF suffer significantly from over-enhancement. For example, the forest in the third image and the rocks in the bottom right of the fourth image are much brighter than they should be. In contrast, our algorithm performed better in terms of colour fidelity, and the visual effect of the results seems more natural. Figure 7(e) shows the results of LAP: this method could remove haze well, but the visual effect of the results seems poor on the challenging images. The algorithm tended to darken some regions and cause colour distortion.
In contrast, our proposed algorithm could retain colour fidelity, and the resulting visual effect is more pleasing.

Evaluation on hazy image dataset
The five algorithms were also tested on the hazy image dataset, as shown in Figure 8. Figure 8(a) shows the original hazy images. Figures 8(b), (c), (d), (e), and (f) show the dehazed images produced by DCP, CAP, MOF, LAP, and our algorithm, respectively.
As we can observe, DCP performed well on the "wheat field", "Yosemite", and "Logos" images, but there is colour distortion in the sky region in the "Tiananmen" image and the dense building region in the "Manhattan" image. Moreover, many details were lost in the "Manhattan" image. The reason is that the dark channel prior is invalid when the scene brightness is similar to the atmospheric light [16]. Benefitting from the weighted guided coefficient and atmospheric light optimization, the dehazed images produced by our proposed algorithm (as shown in Figure 8(f)) show a clear outline in the overall details.
We can observe from Figure 8(c) that the results of CAP are quite visually pleasing, but there are still local area distortions and loss of detail; for example, the sky region in the "Tiananmen" image, the building region in the "Manhattan" image, and the upper right region and text in the distance in the "Logos" image. This is mainly because the CAP method strongly depends on the linear depth model. Unfortunately, this model is invalid when the scene depth contains dense objects. The proposed algorithm solves this problem by using a union self-adaptive image enhancement model. Figure 8(d) shows that the images restored by MOF yield over-enhanced visual artefacts; for example, the straw piles in the "wheat field" image, the men in red in the "Yosemite" image, and the orange bag at the top right in the "Logos" image. This is because the exposure-enhancing stage in MOF changes the relationship between saturation and intensity. As shown in Figure 8(e), LAP could remove most of the haze, but its performance degraded in distant scenery; for example, the top lines in the "wheat field" image and the light blue band at the top right in the "Logos" image. This is mainly because the LAP method relies on local patches, which may cause this unpleasant visual effect. In contrast, our proposed algorithm achieves much better results by using atmospheric light optimization.

Quantitative evaluation
To quantitatively evaluate the performance of all five algorithms described above, we first calculated the SSIM and PSNR of the dehazed images in Figure 8 for comparison, and then employed the blind contrast enhancement assessment method [36] to compare them further. This assessment method computes three indicators: the percentage of new visible edges e, the contrast restoration quality r̄, and the proportion σ of pixels that are saturated after applying the dehazing method but were not before. Usually, higher e and r̄ indicate better performance. Moreover, to avoid the image contrast being increased too strongly, a smaller σ also indicates better performance. Figures 9 and 10 show histograms of the SSIM and PSNR, respectively, for the different images, and comparisons of the indicators of the blind contrast enhancement assessment method are listed in Tables 2-4.

SSIM and PSNR
As noted in Section 3.3, a higher SSIM for an algorithm indicates a stronger ability to preserve edges and other details. Figure 9 shows that our algorithm achieved the highest SSIM on four of the five images. The reason that we obtained a lower SSIM value on the "wheat field" image may be that we were searching for a globally optimal result between the visual effect and the quantitative result at the union self-adaptive image enhancement stage. A higher PSNR for an algorithm indicates a stronger ability to remove haze. As shown in Figure 10, our algorithm produced the highest PSNR on the "Tiananmen", "wheat field", and "Manhattan" images. Although MOF outperformed our algorithm on the "Yosemite" and "Logos" images, the two images restored by MOF are over-enhanced, as can be seen in Figure 8(d); for example, the man in red in the "Yosemite" image, and the orange bag at the top right in the "Logos" image. Moreover, the reason why the SSIM of our algorithm on the "Yosemite" and "Logos" images is relatively low may be the trade-off between PSNR and SSIM at the stage of union self-adaptive image enhancement. From Tables 2 and 3, we can see that the values of e and r̄ in the results of our algorithm are more stable than those of the other four algorithms.
Although the other four algorithms achieved higher values for some images, the existence of more visible edges may indicate that the contrast was increased too strongly, or that there were spurious boundaries affecting the results of MOF and LAP. For example, the images in Figure 8(d) are obviously over-enhanced. As can be seen in Figure 8(e), there are evident boundaries in the sky region and top lines of the first and second images, respectively, which are actually spurious boundaries. The DCP and CAP algorithms removed haze well; however, they produced much more colour distortion. Table 4 shows that our proposed algorithm achieved the best results in all tests, which may indicate that it can achieve better results than the other four algorithms.

Computation time
The computation times of the five methods on the five images are shown in Table 5. For all five images, our algorithm took much less computation time than the LAP algorithm, but a little more than the other three algorithms. This is caused by the computation related to the k-means clustering at the stage of the weighted guided coefficient, and the computation of the atmospheric light optimum constraint.

CONCLUSION
To reduce the problems of dark distortion and detail information loss produced by recent methods of haze removal, we proposed an improvement to DCP for single-image haze removal. In the proposed algorithm, the weighted guided coefficient is applied to refine the transmission map for preserving sharper edges and eliminating the halo artefacts that frequently occur in images restored by DCP. In addition, k-means clustering and connected-component labelling were adopted, to constrain the transmission map and for atmospheric light optimization, respectively. Moreover, to ensure structural similarity and reduce colour distortion of the restored image, a union self-adaptive image enhancement algorithm was proposed. Various experiments demonstrated that our proposed algorithm is more effective than conventional algorithms for single-image haze removal. Different patch size scales reflect different receptive fields and influence the amount of detailed information to be obtained [37]. The experiments demonstrated that a larger patch size would decrease the edge-preserving performance.
There are two problems with our proposed method. The first is the time complexity of the computation related to the k-means clustering at the stage of the weighted guided coefficient and the computation of the atmospheric light optimum constraint; these need to be reduced for real-time applications. The second problem is that the self-adaptive weight factors α and β, discussed in Section 3.3, which were decided by our experimental results, cannot accurately weight the relationship between SSIM and PSNR. An automated method for computing the self-adaptive weight factors should be developed in future work.