Weight-based colour constancy using contrast stretching

One of the main issues in colour image processing is that object colours change with the colour of the illumination source. Colour constancy methods modify the overall image colour as if the image had been captured under a canonical (natural) light. Without colour constancy, colour would be an unreliable cue to object identity. Many colour constancy methods have been proposed to date, falling into two categories: statistical methods and learning-based methods. This paper presents a new statistical, weight-based algorithm for illuminant estimation. The weights are adjusted to highlight two key factors for illuminant estimation: contrast and brightness. The weights are derived from the convex part of a contrast stretching function. Moreover, a novel partitioning mechanism in the colour domain is proposed that improves efficiency. The proposed algorithm is evaluated on two benchmark linear image databases according to two evaluation metrics. The experimental results show that it is competitive with state-of-the-art statistical methods. In addition to its low computational cost, it improves the performance of statistics-based algorithms on dark images and images with low brightness contrast, and it is robust to changes of camera type.


INTRODUCTION
The colour of objects plays an important role in machine vision applications. One of the difficulties in processing colour images, however, is that object colour is not constant and changes with variations of the ambient light. In other words, if objects are pictured under different lights, the same objects appear with different colours in the recorded pictures. This phenomenon shows the dependency of object colour on the colour of the illuminating light. The ability to recognize the actual colour of objects under different illuminations is called colour constancy. At first it seems trivial, because the human visual system (HVS) naturally performs colour constancy to some extent, but it is not known how this is done [1]. The most important applications of colour constancy in machine vision are colour object recognition, colour object tracking, and image classification [2]. Colour constancy is also used in digital photography. Many methods in the colour constancy domain have been presented to date; good reviews are available in [3,4]. Colour constancy methods are usually categorized into two main categories: statistics-based methods and learning-based methods. This paper focuses on statistics-based methods. These methods use mathematical operations and statistical information to estimate the illuminant, and for this reason they have low computational complexity. They are learning-free and have few parameters. One of the main challenges in this category is that statistics-based methods operate on the basis of a certain assumption. Dependence on the assumption reduces the range over which these methods are effective; in other words, violation of the assumption leads to a bad result. This makes statistics-based methods limited and specific in performance. This paper divides the methods of this group into three subgroups: methods that use all pixels in the image equally, methods that select some pixels, and weight-based methods.
Although all the pixels in an image contain information about the colour of the light source, they are not equally important: some pixels are more informative for estimating the colour of the light source. Considering all pixels to have the same importance in estimating the illumination, or neglecting some pixels in this process, is therefore not a good idea. Weight-based methods, on the other hand, take the importance of each pixel into account when estimating the colour of the light source.
Most of the methods proposed in the colour constancy domain are learning-based. These methods estimate the colour of a light source using machine learning mechanisms and one or more statistics-based methods. In recent years, most learning-based colour constancy methods have used deep learning mechanisms for illuminant estimation. Learning-based methods generalize the colour constancy problem to some extent and increase the effective range of the algorithm. They usually divide the problem into several different modes and use the best statistics-based method for each mode based on its characteristics; this is why learning-based methods are more accurate than statistics-based methods. Even though they are more general than statistics-based methods, they cannot take all possible cases into account: for example, they cannot be trained on all possible lights for a scene during the learning process. Although they are usually more accurate than statistics-based methods, they have a more complicated implementation because they need to be trained. Another challenge is that learning-based methods, especially deep learning methods, have a very large number of parameters and therefore need a very large number of images in the training phase, whereas the number of images in colour constancy databases is very low. These methods therefore need data augmentation and a pre-training phase. They are also strongly biased toward their training data, which leads to high dependency on camera type and camera sensors and high sensitivity to noise. Experimental results have shown that when learning-based methods are evaluated on one database with training images selected from another database, their accuracy is significantly reduced [5]. Moreover, these methods have a much higher computational complexity than statistics-based methods.
In addition, since learning-based methods build on statistics-based methods, learning-based methods will also improve if better statistics-based methods are provided. For these reasons, it is important to propose an efficient statistics-based method. To this end, we look for a weight-based approach in which all pixels participate in estimating the colour of the light source, but their participation is determined by their importance for this estimation. Because brighter pixels carry more information about the colour of the light source, and the distance between pixel intensities plays an important role in estimating it, the importance of pixels is determined by two factors: brightness and contrast. As a result, a convex function is used for weighting: since the slope of a convex function increases non-linearly with increasing colour intensity, such functions act more strongly on pixels with higher colour intensities. Convex functions also increase the contrast. Thus, by increasing the role of the more effective pixels in the estimation of the light source colour, the efficiency of the proposed method is increased.
This paper analyzes the proportion of image pixels that are involved in the calculations for estimating the colour of the light source. We discuss methods that radically select some pixels for illuminant estimation and ignore the others, methods that use all pixels equally, and methods that increase their efficiency with implicit or explicit weights. We also investigate the basic colour constancy algorithms from the convexity viewpoint and analyze why convex functions perform better in colour constancy. Building on this analysis, we propose a novel weight-based algorithm for illuminant estimation.
The proposed weight function is a contrast stretching transformation function that assigns weights to image pixels based on their brightness and contrast; applying these weights to the image pixels yields the illuminant estimate. It is worth mentioning that, according to Cheng's classification [6], our proposed method operates in the colour domain rather than the spatial domain. We therefore have no spatial operation such as filtering, but noise reduction is performed by a novel partitioning mechanism in the colour domain. Our proposed algorithm is robust, efficient, and easy to implement with a low computational cost. Moreover, it produces competitive results compared to state-of-the-art statistical methods. Finally, we compare our results with related algorithms reported on the colour constancy website [7], the NUS website [8], and new state-of-the-art methods. The proposed method is evaluated on two benchmark databases according to the recovery angular error metric, and its experimental results are compared to the state-of-the-art methods. The major contributions of this paper include:
• A learning-free statistics-based method with simple implementation, low computational cost, and low execution time.
• A weight-based method that considers the importance of pixels in estimating the colour of a light source.
• Weights set to highlight two key factors: the contrast and brightness of the image.
• A unique weight function for each image, with weights based on the image features, giving more generality and a larger range of efficiency than other statistics-based methods.
• Efficiency independent of the type and sensors of the camera.
• A novel mechanism for noise reduction in the colour domain.
The remainder of this paper is organized as follows: In Section 2, colour constancy is explained. The statistics-based and learning-based algorithms are explained in Section 3. In Section 4, the proposed algorithm is explained. In Sections 5 and 6 the experimental results are presented and discussed. Finally, Section 7 concludes this paper.

PROBLEM FORMULATION
From the computational science perspective, colour constancy is the process of modifying an image taken under an unknown illuminant such that the modified image seems to have been taken under a known canonical light source, which is usually white light. In this process, the colour of the light source is first estimated; this step is called illuminant estimation. Then the estimated colour of the light source is replaced with the canonical light source; this step is called image correction. The result is an image that appears to have been taken under the canonical illumination. This paper analyzes colour constancy under the uniform light source assumption. To model this process, Section 2.1 explains image formation and Section 2.2 explains image correction.

Image formation
Under the Lambertian assumption, the RGB values of an image f = (f_R, f_G, f_B)^T at pixel coordinate x, captured by the sensors of a digital camera, are formed as in Equation 1 [9,10]. Bold fonts in the following formulations denote vectors.

f_i(x) = ∫_ω I(λ) R(λ, x) ρ_i(λ) dλ (1)

where the parameters are defined as follows: i ∈ {R, G, B} is a colour channel; x is the spatial coordinate; ω is the visible spectrum; λ is the wavelength of the light; I(λ) is the spectral distribution of the light source; R(λ, x) is the surface reflectance; and ρ_i(λ) is the camera sensitivity function of the i-th channel. Equation 2 [9,10] shows the calculation of the observed colour of the light source e:

e = ∫_ω I(λ) ρ(λ) dλ (2)
As equation 2 shows, both the camera sensitivity functions (ρ(λ) = (ρ_R(λ), ρ_G(λ), ρ_B(λ))^T) and the light source spectrum I(λ) create the observed colour of the light source e. The aim of most colour constancy methods is to estimate e, but the main problem is that the values I(λ) and ρ(λ) required to calculate e are unknown. Because of this lack of information, existing approaches have to make assumptions in order to determine the observed colour of the light source.

Image correction
In this stage, the image is corrected using the illumination vector e = (e_1, e_2, e_3)^T derived from the illuminant estimation step. In other words, all the colours of the original image under the unknown illuminant are converted to colours under the canonical light source. This conversion is called chromatic adaptation and can be modelled by the Von Kries model, or diagonal mapping [11], as illustrated in equations 3 and 4:

f_c = D_{u,c} f_u (3)

D_{u,c} = diag(e_1^c ∕ e_1, e_2^c ∕ e_2, e_3^c ∕ e_3) (4)

where f_u = (R_u, G_u, B_u)^T represents the image under the unknown light source (u), f_c = (R_c, G_c, B_c)^T is the same image as it would appear under the canonical illuminant (c) with colour e^c = (e_1^c, e_2^c, e_3^c)^T, and D_{u,c} is a diagonal matrix that maps colours taken under the unknown illumination to the corresponding colours under the canonical illuminant. In other words, the gray levels of the image pixels are first divided by the estimated illuminant and then multiplied by the canonical light source. Note that white light, (1∕√3, 1∕√3, 1∕√3)^T, is used as the canonical light.
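The diagonal correction described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the function name and the default canonical light (1∕√3, 1∕√3, 1∕√3)^T are assumptions consistent with the text.

```python
import numpy as np

def correct_image(img, e, canonical=None):
    """Diagonal (Von Kries) correction: divide each channel by the
    estimated illuminant, then scale by the canonical light.

    img: H x W x 3 float array (linear RGB); e: estimated illuminant;
    canonical: defaults to white light (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)).
    """
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)                 # normalise the estimate
    if canonical is None:
        canonical = np.full(3, 1.0 / np.sqrt(3.0))
    d = canonical / e                         # diagonal entries of D_{u,c}
    return img * d                            # broadcast over all pixels
```

Applied to a uniform surface rendered under a coloured illuminant, this makes the three channels equal, i.e. the surface appears achromatic under the canonical white light.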

RELATED WORKS
In general, colour constancy methods can be divided into two groups: statistics-based and learning-based. In the following, we provide a detailed description of the methods in both groups.

Statistics-based algorithms
These methods use statistical information, mathematical calculations, and low-level and high-level features to estimate the illumination. Based on the percentage of pixels involved in illuminant estimation, statistics-based methods can be categorized into three groups. In the methods of the first group, all pixels participate equally in illuminant estimation. One of the most popular and widely applied methods in this group is Gray-World [12]. It is based on the Gray-World assumption, which states that under a canonical light source the average reflectance of the objects in the scene is achromatic. Many methods have been built on this assumption. The second group comprises methods that radically select some pixels for processing and ignore the others. One of the oldest and simplest such methods is Max-RGB [13], in which the pixels with maximum intensity in each RGB channel are taken as the colour of the light source in that channel. The Reconsidered Max-RGB [14] first divides the input image into smaller sub-images, then runs the Max-RGB algorithm independently on each of the randomly sampled sub-images, and finally combines the resulting estimates by simple averaging after identifying and eliminating the outliers.
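The two classic estimators just mentioned can be sketched compactly; this is an illustrative numpy sketch under the usual conventions (linear RGB, estimate normalised to unit length), not code from the paper.

```python
import numpy as np

def gray_world(img):
    """Gray-World [12]: the per-channel mean estimates the illuminant."""
    e = img.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)              # normalised light colour

def max_rgb(img):
    """Max-RGB [13]: the per-channel maximum estimates the illuminant."""
    e = img.reshape(-1, 3).max(axis=0)
    return e / np.linalg.norm(e)
```

On a uniform scene both estimators agree; they differ on real images, where Max-RGB is driven entirely by the brightest values in each channel.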
Other examples in this group that select a percentage of pixels for illuminant estimation are the Bright Pixels (BP) method [15], which uses a small percentage of the brightest pixels in the image; the Cheng method [6], which sorts the pixels of the colour distribution by their distance from the mean and then estimates the illuminant by applying PCA to a percentage of the brightest and darkest pixels; and Gray Pixels (GP) [16], which developed a simple photometric measure of the grayness of a pixel and used it to select a percentage of pixels for illuminant estimation. Gray Index (GI) [5] is an extension of the Gray Pixels method: it computes a grayness index for each pixel using the Dichromatic Reflection Model (DRM), then selects a percentage of the most gray pixels for illuminant estimation.
Double-Opponency (DO) [17] is a biological method that estimates illumination by simulating the retina mechanism.
The third group consists of weight-based methods, which use all pixels in their algorithms but do not give them the same role, so that pixels that are more informative about the colour of the light source have a higher impact. Therefore, the methods in the third group generally perform better than those in the first and second groups. Note that in weight-based algorithms, the weights can be implicit or explicit.
Implicit weight-based methods do not use weights explicitly, but their structure forces the important pixels to play a much stronger role in illuminant estimation. In the following, several methods with implicit weights are explained. The Shades of Gray method [18] is based on the assumption that the p-th Minkowski norm of the object reflectances in a scene is achromatic. By using a Gaussian filter for local smoothing of the image in the Shades of Gray algorithms, [19] presented a more general method called General Gray-World. By adding a derivative operation to the General Gray-World algorithms, an even more general framework was presented [19], based on the assumption that the p-th Minkowski norm of the derivative of the object reflectances in a scene is achromatic [19]. Equation 5 summarizes these methods:

( ∫ | ∂^n f_i^σ(x) ∕ ∂x^n |^p dx )^{1∕p} = k e_i (5)

where p is the Minkowski norm, n is the order of the derivative, and k is a constant factor eliminated by normalization. The normalized colour of the light source e is obtained as ê = ke∕|ke|. The derivative of image f in channel i is the result of convolving the image with Gaussian derivative filters of standard deviation σ [20]:

f_{i, x^s y^t}^σ = f_i ∗ ( ∂^{s+t} G^σ ∕ ∂x^s ∂y^t ) (6)

where s + t = n, G^σ is a Gaussian with standard deviation σ, and ∗ indicates convolution. Another method in this group is a normalization-based colour constancy method named Colour Constancy with Local Surface Reflectance [21], which uses the White Patch to weight the Gray-World so that both methods (White Patch and Gray-World) are unified into one framework.
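Equation 5 can be illustrated with a small sketch. For brevity this sketch approximates the Gaussian derivative filters of equation 6 by finite differences (`np.gradient`) and supports only n ∈ {0, 1}; the function name and these simplifications are assumptions, not the framework of [19] verbatim.

```python
import numpy as np

def minkowski_framework(img, n=0, p=1):
    """Sketch of the unified framework of equation 5: the p-th Minkowski
    norm of the n-th image derivative per channel gives the illuminant.
    n=0, p=1 reduces to Gray-World; n=0 with large p approaches Max-RGB;
    n=1 corresponds to Gray-Edge (with np.gradient standing in for the
    Gaussian derivative filters of equation 6).
    """
    if n == 0:
        data = np.abs(img)
    elif n == 1:
        gy, gx = np.gradient(img, axis=(0, 1))
        data = np.sqrt(gx ** 2 + gy ** 2)     # gradient magnitude per channel
    else:
        raise ValueError("this sketch only supports n in {0, 1}")
    e = (data.reshape(-1, 3) ** p).mean(axis=0) ** (1.0 / p)
    return e / np.linalg.norm(e)              # normalisation removes k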
In the following, we explain several important explicit weight-based methods. These methods assign weights to the image pixels so that the role of each pixel in illuminant estimation matches its usefulness, and the more informative pixels play a much stronger role. Explicit weights are continuous quantities in (0,1) specifying the usefulness of the corresponding pixels. These methods use a special function for producing the weights, together with a tunable parameter that adjusts the strength of the weights according to the environmental conditions of the image [22,23]. For example, the photometric weighted Gray-Edge [22] uses equations 7 and 8 to estimate the light source. The specular edge weighting function is shown in equation 7 [22]:

w( f_x ) = |o_x| ∕ ‖f_x‖ (7)

where f_x is the derivative of image f, and o_x is the specular variant, i.e. the projection of f_x on the specular direction. |.| and ‖.‖ indicate the absolute value and the Euclidean norm, respectively. The photometric edge weighting algorithms use equation 8 to estimate the light source [22]:

( ∫ w( f_x )^K | ∂^n f_i^σ(x) ∕ ∂x^n |^p dx )^{1∕p} = k e_i (8)

where w( f_x ) is the specular edge weighting function of equation 7, which assigns a weight to the pixel at coordinate x, and K is a tunable parameter for strengthening the weights. As in equation 5, p is the Minkowski norm, n is the order of the derivative, and k is a constant factor eliminated by normalization; the normalized colour of the light source e is obtained as ê = ke∕|ke|. Moreover, as explained for equation 6, the derivative of image f in channel i is the result of convolving the image with Gaussian derivative filters of standard deviation σ [20]. In addition, [22] introduced an iterative weighting scheme that increases the precision of the light source colour estimate by a sequential iteration mechanism.
Although this method estimates the light source colour more accurately than the usual Gray-Edge method, it has a higher computational complexity. The saturation weighting algorithm [23] uses the saturation value in HSI colour space to produce weights. The saturation weighting function of this method is shown in equation 9 [23]:

w( f(x) ) = S( f(x) ) (9)

where S( f(x) ) is the saturation value of the pixel at coordinate x. By combining the saturation weights with the General Gray-World algorithm, the saturation weighting General Gray-World method is obtained. This method uses equation 10 to estimate the light source:

( ∫ w( f(x) )^s f_i(x)^p dx )^{1∕p} = k e_i (10)

where w( f(x) ), derived from equation 9, is the saturation weighting function that assigns a weight to the pixel at coordinate x, and s is the saturation tunable parameter for strengthening the weights. As in equation 5, p is the Minkowski norm and k is a constant factor eliminated by normalization; the normalized colour of the light source e is obtained as ê = ke∕|ke|.
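The idea of saturation weighting can be illustrated as follows. This is a hypothetical sketch, not the exact algorithm of [23]: it uses the standard HSI saturation S = 1 − 3·min(R,G,B)∕(R+G+B) and, for simplicity, a saturation-weighted mean with p = 1; the weight form and parameter defaults are assumptions.

```python
import numpy as np

def saturation_weighted_gray_world(img, s=1.0, eps=1e-8):
    """Illustrative sketch: weight each pixel's contribution to a
    Gray-World-style average by its HSI saturation raised to a tunable
    power s. (The exact weight function of [23] may differ; this only
    shows the mechanism of explicit weighting.)
    """
    px = img.reshape(-1, 3).astype(float)
    sat = 1.0 - 3.0 * px.min(axis=1) / (px.sum(axis=1) + eps)  # HSI saturation
    w = sat ** s                                # strengthen weights with s
    e = (w[:, None] * px).sum(axis=0) / (w.sum() + eps)
    return e / np.linalg.norm(e)
```

Gray pixels get saturation near zero and thus contribute little, so the estimate is dominated by the chromatic content of the scene.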

Learning-based algorithms
Learning-based methods estimate the illumination using information obtained from a training phase. One subset of this group are the gamut-based methods [24][25][26][27], which utilize information about the colours that can be observed under a given light source in their training phase. In pixel-based gamut mapping [24], gamut mapping operates on pixel values; edge-based gamut mapping [27] operates on image derivative values; and in intersection-based gamut mapping [27], the intersection of multiple gamuts from different gamut-based methods is considered. Another subset are the Bayesian methods [28][29][30][31], which use Bayes' theorem to estimate the colour of the light source; here the illumination and the surface reflectance are treated as random variables. In the colour by correlation method [32], all possible colours of the canonical illumination are encoded in a correlation matrix; for each image, the illuminant is then estimated by computing the correlation between the histogram of chromaticities in the image and the correlation matrix. Methods such as spatial correlation [33,34], the Spatio-spectral Statistics method [35], and the Edge-based Spatio-spectral method [36] are based on spatial relationships between pixels. Moreover, methods such as [37] are a subset of learning-based methods that use neural networks in their training phase. Another group of learning-based methods estimates the illuminant by selecting the best colour constancy method for each image based on the image characteristics. For example, in the Natural Image Statistics method (NIS) [38], the selection is based on the Weibull parameters, that is, contrast and texture. In the High-Level Visual Information method [39], the selection is based on high-level information or the semantic content of the image. In selection and combination (CAS) [40] and indoor-outdoor image classification [41], the selection is based on whether the image is indoor or outdoor.
Other methods estimate the illumination using a combination of several illuminant estimation algorithms. For example, the corrected-moment method [42] calculates different moments (p-norm moments, geometric-mean moments) of the image and the image gradient; the weighted average of these moments is then used for illuminant estimation, with the weights adjusted in the training phase. Regression Tree [43] is another combinational method: for each image, four features are extracted; each feature has K regression trees that estimate the illuminant; and the colour of the light source is obtained by combining the outputs of the regression trees. In [44], a luminance-to-chromaticity classifier is presented for estimating the illuminant, and the estimation error is minimized using stochastic gradient descent.

Algorithms based on deep learning
In recent years, most researchers in the colour constancy field have focused on deep learning methods for illuminant estimation. In the following, we review these methods. Colour Constancy Using CNNs [45] is one of the first colour constancy methods to use deep learning for illuminant estimation. It first randomly samples non-overlapping patches from the input image, then estimates the illumination for each patch with a five-layer CNN, and finally pools the patch outputs to estimate the colour of the light source for the input image. [46] extends this method.
Its network can detect whether the input image has one or multiple illuminations, and it uses an RBF kernel to combine the patch estimates. [47] is another deep learning method, in which an eight-layer convolutional neural network for illuminant estimation is created through three sequential training phases. Convolutional Colour Constancy (CCC) [48] transforms the image from RGB space to log-chrominance space; training and regression between the input image and the corrected image are then done in this 2D space. In other words, finding the colour of the light source in 3D RGB space is reduced to a spatial localization task in the 2D UV log-chrominance space. Fast Fourier Colour Constancy (FFCC) [49] is an extension of the CCC method: whereas CCC searches for the UV vector over the entire log-chroma space, FFCC performs the localization task over a much more limited space. Colour Constancy with Confidence-weighted Pooling (FC4) [50] proposed a fully convolutional network in which the colour of the light source is estimated by weighted pooling of the last three feature maps. The DS-Net method [51] proposed a network consisting of two sub-networks: the first estimates two illuminations based on two hypotheses, and the second selects the better of the two estimates. Colour constancy with GAN [52] uses a GAN to learn an image-to-image translation between the input image taken under an unknown light source and the corrected image as taken under the canonical light. [53] proposed a cascade of convolutional neural networks for illuminant estimation; the final illumination is obtained by combining the illuminations from the cascaded parts with a new weighted multiply-accumulate loss function.
In [54], a Convolutional AutoEncoder (CAE) is used for unsupervised pre-training; it is then used to reconstruct images and subsequently estimate the illumination. The unsupervised pre-training increases generalization. Bag of Colour Features (BoCF) [55] proposed a network consisting of three parts: two convolutional layers, the BoCF part, and fully connected layers. The key concept of this network is the BoCF part, which links the convolutional layers to the fully connected layers: its input is the feature maps and its output is a histogram representation, or codebook, which feeds the fully connected layers. In addition, this method improved the results with two attention mechanisms. In [56], a sensor-independent network is presented. This network consists of two sub-networks: in the first, a mapping matrix is trained; the second applies this matrix to the image and learns the mapped illumination; finally, combining the mapping matrix and the mapped illumination yields the overall illumination estimate. In [57], the network uses semantic pooling for illuminant estimation: it generates weight maps using semantic pooling of image patches, estimates the illumination for each patch, and obtains the global estimate by combining the patch estimates. Convolutional Mean (CM) [58] is a simple two-layer CNN in which the final illumination is estimated by weighted global average pooling. The quasi-unsupervised method [59] presents a deep network in which achromatic pixels are extracted from the grayscaled version of the input image; the network then estimates the illuminant as a weighted average, applying the extracted achromatic pixels as weights on the input image. In [60], k-means clustering is used to generate n illuminations for the input image.
The proposed network combines these illuminations into one illuminant using the posterior probability distribution; the network is then trained with backpropagation to achieve a more accurate illumination. [61] proposed a sequential convolutional neural network in which feature map re-weight units (ReWU) are generated using the selective Gray Point algorithm [62]. In this way, the image feature maps are weighted so that the pixels informative for illuminant estimation are highlighted; the final illumination is then estimated using average pooling.

WEIGHT-BASED COLOUR CONSTANCY USING CONTRAST STRETCHING (THE PROPOSED ALGORITHM)
As explained earlier, pixels with high intensity are more informative about the colour of the light source. In methods like White Patch, Max-RGB, and the Bright Pixels method, only the pixels with maximum intensity, or a small percentage of the bright pixels, are involved in illuminant estimation, and the other pixels are ignored. Although these methods work well on images that contain white or bright pixels, when there is no bright pixel in the image or the image is relatively dark, they cannot provide an appropriate estimate. These methods are also very prone to noise and colour clipping [15]. In addition, Cheng [6] argues that dark pixels also give information about the colour of the light source and should not be completely ignored, so he uses equal percentages of bright and dark pixels and ignores the pixels in between. He also shows that increasing the distance between the pixel intensities in the image improves the efficiency. However, given that bright pixels are more informative than dark ones, selecting equal percentages of bright and dark pixels does not seem appropriate. On the other hand, in methods like Gray-World, all pixels, regardless of their intensity values, have the same role in estimating the colour of the light source; these methods do not consider the importance of the bright pixels. In addition, although the Gray-World method is widely used in colour constancy, its performance depends heavily on its assumption holding: if the image does not have enough variation in colours, the Gray-World method gives a very bad result [2].
Considering these problems, we propose an algorithm in which all pixels participate in the illumination estimation, with each pixel assigned a weight proportional to its intensity. To increase the influence of the high-intensity pixels and to increase the distance between the pixel intensities in the image (i.e. to increase the contrast), we suggest using a convex transformation function that supports both ideas.
One such function is the Power-Law (Gamma) transformation [63], shown in Figure 1. The functions with γ > 1 (convex functions) perform better in colour constancy than the functions with γ < 1 (concave functions). The slope of a convex function increases with increasing intensity, so these functions stretch the high intensities more strongly. As a result, in the illuminant estimation process, increasing the influence of the pixels that are more informative about the colour of the light source improves the efficiency of the colour constancy methods. In addition, convex functions improve the efficiency by increasing the contrast. For these reasons, the Shades of Gray method with power parameter p > 1 gives better results than the Gray-World algorithm with p = 1.
Because brighter pixels are more informative about the colour of the light source, we need a weight function that assigns higher weights to brighter pixels and lower weights to darker ones. In addition, we need the function to increase the contrast, so it must be convex. We use a contrast stretching transformation function (CSTF) [64] to calculate the weights. This function increases the image contrast by increasing the distance between the upper and lower brightness values non-linearly, and it assigns a weight in (0,1) to every gray-level brightness. The weight function is shown in equation 11:

W_E( I(x) ) = 1 ∕ ( 1 + ( m ∕ I(x) )^E ) (11)

where I = R + G + B is the gray-level brightness of the image, x is the spatial coordinate, E indicates the slope of the weight function, and m is the maximum brightness value in the input image.
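The weight function of equation 11 can be sketched directly; this is an illustrative numpy sketch (function name and the small epsilon guard against zero brightness are ours).

```python
import numpy as np

def cstf_weights(I, E, m):
    """Contrast stretching transformation function of equation 11:
    W_E(I) = 1 / (1 + (m / I)^E), where m is the maximum brightness of
    the image and E controls the slope. Weights lie in (0, 1) and grow
    monotonically with brightness I = R + G + B.
    """
    I = np.asarray(I, dtype=float)
    return 1.0 / (1.0 + (m / np.maximum(I, 1e-8)) ** E)
```

Note that with m chosen as the maximum brightness, every pixel falls on the convex part of the curve and the brightest pixel receives a weight of exactly 1∕2, the midpoint of the stretch.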
The contrast stretching function shown in Figure 2 is a flexible weight function that can take different shapes by changing E and m. From the convexity perspective, this function has two parts: the first part, from intensity 0 to m, is convex, and the second part, from m to 255, is concave. By selecting m as the maximum brightness of the image, we use only the convex part of the contrast stretching function. The key superiority of the proposed contrast stretching function over other convex functions is its ability to adjust the position of the maximum stretch for each image: since the contrast stretching function reaches its maximum slope at position m, selecting m as the maximum brightness of the image places the highest stretch at m. In addition, the slope of the function is adjusted based on the maximum brightness; therefore, as shown in Figure 2(b), for images with low brightness (i.e. low maximum brightness), the function has a sharper slope than for bright images (i.e. high maximum brightness). As a result, the proposed algorithm improves the efficiency for dark images and images with low brightness contrast. We propose a method in which the illumination is estimated using a weighted average; Equation 12 presents this algorithm mathematically.
k e_i = ( Σ_x W_E(I(x)) f_i(x) ) / ( Σ_x W_E(I(x)) ),  i = R, G, B    (12)

where e_i is the estimated illuminant in channel i = R, G, B; W_E(I(x)), derived from equation 11, is the weight of the pixel at spatial coordinate x; and I = R + G + B is the brightness of the image. E is a tunable parameter that adjusts the strength of the weight function, and f_i(x) is the gray level of pixel x in channel i = R, G, B. As in equation 5, k is a normalization factor, eliminated by normalization, and the normalized colour of the light source e is obtained as ê = ke/|ke|. Since, for each image, the three colour channels use the same weight scheme, as in the two previous weight-based methods, the denominator of equation 12 is ignored.
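Putting equations 11 and 12 together, a minimal sketch of the whole estimator might look as follows. It assumes a linear H × W × 3 image and the CSTF form above; the function name `estimate_illuminant` is ours:

```python
import numpy as np

def estimate_illuminant(img, E):
    """Weighted-average illuminant estimate (a sketch of equation 12).

    img: H x W x 3 linear RGB image. Returns the unit-norm estimate e_hat.
    """
    I = img.sum(axis=2)                                   # brightness I = R + G + B
    m = I.max()
    w = 1.0 / (1.0 + (m / np.maximum(I, 1e-12)) ** E)     # CSTF weights (eq. 11)
    e = (w[..., None] * img).sum(axis=(0, 1))             # numerator of eq. 12
    return e / np.linalg.norm(e)                          # normalize: e_hat = e / |e|
```

Because the same weights are shared by all three channels, dividing by the weight sum would only rescale e, and the final normalization removes that scale; this is why the denominator of equation 12 can be ignored.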

Partitioning the colour domain
As previously mentioned, our proposed method works in the colour domain rather than the spatial domain, so we have no spatial operations such as filtering. Therefore, for noise reduction, we group pixels with close brightness. To this end, we first sort all pixels based on their brightness value I(x) from minimum to maximum, then partition them into d equal parts. In each part, the mean brightness of its pixels is assigned to the whole part. Therefore, all the pixels in each part get the same brightness value and consequently the same weight. This approach not only decreases the impact of noise but also increases the algorithm's performance. There are other methods, such as Cheng's method [6] and the Bright Pixels method [15], that use a dividing mechanism. These algorithms divide the pixels into 100 parts and then select only the parts with the highest and lowest brightness for illuminant estimation. In contrast, our method assigns a weight to each part of the entire brightness domain and, contrary to the above-mentioned methods, does not ignore the brightness in any part. For this reason, our algorithm is more efficient. Figure 3 shows the steps of the proposed algorithm on a test image, along with the colour distributions of the input and weighted images. As seen, the colour distribution of the weighted image is centralized around the colour-of-light-source axis, shown in red, so the weighted image provides more accurate estimates. The pseudo code of the proposed algorithm is given in Algorithm 1.
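The partitioning step can be sketched as follows. Here `d` plays the same role as in the text, and using `np.array_split` to handle a pixel count not divisible by d is an implementation detail we are assuming:

```python
import numpy as np

def partition_brightness(I_flat, d):
    """Group pixels into d near-equal-size brightness bins (in sorted order)
    and replace each pixel's brightness with its bin mean.

    This is the noise-reduction step applied before weighting: all pixels
    in a bin receive the same brightness and hence the same weight.
    """
    order = np.argsort(I_flat)                 # indices of pixels, darkest first
    parts = np.array_split(order, d)           # d near-equal parts
    I_part = I_flat.astype(float).copy()
    for idx in parts:
        I_part[idx] = I_flat[idx].mean()       # assign the bin mean to every member
    return I_part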

EXPERIMENTAL RESULTS
In this section, the proposed approach is evaluated using two benchmark databases. In addition, it is compared with other benchmark approaches and the state of the art methods. The organization of this section is as follows. The evaluation methods are described in Section 5.1. Image databases used for performance evaluation are presented in Section 5.2. Finally, the experimental results are presented and discussed.

Evaluation metric
The most famous and widely used criterion for evaluating colour constancy methods is the recovery angular error [65,66], which is the angle between the actual and estimated illuminants. Equation 13 shows how to calculate the angular error:

A_angle = cos^{-1}( (e_e · e_a) / (||e_e|| ||e_a||) )    (13)
where A_angle denotes the angular error between the estimated light source e_e and the actual light source e_a. Recently, a new metric called the reproduction angular error has been suggested for evaluating the performance of illuminant estimation algorithms; it is more effective than the traditional recovery angular error. In this criterion, the reproduction angular error, as shown in equation 14, is calculated as the angle between the white illuminant and the actual illuminant divided channel-wise by the estimated illuminant [67,68]:

A_angle = cos^{-1}( (e_u · e_c) / (||e_u|| ||e_c||) )    (14)
where A_angle denotes the angular error between the white light source e_c = [1, 1, 1]^T and e_u = e_a / e_e, the actual illuminant e_a divided channel-wise by the estimated light source e_e. Note that in equations 13 and 14, "·" and ||·|| indicate the dot product of two vectors and the Euclidean norm of a vector, respectively.
To evaluate the performance of a colour constancy algorithm, the angular error is calculated on all images of a database and its median is used as the measure [66]. We evaluate the proposed method according to both the recovery and the reproduction angular errors.
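Both metrics are straightforward to implement. A sketch under the definitions above, with angles returned in degrees (the function names are ours):

```python
import numpy as np

def recovery_angular_error(e_est, e_act):
    """Recovery angular error (eq. 13): angle between estimated and actual illuminants."""
    cos = np.dot(e_est, e_act) / (np.linalg.norm(e_est) * np.linalg.norm(e_act))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))   # clip guards rounding

def reproduction_angular_error(e_est, e_act):
    """Reproduction angular error (eq. 14): angle between e_u = e_a / e_e
    (channel-wise division) and the white illuminant e_c = [1, 1, 1]."""
    e_u = np.asarray(e_act, float) / np.asarray(e_est, float)
    e_c = np.ones(3)
    cos = np.dot(e_u, e_c) / (np.linalg.norm(e_u) * np.linalg.norm(e_c))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Note that both errors are invariant to the overall scale of the illuminant vectors, so unnormalized estimates can be compared directly.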

Benchmark databases
Various image databases for evaluating colour constancy methods have been presented. This paper uses two benchmark linear databases for which the recovery angular errors of the state of the art methods are reported on the colour constancy website [7] and the NUS database website [8].

Gehler-Shi ColorChecker database
The images of the original ColorChecker database [30] are not linear because they were produced with automatic camera post-processing settings. For this reason, Shi reprocessed the RAW-format images and created a new linear 12-bit image database [69]. This database contains 568 indoor and outdoor images. A Macbeth ColourChecker providing the ground truth is placed in the scene prior to image acquisition; it must be masked during the illuminant estimation procedure so that it does not bias the result. The images of this database are of high quality and uncorrelated.

Eight NUS databases
The NUS database [6] consists of 1736 indoor and outdoor linear images photographed with eight different high-quality cameras. As in the ColorChecker database, each scene contains a Macbeth ColourChecker providing the ground truth, which is masked during processing. Moreover, this database contains several images of the same scene taken by different cameras; for this reason, it offers more comparison capabilities than other databases [6]. Tables 1 and 2 show the performance of the proposed algorithm and some state of the art algorithms according to the recovery angular error on the NUS databases and the Gehler-Shi ColorChecker, respectively.
Note that the lowest errors for statistics-based methods and learning-based methods are highlighted in green and yellow, respectively.
As can be seen, the proposed method (WCS with partitioning) is not only comparable to the statistics-based state of the art method GI [5], but also performs better than some complex and time-consuming learning-based methods. Note that GI and our proposed method (WCS with partitioning) have the same number of parameters. According to Table 1, our method is on the same level as the statistical state of the art method GI [5]. According to Table 2, our method outperforms all the statistics-based methods except GI [5].
It is worth mentioning that although deep learning methods such as FFCC [49] and C4 [53] have the lowest errors, they use a deep learning approach that has a complex implementation with a large number of parameters and requires long training times and powerful hardware.
Note that for these two tables, the results are taken from the colour constancy website [7], the NUS website [8] and recent state of the art papers. Table 4 shows the performance of the proposed algorithm, the state of the art and some other algorithms on the ColorChecker database according to the reproduction angular error. All the results in this table are taken from the colour constancy website [7]. In addition, we computed the reproduction angular errors of the statistical state of the art method GI and the deep learning state of the art method FFCC and added them to this table. We note that the reproduction angular error is a relatively new criterion, and there are very few databases and methods for which it is reported; therefore, Table 4 contains fewer methods than Tables 1 and 2. It is worth mentioning that for each algorithm in Table 1 (related to the NUS databases), the error metrics are geometric means across the images taken by the eight cameras, as calculated in [48].
Note that in Table 1, the results are reported from the NUS website [8] and state of the art papers.
In Tables 1, 2 and 4, two versions of our algorithm are reported: the proposed method with and without partitioning. The results in these tables show that partitioning in the proposed algorithm decreases the angular error.
In addition, parameters for NUS databases and Linear Colour Checker (by Shi) database are tabulated in Table 3.

DISCUSSION
This paper proposes a novel statistical algorithm that uses weights extracted from the image's brightness contrast. Overall, by analyzing the experimental results on two benchmark databases, we conclude that although the proposed method does not perform as well as some deep learning methods, it achieves results similar to the statistical state of the art method. In addition, the computational cost of the proposed algorithm is very low. Moreover, because the weight function is created exclusively from the features of each image, the proposed algorithm is more robust and general than other statistical methods.
In this section, we analyze the experimental results and measure the performance of our algorithm against the state of the art methods. To this end, in Section 6.1 we analyze the effects of the tuning parameters on the performance of the proposed algorithm and present the ranges of parameters for which it performs best. In Section 6.2, we analyze and compare the computational cost of the proposed and state of the art methods. In Section 6.3, we discuss why some methods have lower error than our method. In Section 6.4, we explain the applications of our method. In Section 6.5, we discuss the efficiency of our algorithm. Finally, in Section 6.6, we explain colour clipping.

Parameter setting
In the proposed method we use two parameters: E, for adjusting the slope of the weight function, and d, for brightness partitioning. Note that the proposed method works in the colour domain, so it has no smoothing or derivative-order parameters. We executed the proposed algorithm for different values of E and d on different databases. By analyzing the impact of changing these two parameters on performance, we detect the ranges or values for which the proposed algorithm performs best. Figure 4 shows the impact of changing the tunable parameters E and d on the error statistics of the proposed method on the eight NUS databases. Figures 4(c) and 4(d) show the standard deviations of the errors related to Figures 4(a) and 4(b), respectively. The averages of the mean, median, trimean, best-25% and worst-25% angular errors are tabulated in the last column. It is worth mentioning that, due to equations 11 and 12, when E = 0 all the weights become equal, so the weighted average reduces to the simple average and the proposed method reduces to the basic Gray-World method. As seen in the chart for parameter E (Figure 4(a)), E = 0 corresponds to the basic Gray-World method for the eight NUS databases. Moreover, in Figure 4(a) the error behaviour for all databases indicates that for E > 0 the error decreases; after E = 5 the reduction rate slows and the error slowly converges to a constant value. Figure 4(c) shows similar behaviour for the standard deviation: at E = 0, which is the basic Gray-World, the standard deviation is maximal; for E > 0 it decreases, reaching its minimum at E = 5. Also, in Figures 4(b) and 4(d), d = 0 corresponds to no partitioning. As d increases, the error and standard deviation decrease, reaching their minimum at d = 300.
As seen in Figures 4(b) and 4(d), although partitioning improves the performance of the proposed algorithm, selecting too high a value for d increases the angular error and the standard deviation.
By analyzing Figure 4 for both parameters E and d, we conclude that, despite using different databases, changes in the parameter values produce similar results across those databases. Therefore, the best performance of the proposed algorithm lies in certain ranges or values for both parameters. In addition, these ranges are robust to changes of database or camera.
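The degenerate case E = 0 discussed above can be checked numerically: with the assumed CSTF form 1/(1 + (m/I)^E), all weights become the same constant, so the weighted average coincides with the Gray-World channel mean. A minimal self-contained check:

```python
import numpy as np

# With E = 0, every CSTF weight equals 1/(1 + 1) = 0.5, so the weighted
# average of equation 12 reduces to the plain channel mean (Gray-World).
rng = np.random.default_rng(0)
img = rng.random((6, 6, 3)) + 0.01            # synthetic image, nonzero brightness
I = img.sum(axis=2)                           # brightness I = R + G + B
w = 1.0 / (1.0 + (I.max() / I) ** 0.0)        # constant 0.5 everywhere
weighted_avg = (w[..., None] * img).sum((0, 1)) / w.sum()
gray_world = img.mean(axis=(0, 1))
assert np.allclose(weighted_avg, gray_world)
```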

Computation cost
In this section, we analyze and compare the computational cost of the proposed method. The best way to make such a comparison would be to provide the big-O complexity of each method, but since this is not provided for the other methods, we implemented all methods on our PC, measured the actual execution times and used them as a measure of computational cost. For learning-based methods, it was not possible to compute the time required for the learning phase on our PC. Therefore, in Table 5 we first provide a general guideline for evaluating the two groups of statistical and deep learning methods in terms of their required computation. Then, since our proposed method lies in the statistical category, we implemented the main statistics-based methods and ran them on our PC. In Table 6, the CPU execution times of these methods and of our proposed methods are compared. All algorithms in this table were implemented in Matlab and run on a computer with an Intel(R) Core(TM) i5 CPU at 2.4 GHz and 16 GB RAM.
In Table 5, we tabulate important criteria for measuring computational cost. As shown, statistics-based methods generally have lower computational cost than deep learning methods.
Statistical methods estimate the illuminant using statistical features, so their execution times are very low and close to each other. For all methods in Table 6, the parameters used for execution and the corresponding errors are tabulated in Tables 3 and 2, respectively.
We note that the dominant factors in the execution time of statistics-based algorithms are processes such as smoothing and the order of the derivative. For example, a large smoothing kernel requires more execution time.
That is why GGW and the state of the art method GI, which involve more smoothing, have higher execution times, while the WP and GW methods, which use neither smoothing nor derivatives, have the lowest execution times. In addition, since the proposed method and Cheng's method work in the colour domain rather than the spatial domain, they do not require derivative or smoothing processes. As shown in Table 6, the execution times of these two methods are therefore relatively low and close to each other. In comparison with the state of the art method GI, our method has a lower execution time while having almost the same level of error.

Comparison with the state of the art methods
Regarding the comparison with the GI method, we note that GI involves multiple denoising operations such as smoothing and filtering. In addition, despite having two tuning parameters, it uses further constant parameters in its denoising process. It appears that these constant parameters are set to achieve the best performance on the reported databases. Although these filtering processes increase accuracy, they also decrease the generality of the algorithm. Therefore, as tabulated in Tables 1 and 2, GI has the best results on the ColorChecker database, which has a small number of images, but not on the NUS database, which has a larger number of images and more variety. In comparison with our algorithm, GI has a higher worst-25% error, which is the mean of the highest 25% of error values. In general, on the NUS database the proposed algorithm and GI have very close results. On the other hand, as discussed in Section 6.2, GI achieves lower error through denoising at the cost of more execution time. In addition, as shown in Figure 5, our algorithm performs better on some challenging images.

Agnostic colour constancy
As seen in Tables 1, 2 and 4, deep learning methods have the lowest errors. As explained earlier, learning-based methods are strongly biased by the training phase. Recent research in colour constancy has investigated camera-agnostic evaluation [5,53]. The results show that selecting the training images from a different database significantly decreases the accuracy of learning-based methods. In other words, under camera-agnostic evaluation, the performance of learning-based methods declines to levels even worse than those of the statistics-based methods. Given these facts, a direct comparison of statistics-based and learning-based methods may not be fair. Figure 5 compares the efficiency of the proposed algorithms and some statistics-based and deep learning methods on several images from the two uncontrolled image databases, NUS and ColorChecker.
This figure shows images for which the proposed method performs significantly better than the deep learning state of the art method FFCC [49], the statistical state of the art method GI [5], Cheng's method [6] and the Bright Pixels (BP) method [15]. Note that the images of the NUS database were captured by eight different cameras, whose names are displayed in the top left corner of each image. The first, second and third rows of Figure 5 show images from the linear ColorChecker database; the other rows show images from the NUS databases.
As shown in Figure 5, there are relatively dark images, or images with low brightness contrast, for which the proposed algorithm performs better than the other methods. We intentionally chose such images to show the performance of our algorithm on challenging images compared with other algorithms.
In addition, in the NUS databases, some scenes are captured by several different cameras. The high performance of our algorithm on a scene captured by different cameras reflects its stability regardless of camera type.

The application of our method
In practice, most images are not captured under perfect or standard lighting or by professional photographers; they may well have been captured in non-standard lighting environments. Such images are very challenging for colour constancy algorithms because brightness and contrast are two key factors in illuminant estimation. Our algorithm is a weight-based method in which the weights are set to highlight these two factors. Therefore, as shown in Figure 5, on these challenging images our algorithm performs better than other methods, even the state of the art methods. As a result, our method can be applied to a wide range of images.

The efficiency of our method
To analyze the efficiency of an algorithm, there is a trade-off between accuracy and computational cost. As tabulated in Tables 1, 2 and 4, our algorithm has lower error than most statistical and learning methods. On the other hand, as the pseudo-code in Algorithm 1 shows, our method has a simple implementation. In addition, as discussed in Sections 6.1 and 6.2, the proposed algorithm uses few parameters and is robust in comparison with learning-based and deep learning methods, while showing lower computational cost. Moreover, as discussed in Section 6.3, our algorithm has almost the same level of error as the state of the art method GI while having less execution time. In addition, as shown in Figure 5, our algorithm performs well on challenging images taken in relatively dark scenes or scenes with low brightness contrast. Therefore, as discussed in Section 6.4, our method can be applied to a wide range of images. We conclude that our proposed algorithm is an efficient method for illuminant estimation. Since the number of parameters is an important factor in computational cost, our method benefits from using only two parameters. Based on this criterion, in future work we plan to reduce the number of parameters to one, or even to a parameter-free method, while maintaining the efficiency of the method.

[Figure 5 caption: Comparison of the proposed method (WCS), FFCC [49], GI [5], Cheng [6] and the Bright Pixels method (BP) [15] on images from the NUS databases and the linear ColorChecker database. For each image, the camera type is displayed in the top left, and the recovery angular errors of the proposed method (WCS) and the compared method are displayed in the bottom left and bottom right corners, respectively.]

Colour clipping
When capturing an image, pixels with intensities outside the digital camera's range are mapped to within the camera range. This phenomenon, called colour clipping, affects the results of colour constancy methods [14,73]. Colour constancy algorithms therefore usually consider a saturation threshold on the maximum intensity of the digital camera; all intensities equal to or higher than this threshold are masked to reduce the influence of colour clipping.
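A minimal sketch of such a saturation mask follows; the threshold fraction 0.98 and the function name `clipping_mask` are illustrative choices of ours, not values taken from the paper:

```python
import numpy as np

def clipping_mask(img, max_value, saturation=0.98):
    """Boolean mask of pixels safe to use for illuminant estimation.

    True where no channel reaches the saturation threshold, defined as a
    fraction of the camera's maximum representable value (e.g. 4095 for
    a 12-bit sensor). Masked-out pixels are likely colour-clipped.
    """
    return (img < saturation * max_value).all(axis=2)
```

In practice the mask is applied before the weighting step, so clipped highlights do not dominate the brightness-weighted average.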

CONCLUSION
This paper proposes a new weighted statistics-based method in which the weights are created using contrast stretching. In this regard, two groups of colour constancy methods were analyzed: statistics-based and learning-based methods. The performance of the proposed weight-based method was evaluated on two benchmark databases whose images were taken under a large number of different light sources. Experiments on the ColorChecker and NUS databases showed decreases in angular error of up to 67% and 38%, respectively, in comparison with the Gray-World method. Moreover, our experiments showed that the proposed algorithm is competitive with the state of the art methods while requiring low computational complexity. In future work, we will extend the method presented here into a learning-based method. For example, in classification-based learning methods, we could determine the parameter E for each image class based on its features, which we expect to improve accuracy.