Adaptive colour restoration and detail retention for image enhancement

Computer vision-based crowd understanding and analysis technology has been widely used in public safety due to the rapid growth of population and the frequent occurrence of various accidents. Improving imaging quality is key to improving the performance of crowd analysis, density estimation, target recognition, segmentation, and detection in computer vision tasks. Owing to complex imaging environments such as fog and low illumination, images taken outdoors often suffer from colour distortion, lack of detail, and poor imaging quality, which affect subsequent visual tasks. To improve imaging quality and visual effect, an adaptive colour restoration and detail retention-based method is proposed for image enhancement. First, to overcome the colour distortion caused by low illumination and fog, a multi-channel fusion-based adaptive colour restoration method is proposed. Then, to make the enhancement result more consistent with human observation, a detail retention-based method is applied to enhance the details. Experimental results demonstrate that the authors' method is effective and outperforms the compared methods in both visual and objective evaluations.


INTRODUCTION
Public safety problems have attracted wide attention due to the frequent occurrence of safety accidents around the world. Crowd density analysis, detection and intelligent transportation are popular research topics in the field of public safety. In particular, with the rapid growth of population, crowd counting and analysis have been widely used in video surveillance, traffic control and sports events. In recent years, the development of artificial intelligence technology has enabled computer vision-based crowd understanding and analysis to develop rapidly. The performance of computer vision-based technology depends on imaging quality. Therefore, image enhancement is an important technology in computer vision tasks, such as person counting, crowd analysis, re-identification, target segmentation and recognition [1][2][3][4], and is widely used in public safety, military and other visual applications.
With the rapid development of deep learning and computer vision technology, many image-based intelligent tasks have been realized, such as intelligent security and intelligent transportation. Based on pattern recognition and machine learning techniques, person re-identification, vehicle detection and location can be accomplished by analysing the images captured by different cameras. However, these outdoor images are usually disturbed by the environment [5]. Images captured in rain, fog, smoke or low illumination lose contrast and colour fidelity, as shown on the left in Figure 1(a). Therefore, it is of great significance to improve the quality of degraded images. Removing haze, improving contrast and correcting colour shift can significantly increase the visibility of the scene, which is meaningful for both human observation and computer vision tasks [6,7]. (This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2021 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.)
Over the past decades, many physical-model-based and non-physical-model-based methods have been proposed. Since degraded images exhibit different problems, image enhancement algorithms can be subdivided into haze removal, low-illumination image enhancement, colour enhancement, underwater image enhancement and other types [8][9][10][11]. Considering that the colour components are highly correlated, the retinex theory was proposed based on the human perception model of brightness and colour information for the low-light enhancement task [12], as shown in Figure 1(b). Based on retinex theory, a large number of improved colour image enhancement methods have been developed, such as single-scale retinex (SSR) [13]. The variable parameter of the Gaussian function in SSR plays a decisive role in image enhancement. However, SSR cannot achieve dynamic range compression and colour fidelity at the same time. To overcome this problem, an automated multiscale retinex with colour restoration (MSRCR) based method was proposed by S. Parthasarathy et al. [14], which adjusts the proportional relationship among the three colour channels of the original image, highlighting relatively dark areas and improving local contrast. Compared with SSR, the result enhanced by MSRCR is more realistic to human visual perception. He et al. proposed a dark channel prior (DCP) based method [15] for the image dehazing task, which calculates the transmission map of the source degraded image. The DCP-based method assumes that the concentration of haze can be estimated from the degree of grey and white colour in the dark channel. It has low complexity, since it only needs to estimate the transmission map to enhance the degraded image. Li et al. proposed a robust retinex model (RRM) for low-light image enhancement which improves the performance on low-light images with intensive noise [16]. Recently, Hao et al. [17] proposed a semi-decoupled decomposition (SDD) based enhancement method which shows advantages in the low-light image enhancement task.
Underwater image enhancement also receives wide attention. Images captured underwater usually suffer from poor resolution, colour imbalance and blurring, as shown in Figure 1(c). The blue-green appearance at depth makes it difficult to perform underwater visual tasks. Due to the low contrast and poor visual effect caused by insufficient light, underwater image enhancement is an even more challenging task. Drews et al. proposed an underwater dark channel prior (UDCP) based method which assumes that the predominant source of visual information in an underwater image lies in the blue and green channels [18]. Because red light attenuates faster than blue and green light in water, the red channel contains less information than the other two channels. The UDCP-based method achieves better estimates of the transmission map of underwater images than DCP. C. O. Ancuti et al. proposed a colour balance and fusion based method for underwater image enhancement which can effectively enhance images taken in shallow water [19].
Given the problems of colour imbalance and lack of clarity, we propose an adaptive colour restoration and detail retention-based method for image enhancement. For colour restoration, combined with Reinhard-based colour transfer [20], we propose an adaptive colour restoration method that analyses the colour distribution of the degraded image and adjusts the colour distribution centre adaptively in the Lab colour space. Then, the colour restoration result is processed by a MuGIF-based detail enhancement method. Compared with existing CNN-based models, the proposed method has low complexity and does not require a large amount of data for training. Moreover, different types of degraded images can be processed by adjusting a few parameters.
The contributions of this paper can be summarized as follows: (i) We present an adaptive colour restoration method which can effectively correct colour distortion adaptively. By analysing the colour distribution and adjusting the colour distribution centre in the Lab colour space, the colour correction result can be obtained quickly. (ii) A MuGIF-based detail retention method is proposed. By enhancing the intensity component of the colour correction result, the visual contrast can be improved without changing the colour information. (iii) We conduct extensive experiments on different types of degraded images to verify the effectiveness of the proposed method. Experimental results show that the proposed method outperforms the compared methods in terms of colour correction and detail enhancement.
The rest of this paper is organized as follows. Section 2 briefly introduces the motivation and preliminary. Section 3 presents the proposed enhancement method. Section 4 reports the experimental results and analysis. Conclusions are summarized in Section 5.

MOTIVATION AND PRELIMINARY
The widely used imaging model of a haze image in computer vision and computer graphics can be described as [15,21,22]:

$$I(x) = J(x)\,t(x) + A\,(1 - t(x)),$$

where $I$ is the observed intensity, $J$ is the scene radiance, $A$ is the atmospheric light, and $t(x)$ is the scattering function, which can be expressed as

$$t(x) = e^{-\beta d(x)},$$

where $\beta$ is the scattering coefficient. The colour distortion of the degraded image increases with the scene depth $d(x)$. Similarly, one common underwater imaging model derives from the Jaffe-McGlamery model as [23,24]:

$$I_c(x) = J_c(x)\,t_c(x) + A_c\,(1 - t_c(x)),$$

where $c$ represents a colour channel. Traditional colour correction methods can be divided into model-based and image-based algorithms. Image-based methods estimate the transmission map directly from the degraded image; haze removal and colour correction are then performed by

$$J(x) = \frac{I(x) - A}{\max(t(x),\, t_0)} + A,$$

where $t_0$ is a lower bound that avoids division by very small transmission values. The key of image-based methods is how to estimate the transmission map accurately. Model-based methods recover the image directly by modelling the image degradation process. Recently, convolutional neural network (CNN) and generative adversarial network (GAN) based models have received wide attention for image enhancement tasks [25][26][27]. Due to their excellent feature extraction ability, CNN-based models have achieved impressive progress in image fusion [28,29], detection [30], and classification [31] tasks. Many CNN-based methods can effectively extract the gradient map and the estimated feature map to reconstruct the enhanced result. However, the performance of a CNN-based enhancement model depends on the training dataset and hyperparameter setting. Generally, these models have low generalization ability when dealing with other types of degraded images.
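The recovery step above can be sketched in a few lines. The following is a minimal illustration in Python with NumPy; the function names are ours, and it assumes the transmission map and atmospheric light have already been estimated by some prior (e.g. DCP):

```python
import numpy as np

def transmission_from_depth(d, beta=1.0):
    """Scattering model t(x) = exp(-beta * d(x))."""
    return np.exp(-beta * d)

def recover_scene(I, t, A, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t) to recover the radiance J.

    I  : observed image in [0, 1], shape (H, W, 3)
    t  : transmission map, shape (H, W)
    A  : atmospheric light, length-3 array
    t0 : lower bound on t, avoids amplifying noise where haze is dense
    """
    t = np.maximum(t, t0)[..., None]   # clamp and broadcast over colour channels
    J = (I - A) / t + A                # J = (I - A) / max(t, t0) + A
    return np.clip(J, 0.0, 1.0)
```

Composing a synthetic hazy image with the forward model and then calling `recover_scene` with the same `t` and `A` returns the original radiance, which is a useful sanity check for any transmission estimator.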
Image colour can be represented by the distribution of colour channels in different colour spaces. In the RGB colour space, the three channels are correlated, which means that changing the colour distribution of one channel affects the overall colour information. For this reason, colour correction needs to be performed in an uncorrelated colour space. The Lab colour space was defined by the International Commission on Illumination (CIE) [32] and was designed to be perceptually consistent with human vision, which means that the same numerical change in these colour channels corresponds to approximately the same amount of visually perceived change.
The colour restoration part of the proposed method is carried out in the Lab colour space.
Detail retention, an active research field in image processing and computer vision, is often a pre-processing step for other tasks. Many methods have been proposed for detail enhancement and detail detection in image processing. For instance, the difference of Gaussians (DoG) based detail enhancement method has been widely used in computer vision tasks [33]. The guided image filter (GIF) [34] is another effective tool aimed at edge preservation and denoising. The simplified guided filter can be described as:

$$q_i = \sum_j W_{ij}(I)\, p_j,$$

where $I$ is the guide image, $p$ is the input image to be processed, and $q$ is the obtained result. $W_{ij}$ is the weight, which can be calculated by:

$$W_{ij}(I) = \frac{1}{|\omega|^2} \sum_{k:(i,j)\in\omega_k} \left(1 + \frac{(I_i - \mu_k)(I_j - \mu_k)}{\sigma_k^2 + \epsilon}\right),$$

where $\mu_k$ is the mean value in the local window $\omega_k$, $\sigma_k^2$ is the variance of the pixels within $\omega_k$, and $\epsilon$ is a regularization parameter. Based on GIF, many improved models have been proposed, such as the self-guided filter (SGF) [35], weighted guided image filter (WGF) [36] and mutually guided image filter (MuGIF) [37]. Due to the performance of MuGIF in edge-aware smoothing, HDR compression and image feathering, a MuGIF-based detail retention method is implemented in the proposed method.
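As an illustration of the GIF formulation above, the following sketch implements He et al.'s equivalent linear-model form of the guided filter, which computes the same output via local mean and variance statistics. The box-filter helper and parameter values are illustrative choices, not taken from this paper:

```python
import numpy as np

def box_sum(x, r):
    """Sum over a (2r+1)x(2r+1) window, truncated at image borders."""
    H, W = x.shape
    # integral image with a leading zero row/column
    s = np.pad(np.cumsum(np.cumsum(x, axis=0), axis=1), ((1, 0), (1, 0)))
    r1 = np.minimum(np.arange(H) + r + 1, H)   # exclusive lower-right bounds
    r0 = np.maximum(np.arange(H) - r, 0)       # inclusive upper-left bounds
    c1 = np.minimum(np.arange(W) + r + 1, W)
    c0 = np.maximum(np.arange(W) - r, 0)
    return s[r1][:, c1] - s[r0][:, c1] - s[r1][:, c0] + s[r0][:, c0]

def box_mean(x, r):
    """Mean over the same window (border windows are smaller)."""
    return box_sum(x, r) / box_sum(np.ones_like(x), r)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (linear-model form of GIF)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp      # covariance of guide and input
    var_I = box_mean(I * I, r) - mI * mI       # variance of the guide
    a = cov_Ip / (var_I + eps)                 # per-window linear coefficients
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r) # average coefficients, apply to I
```

Setting `I = p` gives the self-guided mode; larger `eps` smooths more aggressively.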

PROPOSED METHOD
In this section, we propose an adaptive colour restoration and detail retention-based method for image enhancement. As shown in Figure 2, the proposed method has two main stages, colour restoration and detail retention. First, the colour of degraded image is restored by the proposed adaptive colour restoration method. Next, to further improve the contrast, the MuGIF based detail retention method is applied to enhance the details in the second stage.

Colour restoration
Images captured under low light, fog, rain and snow often suffer from colour distortion. Therefore, improving colour information is an important part of image enhancement. Here we propose an adaptive colour restoration method based on an analysis of the colour distribution of the degraded image. As mentioned above, Lab is a channel-irrelevant colour space in which the intensity and colour information can be processed separately, so a colour image is first converted from RGB to the Lab colour space by the standard CIE transform. The mean values of the two chroma channels are computed as

$$\bar{P}_a = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} P_a(i,j), \qquad \bar{P}_b = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} P_b(i,j),$$

where $M$ and $N$ are the size of the image, and $P_a$ and $P_b$ are the values of pixel $(i,j)$ in the $a$ and $b$ channels, respectively. The values of $\bar{P}_a$ and $\bar{P}_b$ represent the overall colour information. Then the deviation between each pixel and the mean value in the $a$ and $b$ channels can be computed by

$$D_a(i,j) = P_a(i,j) - \bar{P}_a, \qquad D_b(i,j) = P_b(i,j) - \bar{P}_b.$$

The mean square value of the colour distribution in the $(a, b)$ coordinate space can be calculated as

$$M_{ab} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(D_a(i,j)^2 + D_b(i,j)^2\right)},$$

and the colour distortion coefficient can be obtained by

$$\rho = \frac{\sqrt{\bar{P}_a^2 + \bar{P}_b^2}}{M_{ab}}.$$

According to the value of $\rho$, the colour-restored image can be obtained. The corrected channels $a_r$ and $b_r$ of the colour restoration result can be computed by

$$a_r = \begin{cases}\lambda_1\,(P_a - \bar{P}_a), & \rho > T\\ P_a, & \text{otherwise,}\end{cases} \qquad b_r = \begin{cases}\lambda_2\,(P_b - \bar{P}_b), & \rho > T\\ P_b, & \text{otherwise,}\end{cases}$$

where $T$ is the colour distortion threshold and $\lambda_1$ and $\lambda_2$ are the adjusting weights of the colour channels. In our experiments, the best results are obtained near $T = 0.1$, $\lambda_1 = 1.05$ and $\lambda_2 = 0.95$.
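The colour restoration steps can be sketched as follows. This is a hypothetical reading of the procedure, assuming the chroma channels are already in Lab space (neutral grey at 0) and that the correction recentres and re-weights the (a, b) distribution when the distortion coefficient exceeds the threshold; the exact correction equations in the paper may differ:

```python
import numpy as np

def colour_restore(a, b, T=0.1, w1=1.05, w2=0.95):
    """Hypothetical sketch of the adaptive chroma correction in Lab space."""
    a_mean, b_mean = a.mean(), b.mean()
    # deviation of each pixel from the channel mean
    Da, Db = a - a_mean, b - b_mean
    # mean-square spread of the colour distribution in the (a, b) plane
    spread = np.sqrt(np.mean(Da**2 + Db**2))
    # distortion coefficient: offset of the distribution centre vs. its spread
    rho = np.sqrt(a_mean**2 + b_mean**2) / (spread + 1e-8)
    if rho > T:
        # pull the distribution centre toward neutral, re-weighting each channel
        a_r = w1 * (a - a_mean)
        b_r = w2 * (b - b_mean)
    else:
        a_r, b_r = a.copy(), b.copy()
    return a_r, b_r
```

A strong colour cast (large mean chroma offset relative to the spread) triggers the correction, while an image whose (a, b) distribution is already centred is left untouched.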

Detail retention
Guided image filtering, aimed at edge preservation and denoising, has been widely developed in recent years. Among these filters, MuGIF is a very flexible tool that works in various modes, including dynamic (self-guided), static/dynamic (reference-guided) and dynamic/dynamic (mutually guided) modes. Given two images $T$ and $R$ of the same size, the relative structure of $T$ with respect to $R$ is defined as

$$\mathcal{R}(T, R) = \sum_{i}\sum_{d\in\{h,v\}} \frac{|\nabla_d T_i|}{|\nabla_d R_i| + \epsilon_s},$$

where $\nabla_d$ denotes the gradient in the horizontal or vertical direction and $\epsilon_s$ is a small constant. $\mathcal{R}(T, R)$ measures the structure discrepancy of $T$ with respect to $R$: in edge regions of $R$ the penalty on $|\nabla_d T_i|$ is small, and it becomes large in flat regions. The objective of MuGIF can be represented as

$$\min_{T,R}\; \alpha_t\,\mathcal{R}(T, R) + \alpha_r\,\mathcal{R}(R, T) + \beta_t\,\|T - T_0\|_2^2 + \beta_r\,\|R - R_0\|_2^2,$$

where $\alpha_t$, $\alpha_r$, $\beta_t$ and $\beta_r$ are non-negative constants that balance the corresponding terms, and $\|\cdot\|_2$ is the $\ell_2$ norm, which avoids the trivial solution by constraining $T$ and $R$ not to deviate widely from the inputs $T_0$ and $R_0$, respectively. The filtered image $T$ can be obtained by iteratively solving

$$T^{(k+1)} = \left(\beta_t\, \mathbf{I} + \alpha_t \sum_{d} D_d^{\top} Q_d^{(k)} D_d\right)^{-1} \beta_t\, T_0, \qquad (19)$$

where $(k)$ denotes the $k$th iteration, $D_d$ is the discrete gradient operator in direction $d$, and $Q_d$ (with the analogous $P_d$ for updating $R$) is a diagonal weight matrix initialized from the input image. The details of MuGIF can be found in [37]. The performance of MuGIF is determined by the ratios $\alpha_t/\beta_t$ and $\alpha_r/\beta_r$. Taking the $L$ channel of the degraded image, $L_s$, as the input image $R$, the filtered image $T(L_s)$ can be obtained by Equation (19). Then the detail map can be computed by

$$D = L_s - T(L_s), \qquad (20)$$

and the detail retention result can be obtained by

$$L_r = L_s + \gamma\, D, \qquad (21)$$

where $\gamma$ is the detail amplification weight. Finally, the enhanced result is constructed by applying the inverse Lab-to-RGB conversion to the obtained $[L_r, a_r, b_r]$.
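A simplified sketch of the detail retention stage follows. A plain box blur stands in for MuGIF (which requires the iterative solver of [37]), and the amplification gain is an illustrative parameter rather than a value from the paper:

```python
import numpy as np

def smooth(L, r=2):
    """Stand-in smoother: box blur with edge padding (MuGIF in the paper)."""
    P = np.pad(L, r, mode="edge")
    out = np.zeros_like(L)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += P[dy:dy + L.shape[0], dx:dx + L.shape[1]]
    return out / (2 * r + 1) ** 2

def enhance_details(L, gain=1.5, r=2):
    """Detail retention: split off the structure layer, add the detail back amplified."""
    base = smooth(L, r)            # structure layer, T(L_s) in the paper
    detail = L - base              # detail map, Equation (20)
    return base + gain * detail    # enhanced luminance channel
```

With `gain = 1` the input is returned unchanged; values above 1 boost local contrast in the luminance channel while leaving the chroma channels untouched.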
To illustrate the effectiveness of the proposed detail retention method, a set of experimental results is shown in Figure 4. Figure 4(a) is taken from the TID2013 image dataset, and Figure 4(b) is the filtered result by Equation (19), from which it can be seen that the filtered result retains the main structural information. The obtained detail map (pseudo-colour image) is shown in Figure 4(c), from which it can be seen that the detail texture information is extracted effectively by Equation (20). The final enhanced result is shown in Figure 4(d). It has higher contrast than the source input image, which shows that the proposed method can effectively improve the visual effect.
In our experiment, the self-guided mode of MuGIF is used for detail map extraction. First, we set $\alpha_r = 0$. Then the evaluation metric USIM is used to measure the detail enhancement performance with different $\alpha_t$. Details about USIM are described in Section 4. From Figure 5 it can be seen that the best detail extraction result is obtained with $\alpha_t = 4.6$.

EXPERIMENTS AND ANALYSIS
To evaluate the effectiveness of the proposed enhancement method, different types of degraded images are tested in our experiment. Several typical and effective enhancement methods are also tested for comparison, including SSR [13], MSRCR [14], DCP [15], CBF [19], RRM [16] and SDD [17]. Among these, SSR and MSRCR are retinex-based methods for colour image enhancement tasks, DCP is the typical haze removal method, CBF is a colour balance and fusion based method for underwater image enhancement tasks, and RRM and SDD are effective recent algorithms for low-light image enhancement tasks. All the simulations are performed in MATLAB R2018a on a PC with an Intel i7 1.8 GHz CPU and 8 GB RAM.

Visual comparison
As shown in Figure 7, the enhancement results obtained by our method have advantages in colour restoration. From the red rectangular region, it can be seen that the proposed method enhances more detail information. Figure 8 shows a visual effect similar to Figure 7. Compared with the source degraded image in Figure 8, the visual effects of all the enhanced results are improved. The SSR and MSRCR based methods can effectively improve the contrast, while the RRM and SDD based methods offer little advantage for the haze removal task; the contrast of the images obtained by RRM and SDD is lower than that of the proposed method. Figures 9 and 10 show the enhancement results of the third and fourth degraded images. Figure 10(f, g) shows better colour saturation than the other enhanced results. This is because RRM and SDD were proposed for the low-light image enhancement task, and the fourth haze image also has the characteristics of a low-light image. For the haze removal task, the proposed method shows advantages in colour restoration, detail retention and contrast improvement.

Evaluation on low-light image
To verify the performance of the proposed method on the low-light image enhancement task, two low-light images are tested in our experiments, and the results are given in Figures 11 and 12. It can be seen that RRM, SDD and the proposed method all improve the visibility of the low-light images. However, compared with our results, colour oversaturation occurs in the RRM and SDD based results. Similar to the previous observations, the proposed method has advantages in detail retention, as shown in the red rectangular region: more details and textures are preserved and enhanced in our result.

Evaluation on underwater image
Underwater images have more serious degradation problems than haze and low-light images, usually suffering from poor resolution, colour imbalance and blurring. It can be seen that colour distortion and lack of contrast occur in Figures 13(a) and 14(a). Compared with Figure 13(a), Figure 14(a) has more serious colour distortion, and the DCP, RRM and SDD based methods fail in colour restoration. From Figures 13 and 14, it can be seen that the proposed method achieves better visual effects in the underwater image enhancement task and is superior to the compared enhancement algorithms in colour restoration and detail enhancement, which shows that the proposed method has a wide range of applications in computer vision tasks.

Quantitative comparison
To better verify the performance of the proposed method, the objective performance is quantitatively measured by UCIQE [38] and UIQM [39]. UCIQE metric quantifies the non-uniform colour cast, blurring and low-contrast problem in degraded image, while UIQM is a human visual system-based image quality metric to measure the colourfulness, sharpness and contrast features.
The image quality evaluation metric UCIQE is defined as

$$UCIQE = c_1\,\sigma_c + c_2\,con_l + c_3\,\mu_s, \qquad (22)$$

where $\sigma_c$ is the standard deviation of chroma, $con_l$ is the contrast of luminance, $\mu_s$ is the average saturation, and $c_1$, $c_2$ and $c_3$ are the weighting coefficients. The UIQM metric is defined as

$$UIQM = c_1\,UICM + c_2\,UISM + c_3\,UIConM, \qquad (23)$$

in which the colourfulness, sharpness and contrast measures are linearly combined. UICM measures the overall colourfulness and is computed from the red-green and yellow-blue opponent colour components. UISM measures the sharpness of edges based on the enhancement measure estimation

$$EME = \frac{2}{k_1 k_2}\sum_{l=1}^{k_1}\sum_{k=1}^{k_2} \log\left(\frac{I_{\max,k,l}}{I_{\min,k,l}}\right),$$

which evaluates the contrast information within $k_1 \times k_2$ image blocks. UIConM measures the contrast degradation. Details can be found in [38] and [39]. From Equations (22) and (23), it can be seen that these two metrics effectively measure colour and contrast information: the higher their values, the better the colour, edge detail and contrast of the image. Table 1 reports the quantitative scores of each method on the test images. It can be observed that the proposed method achieves the best scores in both metrics on most images and the second-best scores in some cases. The SSR and MSRCR based methods give the second-best scores on the eighth image, which is consistent with the visual results. The SDD based method gives the second-best scores on the fifth and sixth images, which means that this method performs well in low-light image enhancement tasks. It can be concluded from Table 1 that the overall performance of the proposed method on haze, low-light and underwater image enhancement tasks is better than that of the compared algorithms, showing advantages in both visual perception and quantitative assessment.
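As a rough illustration, UCIQE can be sketched on Lab-space inputs as follows. The weights are the ones published by Yang and Sowmya [38]; the top/bottom 1% luminance-contrast and the per-pixel saturation computations here are simplified assumptions:

```python
import numpy as np

def uciqe(L, a, b, c=(0.4680, 0.2745, 0.2576)):
    """Sketch of UCIQE on Lab inputs: L in [0, 100], a/b are chroma channels."""
    chroma = np.sqrt(a**2 + b**2)
    sigma_c = chroma.std()                    # standard deviation of chroma
    Ls = np.sort(L.ravel())
    n = max(1, int(0.01 * Ls.size))
    con_l = Ls[-n:].mean() - Ls[:n].mean()    # top 1% minus bottom 1% luminance
    mu_s = np.mean(chroma / (L + 1e-8))       # mean saturation (L assumed > 0)
    return c[0] * sigma_c + c[1] * con_l + c[2] * mu_s
```

The metric rewards chroma spread, a wide luminance range and high saturation, so a well-exposed colourful result scores higher than a flat, washed-out one.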

CONCLUSION
An adaptive colour restoration and detail retention-based method for image enhancement tasks is proposed in this paper.
The proposed method has two main stages: colour restoration and detail retention. For colour restoration, combined with the Reinhard-based colour transfer theory, we propose an adaptive colour restoration method based on analysing the colour distortion coefficient. Next, the details of the colour-restored result are extracted by MuGIF for detail retention. The experimental results demonstrate that the proposed method outperforms the compared methods in visual effect and quantitative assessment on haze, low-light and underwater image enhancement tasks. In future work, we will consider how to adjust the parameters in colour restoration adaptively to make the method more suitable for different types of distorted images.