Low-light image enhancement based on exponential Retinex variational model

Low-light images suffer from residual noise, low contrast, and limited detail information. To address these problems, this paper proposes a new Retinex variational model. According to Retinex theory, the illumination and reflectance components decomposed from the original image must be estimated. To better preserve edge information and texture richness and to prevent artefacts, the exponential forms of local variation deviation and total variation are used as the illumination prior and reflectance prior, respectively, and mixed norms are used to constrain them, so that the illumination information and texture details of the image are handled more effectively. The bright channel prior is then used to improve the colour reproduction of the original image, thereby constructing the objective function, and finally an alternating iterative optimization method is used to find the optimal solution of the proposed model. Experiments show that, compared with other existing image enhancement methods, the method proposed here improves image contrast, overcomes halo artefacts and colour distortion, agrees better with human vision, and produces better quantitative performance.


INTRODUCTION
Due to insufficient light in conditions such as night, cloudy, rainy, or foggy weather, and backlit shooting, captured images are prone to problems such as low dynamic range, loss of detail, and severe noise, which degrade image quality and limit the performance of computer vision systems [1][2][3]. To provide high-quality images for subsequent image analysis and processing, low-light images must be enhanced.
With the development of digital image processing technology, many low-light image enhancement algorithms have emerged. Spatial-domain algorithms are relatively simple and easy to understand; commonly used methods include histogram equalization [4], linear spatial filtering, and non-linear spatial filtering. Histogram equalization methods are mainly divided into two categories: global and local.
The physical model-based approach exploits the property that each pixel of an inverted low-light image resembles a foggy image [9], so the idea of defogging is adopted to enhance low-light images. Tang et al. [10] proposed a method that reduces strong-light areas, suppresses halos, and improves non-local denoising to enhance the image, but it results in over-saturation. The physical lighting model for low-light image degradation proposed by Yu et al. [11] uses a Gaussian surround function and iterative estimation of the model parameters to restore low-light images and reproduce colour consistency, but it cannot enhance local detail areas. Xiao et al. [12] used guided filtering and median filtering to estimate atmospheric light and obtain edge information, and then used gamma correction to enhance image brightness; this method is susceptible to uneven ambient light.
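For reference, global histogram equalization, the simplest of the spatial-domain methods mentioned above, can be sketched in a few lines of numpy (a minimal sketch for uint8 greyscale images; the function name is illustrative):

```python
import numpy as np

def global_hist_eq(img, levels=256):
    """Global histogram equalization for a uint8 greyscale image.

    Maps each grey level through the normalized cumulative histogram,
    which spreads the intensities over the full dynamic range.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]                                 # apply the lookup table
```

Local histogram equalization applies the same idea per tile; as noted above, both tend to amplify noise along with contrast, which motivates the model-based methods discussed next.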
Low-light image enhancement algorithms based on machine learning have entered a period of rapid development with the advent of the big-data era. Lore et al. [13] proposed the LLNet framework, which uses the stacked sparse denoising autoencoder (SSDA) method to identify the signal characteristics of the low-light original image, with two modules for contrast enhancement and denoising learning; however, the encoder has limited processing power and is only suitable for small-sized images. K. Zhang et al. [14] trained a convolutional neural network (CNN) denoiser and integrated it into a model-based optimization method to deal with inverse problems in low-level vision; the algorithm produces good Gaussian denoising, but its robustness is poor. Tai et al. [15] proposed a deep recursive residual network (DRRN) with a very deep network architecture and both global and local residual learning to alleviate the difficulty of training the network; however, the computational cost is high, a large amount of training data is required, and reconstruction artefacts are prone to occur. Dong et al. [16] proposed a single-image super-resolution (SR) deep learning method, which significantly improves speed and performance; however, the method is less flexible in handling denoising and deblurring.
The Retinex-based method is an application based on simulation of the human visual system. Retinex theory was proposed by Land [17] in 1971, and Land later proposed a path comparison method, but the path selection and starting position are uncertain, which affects the estimation of illumination, and the computational complexity is high. Subsequently, the single-scale Retinex (SSR) algorithm, the multi-scale Retinex (MSR) algorithm, and multi-scale Retinex with colour restoration (MSRCR), all based on centre-surround, appeared one after another, but their enhancement results still suffer from problems such as halos and unnatural colours.
A new variational model based on Retinex theory is proposed. The original low-light image is decomposed into reflectance and illumination. For the reflectance and illumination regularization terms, the exponential form of total variation (TV) and the exponential form of local variation deviation (LVD) are, respectively, applied, and the bright channel prior is added. Finally, an alternating iterative optimization algorithm is used to solve the model. Figure 1 shows an example of low-light image enhancement using this method. The original low-light image has low brightness and conveys little information, while the enhanced image can meet the basic needs of the human eye and computer vision systems; the illumination and reflectance images effectively reflect the spatial distribution of illumination and the texture information of the image. The experimental results demonstrate that the method enhances the brightness and contrast of the original low-light image while improving the detailed information of the image and maintaining its original naturalness. Both subjective and objective analyses show that the effect is better than that of existing methods.

Retinex model
The Retinex model decomposes an image into illumination and reflectance components. The relationship between the two is

I = R ∘ L,

where I denotes the original image captured by the human eye or a camera, L is the illumination component reflecting the intensity of ambient light, R is the reflectance component reflecting detailed information about the object itself, and '∘' denotes pixel-wise multiplication, which expresses the product relationship between the illumination and reflectance components. The main idea of Retinex theory is to estimate and remove the illumination component to obtain the reflectance component, yielding the image closest to the ground truth. From a mathematical perspective, this is an ill-posed problem. The traditional low-light image enhancement algorithms based on Retinex theory mentioned in the introduction all suffer from halo artefacts. Therefore, in 2003, Kimmel et al. [18] first proposed a Retinex variational model and used prior knowledge of reflectance images and Bayesian theory to add a penalty term that better handles uneven lighting. Zhang et al. [19] proposed a global sparse gradient-guided variational Retinex model, which uses global sparse regularization to estimate the gradient field of the illumination image and, because the illumination is assumed to be smooth, introduces a guide vector field term; this algorithm can effectively improve the image enhancement effect. In the Retinex variational model proposed by Jin et al. [20], the piece-wise constancy of reflectance is taken into account and a TV regularization term is introduced, which helps to reduce image noise and improve image quality. Febin et al. [21] proposed another Retinex variational model, which uses a non-local frame to represent reflectance and intensity changes to preserve more texture details and boundary regions.
Their model is perception based and is more efficient in enhancing and denoising remote sensing image data. The above algorithms achieve good low-light image enhancement compared with the traditional Retinex method, but there is still some room for improvement.
In existing Retinex-based variational models, the problem of estimating the illumination and reflectance components is transformed into a quadratic optimization problem with a closed-form solution. The objective function of previous methods can be reduced to

E(L, R) = ‖R ∘ L − I‖₂² + α φ(R) + β ψ(L),

where the first term is the fidelity term, a squared-error term between the product of the new illumination and reflectance and the original image. Based on the two assumptions that reflectance is piece-wise constant and illumination is spatially smooth, the second term φ(R) represents the spatial regularization term of the reflectance component R, and the third term ψ(L) represents the spatial regularization term of the illumination component L; the second and third terms reflect prior information about the image to be reconstructed. In existing algorithms for low-light image enhancement based on the Retinex variational model, the L1 and L2 norms [22][23][24], among others, can be used to constrain the illumination and reflectance components for more effective estimation.
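As a minimal illustration of the multiplicative model and of why inverting it is ill-posed, the composition and its naive inversion can be sketched as follows (the epsilon stabilizer and function names are implementation assumptions, not part of the model):

```python
import numpy as np

def compose(R, L):
    """Retinex image formation: I = R . L (pixel-wise product)."""
    return R * L

def reflectance(I, L, eps=1e-6):
    """Invert the model for R given an illumination estimate L.

    The division is stabilized with eps because L -> 0 in dark regions,
    which is exactly where the decomposition becomes ill-posed: tiny
    errors in L produce huge errors in R.
    """
    return I / (L + eps)
```

The variational models below avoid this direct division by estimating L and R jointly under priors.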

Structure and texture separation
According to Retinex theory, the original image can be viewed as the composition of a reflectance component and an illumination component: reflectance captures the texture details in the image, while illumination captures the structure and expresses the main properties of the image. Therefore, the original image can be decomposed into a texture layer and a structural layer [25]. The texture layer reflects the details of the image, such as wood grain and leaf veins, giving the image visual realism for the human eye, while the structural layer reflects information such as the illumination direction, greyscale values, and colour composition. Structure-texture decomposition preserves image edges and texture while maintaining structural characteristics, and most previous methods are based on assumptions about the characteristics of structure and texture. For example, Xu et al. [26] assumed that texture has the characteristic of 'oscillation' on a small scale and used a joint bilateral filter to separate structure and texture. Belyaev et al. [27] built on the TV model and incorporated the curvature change of the level sets into the weight of the regularization term to maintain weak edge structures. The TV model was first proposed by Rudin et al. [28] in 1992; it estimates the degree of texture, reduces excessive smoothing of image edges, and suppresses noise well. Cai et al. [29] proposed a method that maintains the LVD of the structure and obtained the variance correlation of the gradient change, which can serve as a basis for dividing structure from texture according to the strength of the correlation, where structure tends to be more strongly correlated.
In Retinex theory, changes in reflectance lead to large gradients, while changes in illumination lead to small gradients. Thus, in estimating structure and texture, Xu et al. [30] introduced the exponential form of the local derivative, where the exponential form of TV can be written as |∇L|^γ and the exponential form of LVD as

|∇L − (1/|Ω|) Σ_{y∈Ω} ∇L(y)|^γ,

where Ω is a local patch of size r × r (we set r = 3). It is also shown in [30] that the structural edges of the image are better extracted when the exponent is greater than 1, and the texture details of the image are better extracted when the exponent is less than 1. As can be seen in Figure 2, there is a visual difference between the enhanced image without the exponential form and the one using the exponential parameters proposed here. In the second row, using the exponential form, the illumination image extracts the structural part of the low-light image more precisely, smoothing the illumination further and avoiding unnecessary texture information, while in the third row the exponential form of the reflectance image extracts the texture of clouds and buildings more completely. Consequently, in the first row, the overall tonality of the enhanced image without the exponential form appears flatter, while the enhanced image with the exponential form shows stronger contrast and a richer sense of hierarchy in the sky and building areas. It can be concluded that appropriate exponential parameters can improve the contrast and naturalness of the image.

FIGURE 2 Comparison of the images obtained without and with the exponential form: (a), (c), and (e) are the enhanced, illumination, and reflectance images without the exponent; (b), (d), and (f) are the enhanced, illumination, and reflectance images with the exponent, respectively.
We will specifically analyse the effect of the exponential parameter size in subsequent experiments to select the best value.
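The two exponential priors can be illustrated per pixel as follows (a rough sketch: the forward-difference gradient, the border handling, and the box-filter patch mean are our own assumptions, not the authors' exact discretization):

```python
import numpy as np

def grad_x(img):
    """Forward-difference horizontal gradient (last column left at zero)."""
    g = np.zeros_like(img, dtype=np.float64)
    g[:, :-1] = img[:, 1:] - img[:, :-1]
    return g

def exp_tv(g, gamma):
    """Exponential total variation |grad|^gamma, per pixel."""
    return np.abs(g) ** gamma

def exp_lvd(g, gamma, r=3):
    """Exponential local variation deviation.

    Deviation of the gradient from its mean over an r x r patch, raised
    to the exponent gamma. Structure yields locally correlated gradients
    (small deviation); texture yields large deviation.
    """
    pad = r // 2
    gp = np.pad(g, pad, mode='edge')
    mean = np.zeros_like(g)
    # box-filter mean over the r x r neighbourhood
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            mean += gp[pad + dy: pad + dy + g.shape[0],
                       pad + dx: pad + dx + g.shape[1]]
    mean /= r * r
    return np.abs(g - mean) ** gamma
```

On a linear ramp, for example, the gradient is constant, so the LVD response vanishes in the interior while the TV response does not, which is precisely the structure/texture distinction exploited above.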

Bright channel prior
The dark channel prior is mostly used in image dehazing algorithms [31][32][33][34]. First, the model of a foggy image is

I(x) = J(x) t(x) + A (1 − t(x)),

where I(x) is the luminance value at pixel x in the original foggy image, J(x) is the luminance value of the fog-free image to be recovered, t(x) is the transmittance, and A is the atmospheric light.
In foggy areas, the brightness of the dark channel is higher, so the value of the dark channel can be used to describe the concentration of the fog, and the dark channel prior can be used to estimate the transmittance t(x) in the fog imaging model for image restoration. The dark channel is calculated as

J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ),

where J^dark(x) represents the dark channel image, J^c(y) represents one of the three colour channels of the image, and Ω(x) is the neighbourhood centred on pixel x.
According to the dark channel prior, in outdoor non-sky areas captured in good fog-free weather, there must exist some pixels whose intensity in at least one of the red, green, blue (RGB) colour channels is very low and tends to zero, that is, J^dark(x) → 0.
The bright channel prior [35] is a new idea inspired by the dark channel prior: in a small area of a well-exposed image, the maximum intensity value of some pixels is always close to 1. By analogy with the dark channel, the bright channel of an image J can be defined as

J^bright(x) = max_{y∈Ω(x)} ( max_{c∈{r,g,b}} J^c(y) ),

where J^bright(x) represents the bright channel image, J^c(y) represents one of the three colour channels of the image, and Ω(x) is the neighbourhood centred on x. The value of J^bright(x) can be used to determine whether an image is well exposed, and the bright channel prior can be used to adjust the local exposure of the image [36] to recover the detail of a well-exposed image, which we apply to the enhancement of low-light images.
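Both channel definitions reduce to a channel-wise extremum followed by a spatial min/max filter, which can be sketched with scipy (the patch size is illustrative; the paper additionally refines the bright channel with a guided filter, which is omitted here):

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dark_channel(J, size=15):
    """J^dark(x) = min over Omega(x) of the per-pixel channel minimum."""
    return minimum_filter(J.min(axis=2), size=size, mode='nearest')

def bright_channel(J, size=15):
    """J^bright(x) = max over Omega(x) of the per-pixel channel maximum."""
    return maximum_filter(J.max(axis=2), size=size, mode='nearest')
```

For a well-exposed image the bright channel sits near 1 almost everywhere, whereas for a low-light input it is uniformly small, which is what makes it usable as an exposure prior.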

Proposed model
Based on the discussion and analysis of the related work in Section 2, we propose a new variational Retinex model:

min_{L,R} ‖R ∘ L − I‖₂² + λ_l G(∇L) + λ_r B(∇R) + λ_b ‖L − I_B‖₂²,

where λ_l, λ_r, and λ_b are positive parameters, ∇ represents the gradient operator in the horizontal direction (x) and vertical direction (y), and G(∇L) and B(∇R) are the regularizers specified in the next subsection. The first term is a data fidelity term that keeps the product of R and L as close as possible to the observed image I. The second term is the illumination regularization term, which guarantees the spatial smoothness of the illumination. The third term is the reflectance regularization term, which guarantees the piece-wise smoothness of the reflectance. The fourth term is the bright channel prior, which constrains the distance between the illumination and the bright channel of the image, where we define I_B as

I_B(x) = max_{c∈{r,g,b}} ( max_{y∈Ω(x)} I^c(y) ).

When calculating the bright channel of the input low-light image, we use the guided filter [37,38] to refine the edges and prevent texture distortion and edge degradation.
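For concreteness, the four terms of the objective can be evaluated as in the sketch below. This is not the authors' implementation: the weight maps u and v stand in for the exponential LVD and TV priors specified in the following subsection (one shared map per component rather than one per gradient direction), the gradients are forward differences, and the default parameter values mirror those reported later in the paper.

```python
import numpy as np

def gradients(M):
    """Forward-difference gradients in x and y (borders left at zero)."""
    gx = np.zeros_like(M)
    gy = np.zeros_like(M)
    gx[:, :-1] = M[:, 1:] - M[:, :-1]
    gy[:-1, :] = M[1:, :] - M[:-1, :]
    return gx, gy

def objective(L, R, I, I_B, u, v, lam_l=0.001, lam_r=0.0001, lam_b=0.15):
    """Evaluate the four terms of the proposed model (a sketch)."""
    fidelity = np.sum((R * L - I) ** 2)            # ||R o L - I||_2^2
    Lx, Ly = gradients(L)
    Rx, Ry = gradients(R)
    illum = np.sum(u * (Lx ** 2 + Ly ** 2))        # weighted-L2 surrogate of the Lp prior
    refl = np.sum(v * (Rx ** 2 + Ry ** 2))         # exponential-TV-weighted L2 prior
    bright = np.sum((L - I_B) ** 2)                # bright channel prior
    return fidelity + lam_l * illum + lam_r * refl + lam_b * bright
```

With a perfect constant decomposition and I_B = L, every term vanishes; any mismatch in fidelity, smoothness, or exposure raises the energy.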
The L1 norm is currently a widely used regularization term. Its shape is sharp and it has sparse features, readily setting unimportant features directly to zero. However, the L1 norm has a limited range of application: computation on sparse vectors is inefficient, data with larger values are prone to biased estimates, and in some cases the desired approximate results cannot be obtained. Compared with the L1 norm, the Lp norm (0 < p < 1) has a better estimation effect for large values in the sparse solution, but the Lp regularization problem itself is non-convex and non-smooth; there are many ways to solve it, such as the K-nearest-neighbour-based prediction scheme mentioned in [39], but most are somewhat complex. The L2 norm penalizes large values more severely, so they can be estimated more effectively, and a model optimization problem with smaller parameters can be constructed; it applies to many types of datasets, reduces data migration, and enhances robustness to disturbances.
In recent years, it has also been found that mixing norms to form an objective function with several regularization terms gives better results than using a single norm. Therefore, we revise the use of norms: we use the L2 norm to constrain the reflectance and the Lp norm to constrain the illumination. (Figure 3 compares the L2 norm alone with the mixed L2/Lp regularization terms. From the illumination and reflectance images in the third and fourth columns, the algorithm with the mixed regularization terms captures illumination and texture details better, makes the light-shadow contrast at edges more distinct, and renders objects in the enhanced images more three-dimensional; for example, the joint of the statue's hand is portrayed more in line with the human sensory system.) To handle the non-convexity and non-smoothness introduced by the Lp norm, we use the iteratively reweighted least squares (IRLS) method [40], an effective linear least-squares approximation method. In addition, we use exponential LVD and exponential TV as the illumination prior and reflectance prior to maintain the smoothness of the structure and the details of the texture. The second and third terms are defined as

G(∇L) = ‖∇L‖_p^p (weighted by the exponential LVD, as detailed below),
B(∇R) = v ‖∇R‖₂²,

where ‖·‖₂ represents the L2 norm, ‖·‖_p represents the Lp norm, and γ_L and γ_R denote the respective exponential parameters.

FIGURE 3 Comparison of images obtained by using only the L2 norm and by using a mixture of L2 norm and Lp norm regularization terms: (a) original low-light image; (b)-(d) enhancement, illumination, and reflectance with L2-norm-constrained illumination and reflectance; (e)-(g) enhancement, illumination, and reflectance with L2 norm and Lp norm constraints.
For the second term of the proposed variational model, since IRLS will be used to minimize the Lp norm [41,42], similarly to the treatment in [43], the Lp norm is replaced by a weighted L2 norm to obtain ‖∇L‖_p^p = |∇L|^{p−2} ‖∇L‖₂². After that, to avoid a zero denominator in the fraction, we add a small constant ε and deform it to obtain

‖∇L‖_p^p ≈ u ‖∇L‖₂², with u = (|∇L − (1/|Ω|) Σ_{y∈Ω} ∇L(y)|^{γ_L} + ε)^{p−2},

which facilitates the representation of the subsequent optimization calculations, where the range of p is 0 < p < 1, and this paper sets p = 0.4. Similarly, for the third term, we adopt the conventional L2 norm and introduce TV in exponential form to define the final reflectance regularization term, obtaining B(∇R) = (|∇R|^{γ_R} + ε)^{−1} ‖∇R‖₂²; again, for ease of representation in the later optimization, we denote v = (|∇R|^{γ_R} + ε)^{−1}.
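The two IRLS weight maps can be sketched as follows (an illustrative sketch: the gamma defaults, the box-filter patch mean, and the single weight map per component are our assumptions; in the paper the weights are recomputed from the current L and R at each iteration):

```python
import numpy as np

def irls_weights(gL, gR, gamma_L=1.3, gamma_R=0.5, p=0.4, r=3, eps=1e-4):
    """IRLS weight maps for the two regularizers (a sketch).

    u = (|LVD(grad L)|^gamma_L + eps)^(p-2)   -- illumination, Lp via IRLS
    v = (|grad R|^gamma_R + eps)^(-1)         -- reflectance, exponential TV
    gL and gR are gradient maps of L and R; gamma_L > 1 favours structure
    and gamma_R < 1 favours texture, as discussed in the text.
    """
    pad = r // 2
    gp = np.pad(gL, pad, mode='edge')
    mean = np.zeros_like(gL)
    for dy in range(r):                 # box-filter mean of the gradient
        for dx in range(r):
            mean += gp[dy:dy + gL.shape[0], dx:dx + gL.shape[1]]
    mean /= r * r
    lvd = np.abs(gL - mean)             # local variation deviation
    u = (lvd ** gamma_L + eps) ** (p - 2)
    v = (np.abs(gR) ** gamma_R + eps) ** (-1)
    return u, v
```

Because p − 2 < 0, smooth illumination regions (small LVD) receive very large weights u and are forced flat, while sharp structural edges are weighted weakly and survive; v behaves analogously for the reflectance.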

Optimization
We organize the model and related definitions proposed in the previous section and simplify them to obtain

E(L, R) = ‖R ∘ L − I‖₂² + λ_l Σ u |∇L|² + λ_r Σ v |∇R|² + λ_b ‖L − I_B‖₂².

Because two unknowns are involved, the traditional calculation method no longer works. To better solve for the illumination and reflectance components of the model, we use an alternating minimization algorithm: the objective function with two variables is decomposed into two sub-objective functions, that is, an iterative loop over two independent sub-problems that updates both the illumination and reflectance variables. Setting the initial values L⁰ = I and R⁰ = I / L¹, the kth iteration solves

L^k = argmin_L ‖R^{k−1} ∘ L − I‖₂² + λ_l Σ u |∇L|² + λ_b ‖L − I_B‖₂²,
R^k = argmin_R ‖R ∘ L^k − I‖₂² + λ_r Σ v |∇R|².

Both sub-problems have closed-form globally optimal solutions. The specific algorithm is as follows: (1) we rewrite the L sub-problem in matrix form as

E(l) = ‖Diag(r) l − i‖₂² + λ_l (lᵀ D_xᵀ U_x D_x l + lᵀ D_yᵀ U_y D_y l) + λ_b ‖l − i_B‖₂²,

where l, r, i, and i_B are the vectorized forms of L, R^{k−1}, I, and I_B, D_x and D_y denote the Toeplitz matrices of the discrete gradient operators, and U_x and U_y denote the diagonal matrices containing the weights u_x and u_y. Taking the derivative of this formula and setting it to zero, the solution of L is

l = (Diag(r)² + λ_l (D_xᵀ U_x D_x + D_yᵀ U_y D_y) + λ_b E)^{−1} (Diag(r) i + λ_b i_B),

where E denotes the identity matrix.
(2) After obtaining the solution of L, we can similarly obtain the solution of R:

r = (Diag(l)² + λ_r (D_xᵀ V_x D_x + D_yᵀ V_y D_y))^{−1} Diag(l) i,

where V_x and V_y are the diagonal matrices containing the weights v_x and v_y. We repeat the above updates of L and R until ‖L^k − L^{k−1}‖ / ‖L^{k−1}‖ ≤ ε or the number of iterations reaches K. The more iterations, the less texture detail is lost from the original low-light image and the better the object structure is captured; the resulting image stabilizes after a certain number of iterations. We set the two parameter values to ε = 0.01 and K = 25.
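The L-update is a single sparse linear solve, which can be sketched with scipy.sparse as follows (a sketch under simplifying assumptions: one shared weight map U for both gradient directions, Neumann borders in the difference operators, and illustrative parameter defaults):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def diff_ops(h, w):
    """Sparse forward-difference operators D_x, D_y for an h x w grid."""
    n = h * w
    ex = np.ones(n)
    ex[w - 1::w] = 0                    # no difference across row ends
    Dx = sp.diags([-ex, ex[:-1]], [0, 1], shape=(n, n), format='csr')
    ey = np.ones(n)
    ey[-w:] = 0                         # no difference past the last row
    Dy = sp.diags([-ey, ey[:-w]], [0, w], shape=(n, n), format='csr')
    return Dx, Dy

def update_L(I, R, I_B, u, lam_l=0.001, lam_b=0.15):
    """Closed-form L sub-problem: one symmetric positive-definite solve.

    Solves (Diag(r)^2 + lam_l (Dx^T U Dx + Dy^T U Dy) + lam_b E) l
         = Diag(r) i + lam_b i_B, then reshapes l back to an image.
    """
    h, w = I.shape
    Dx, Dy = diff_ops(h, w)
    r, i, ib = R.ravel(), I.ravel(), I_B.ravel()
    U = sp.diags(u.ravel())
    A = (sp.diags(r * r)
         + lam_l * (Dx.T @ U @ Dx + Dy.T @ U @ Dy)
         + lam_b * sp.eye(h * w))
    b = r * i + lam_b * ib
    return spsolve(A.tocsc(), b).reshape(h, w)
```

The R-update has the same shape with Diag(l), the weights v, and no bright channel term, so one full iteration is just two such solves.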

Illumination adjustment
Since the reflectance obtained by Retinex is prone to over-enhancing low-light images, the illumination is generally adjusted to improve the overall visual effect of the image. For this, this paper adopts gamma correction [49] to modify the previously estimated illumination, in the basic form

G(L) = W (L / W)^{1/γ},

where γ is the gamma correction parameter, set to 2.2 following [29], G(L) represents the value of the corrected pixel, W = 255, and L represents the greyscale value of the current pixel. The final enhanced image is calculated as I′ = R ∘ G(L). When processing a colour image, we convert the RGB space to the hue, saturation, value (HSV) space, process the V-layer channel, and then convert the result back to RGB space. Computing in the HSV channel improves the realism of the image and reduces the computational complexity compared with processing each RGB channel.
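The correction and recombination step can be sketched as follows. Note one shortcut in this sketch: replacing the HSV value channel while keeping hue and saturation fixed is equivalent to scaling each RGB channel by the ratio of the new V to the old V, so no explicit colour-space round trip is needed (the function names and the epsilon are our assumptions):

```python
import numpy as np

def gamma_correct(L, gamma=2.2, W=255.0):
    """G(L) = W * (L / W)^(1/gamma): lifts the mid-tones of the illumination."""
    return W * (L / W) ** (1.0 / gamma)

def enhance_color(rgb, L, R, gamma=2.2, W=255.0, eps=1e-6):
    """Recombine I' = R . G(L) on the V channel only.

    V in HSV is the per-pixel channel maximum; scaling all three RGB
    channels by V_new / V_old changes V while preserving hue/saturation.
    """
    V_old = rgb.max(axis=2)
    V_new = np.clip(R * gamma_correct(L, gamma, W), 0, W)
    scale = V_new / (V_old + eps)
    return np.clip(rgb * scale[..., None], 0, W)
```

For a mid-grey pixel (L = 100) the corrected illumination rises to about 167, so dark regions are lifted substantially while values near W are left almost unchanged.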

Importance of the bright channel prior
The fourth term in the variational model proposed here is the bright channel prior. This term is added because it restrains the algorithm's pursuit of colour brightness and balances lightness and darkness against human visual perception. Specifically, we ran comparison experiments and enlarged the details: as shown in Figure 4, the image in the second column is the effect of setting λ_b = 0, and the image in the third column is the algorithm of this paper.
Although the overall brightness of the enhanced image is higher without the bright channel prior, this is likely to cause problems. For example, in the second column, typical overexposure can be seen at the top of the building in the first image; in the second image, because the brightness is raised too much, the shadows cast by the incident light become too light, which undermines the realism of the light source and reduces the three-dimensional appearance of objects in the image; and in the third image, edge artefacts appear where the black eaves meet the white wall of the house.

Effect of relevant parameters
Since, of the two components of the Retinex decomposition, the illumination captures the structure of the object, in order to better observe the effect of the exponential parameter in the illumination regularization term, in Figure 5 we set λ_r = 0 and λ_b = 0, and each row lists a different original low-light image and the corresponding illumination image. The first column is the original low-light image; from the second to the last column, we set different exponential parameters for comparison in order to choose the best one, with values 1, 1.1, 1.3, 1.5, 1.7, and 1.9. When the exponent is 1.3, the best illumination image is obtained: it captures the structure of the object well while avoiding the detail information of the texture. Ref. [30] pointed out that both TV and LVD can be used to extract the structure of the image, so we compare the two. We first set the exponent in the illumination regularization term to 1.3 and, as before, set λ_r = 0 and λ_b = 0. As shown in Figure 6, the first column is the original low-light image, and the second and third columns show the illumination layers obtained using the exponential form of TV and the exponential form of LVD as the illumination prior, respectively. Comparing the illumination layers of the same original low-light image, the structure of the illumination layer using the exponential form of LVD as the illumination regularization term is spatially smoother and more advantageous for subsequent image enhancement.
To further prove that the exponential-form LVD in the illumination regularization term has a positive effect on the enhanced images, we performed comparison experiments. As shown in Figure 7, the first column is the original low-light image, the second column is the enhanced image obtained using the exponential form of TV as the illumination prior, the third column is the enhanced image obtained using the exponential form of LVD as the illumination prior, and the fourth and fifth columns are the corresponding reflectance layer images. They serve the same purpose as the enlarged images in the second and fourth rows: a clearer comparison of details. The enhanced images using the exponential form of TV as the illumination prior appear anomalous at the sharp edges of both images, showing some jaggedness, while the enhanced image obtained using the exponential form of LVD as the illumination prior retains the characteristics of the original image at the edges.

FIGURE 7 Comparison of enhanced and reflectance images using local variation deviation (LVD) and total variation (TV) in exponential form: (a) original low-light image, (b) and (d) enhanced image and reflectance image obtained using TV in exponential form, respectively, (c) and (e) enhanced image and reflectance image obtained using LVD in exponential form, respectively.
For the selection of the values of the parameters λ_l, λ_r, and λ_b in the proposed new variational model, we first determine a candidate range based on empirical values, and then run extensive experiments to judge the degree of influence of each in order to select the best values. Figure 8 shows examples of the different effects brought about by changing the parameter values. From top to bottom in the first column, with λ_l = 0.01 and λ_b = 0.15, the value of λ_r is 1, 0.1, 0.01, 0.001, 0.0001, and 0.00001, respectively; as the value becomes smaller, the enhanced image becomes clearer, the human eye obtains more and more detailed information, and the result finally stabilizes. The second column sets λ_r = 0.00001 and λ_b = 0.15, with λ_l again ranging from 1 to 0.00001; as this parameter decreases, the treatment of local brightness improves and the overall brightness distribution becomes even, so the enhanced image handles the light source better and better and is more in line with the visual requirements of the human eye. In particular, in the framed part of the figure, the ceramic cup changes clearly from glaring white over its whole body to a natural, clear state. In the third column, with λ_l = 0.001 and λ_r = 0.0001, the values of λ_b are 10, 1, 0.1, 0.01, 0.001, and 0.0001, respectively; adjusting this value has a small impact on the enhanced image, and as it becomes smaller the contrast of the image slowly increases and the three-dimensional appearance of local areas is slightly enhanced. Finally, integrating the test results and the matching degree between the parameters, we fixed the parameter values as λ_l = 0.001, λ_r = 0.0001, λ_b = 0.15.

Illumination and reflectance estimation
According to Retinex theory, the original image is decomposed into the illumination component and the reflectance component. We mainly compare the methods of [29,43,45] with the method proposed here in terms of the illumination and reflectance components, as shown in Figure 9. The first column is the original low-light image, and the second to last columns are from [29,43,45] and the method proposed here, where the first, third, and fifth rows are the illumination images and the second, fourth, and sixth rows are the reflectance images. It can be seen from Figure 9 that the method proposed here strikes a balance between capturing structure and texture: the estimated illumination is spatially smoother, the estimated reflectance captures more essential details, and edges, such as the clouds in the figure, are treated more naturally and clearly. This also indicates that the regularization terms used here for the illumination and reflectance have significant advantages.

Algorithm comparison
FIGURE 9 Comparison of illumination and reflectance images: (b) illumination and reflectance image by [43], (c) illumination and reflectance image by [29], (d) illumination and reflectance image by [45], (e) illumination and reflectance image by our method.

The enhanced low-light images are compared overall with more advanced methods from recent years. As shown in Figure 10, from left to right are the original low-light images, the structure and texture aware Retinex model (STAR) method of [30], the weighted variational model (WVM) method of [45], the robust method of [48], the hybrid L2−Lp layer separation model method of [43], and the method of this paper. It can be seen that the method in [30] is prone to over-enhancement, causing colour distortion of the overall image and visual unnaturalness. The method in [45] destroys illumination information, resulting in a slightly under-enhanced image that lacks the three-dimensional appearance of objects; for example, the purple clouds in the image appear visually too flat. The method in [48] over-enhances, with high colour saturation, reduced clarity, and insufficient texture detail during the enhancement process, resulting in severe loss of local image information, for example at the petals of the flower and the veins of the leaf. The method in [43] is more consistent with the human eye's visual requirements in terms of colour, and the overall enhancement is better; however, edge artefacts tend to appear in some parts of the image, for example in the area of purple mixed with dark orange in the image with sunset clouds, and unnatural light-intensity transitions also occur, such as light leaking through green leaves, which forms an obvious line visually.
The method proposed here makes a balance between processing texture details and brightness, improves the contrast of the image, maintains the colour reproduction of the image, and brings the naturalness and realism of the image, with overall better subjective visual effects for the low-light image enhancement.
We have also made quantitative comparisons between the method proposed here and other methods. Since we are enhancing low-light images and the ground truth is usually unknown, instead of using common full-reference image quality evaluation methods such as SSIM [50], we chose the Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE) and the Natural Image Quality Evaluator (NIQE), two blind image quality evaluation algorithms, as objective evaluation metrics. The BRISQUE [51] algorithm first extracts mean-subtracted contrast-normalized (MSCN) coefficients, then calculates the feature vector, and finally uses a support vector machine to perform regression; the lower the image quality score obtained, the better the image quality. We randomly select images from the collected dataset for quality evaluation and take the average. As can be seen from Table 1, the method proposed here does have numerical advantages; only in the second column is its value larger than that of [30], but it is still the second smallest value. Combined with the subjective evaluation of image quality, for some images the method of [30] has a lower value in the BRISQUE evaluation metric, but it has a significant disadvantage in colour reproduction. The NIQE [52] algorithm extracts the statistical features of distortion-free natural-scene images, and again the lower the quality score obtained, the better. We randomly selected images for quality assessment in the same way, and as can be seen in Table 2, the scores of the method proposed here are still low. The smaller the values of BRISQUE and NIQE, the better the result; the bold entries in Tables 1 and 2 are the minimum values.
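The MSCN step at the heart of BRISQUE can be sketched as follows (a minimal illustration of the coefficient extraction only, not the full BRISQUE pipeline; the Gaussian window width `sigma` and the stabilizing constant `c` are conventional choices here, not values taken from [51]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a
    grayscale image: (I - mu) / (sigma_local + c), where mu and
    sigma_local are Gaussian-weighted local mean and local std."""
    img = np.asarray(image, dtype=np.float64)
    mu = gaussian_filter(img, sigma)                       # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu      # local variance
    sigma_local = np.sqrt(np.abs(var))                     # local std
    return (img - mu) / (sigma_local + c)                  # c avoids division by zero
```

For natural, undistorted images the resulting coefficients are approximately zero-mean and unit-variance-like; BRISQUE fits a generalized Gaussian to them to build its feature vector.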

APPLICATION
The Retinex variational model based on the exponential form proposed here can be effectively used in computer vision: it can greatly improve the quality of low-light images taken at night and on cloudy days, and also makes it easier to obtain key information from them, improving the accuracy of tasks such as target detection and recognition. Figure 11 shows the result of face detection using the Betaface platform with the low-light image and the enhanced image uploaded. The original low-light image is from the ExDark dataset; the face is not detected in the original image due to its low brightness, while the enhanced image obtained using the algorithm proposed here is significantly brighter and the face is successfully detected. Figure 12 shows the results of license plate recognition based on the AI Studio platform. The original low-light image is from the Yantai Forum; the key information in the image enhanced by our method is much clearer, and the recognition result obtained is accurate.

CONCLUSION
An algorithm based on the Retinex variational model in exponential form is proposed to enhance low-light images. The exponential form of local variation deviation (LVD) is fused into an Lp-norm constraint on the illumination, the exponential form of total variation (TV) is fused into an L2-norm constraint on the reflectance, and the bright channel prior is added, working together to improve the accuracy of the estimation of illumination and reflectance. The results show that the algorithm maintains colour and brightness recovery, retains texture detail, and significantly enhances the visibility and naturalness of objects and scenes in the enhanced image, which is superior to existing methods. In addition, in experiments on a large number of low-light images, we also found that the enhancement effect on indoor low-light images is not as good as that on outdoor images, and compared with other methods the improvement achieved by the proposed method there is not significant. Therefore, the enhancement method for indoor low-light images needs further improvement.
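Schematically, the conclusion above can be summarized by an objective of the following form (an illustrative sketch only; the weights $\alpha$, $\beta$, $\gamma$ and the exact coupling of the exponential terms are placeholders, not the paper's precise formulation):

```latex
\min_{L,\,R}\;
\underbrace{\|L \circ R - S\|_2^2}_{\text{data fidelity}}
\;+\; \alpha \,\big\| e^{-\mathrm{LVD}(L)} \circ \nabla L \big\|_p^p
\;+\; \beta \,\big\| e^{-|\nabla R|} \circ \nabla R \big\|_2^2
\;+\; \gamma \,\| L - B \|_2^2
```

where $S$ is the observed low-light image, $\circ$ denotes element-wise multiplication, and $B$ is the illumination estimate given by the bright channel prior; the model is then solved by alternating iterative optimization over $L$ and $R$.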