Low‐light image enhancement based on Retinex decomposition and adaptive gamma correction

Funding information: National Natural Science Foundation of China, Grant/Award Numbers: 61771339, 61672378

Abstract: Low-light images suffer from poor visibility and noise. In this paper, a low-light image enhancement method based on Retinex decomposition is proposed. A pyramid network is first utilized to extract multi-scale features to improve the quality of Retinex decomposition. Then the decomposed illumination is refined via an adaptive Gamma correction network to handle non-uniform illumination, while the decomposed reflectance is refined with a lightweight network. Finally, the enhanced image is obtained by element-wise multiplication between the refined illumination and reflectance components. Quantitative and qualitative experiments demonstrate the superiority of our method over state-of-the-art image enhancement methods.

could enhance the image contrast instead of improving the low light condition.
Another category of methods enhances low-light images with dehazing algorithms. The basic idea proposed in [5] can be illustrated as follows: first, the original low-light image is inverted into a negative image, which resembles a foggy image; then a fog suppression algorithm is applied to dehaze it; the final result is obtained by inverting the dehazed image back. Along this line, Wang et al. [6] put further insight into image enhancement: the inverted image is processed for noise removal and detail preservation. Furthermore, an adaptive enhancement parameter is adopted in the dark channel prior dehazing process to enlarge contrast and prevent over/under enhancement. Inevitably, these dehazing-based methods can produce unrealistic results owing to the lack of theoretical foundation.
Among these attempts, the Retinex-based methods, which decompose images into reflectance and illumination components based on Retinex theory [13], have attracted much attention. There are two strategies to obtain the enhanced results. The first strategy directly uses the decomposed reflectance as the enhanced result [14][15][16]. However, it is unreasonable to treat the reflectance alone as the enhanced result. The other strategy first adjusts the illumination and then obtains the enhanced result by element-wise multiplication of the adjusted illumination and reflectance [8,9,17], which is more in accordance with Retinex theory. However, these methods rely on hand-crafted priors for the under-constrained decomposition problem and may not generate good results for diverse low-light images.
Meanwhile, with the great success of convolutional neural networks (CNNs) in image restoration, many CNN-based image enhancement methods have been proposed [10][11][12]. Lore et al. [10] proposed a stacked sparse denoising auto-encoder to improve brightness and remove noise, but the strategy attained limited improvements on very dark images. Inspired by CNNs, Shen et al. [11] implemented the MSR algorithm with a CNN architecture and proposed MSR-net to enhance low-light images. Following Retinex decomposition theory, Wei et al. [12] proposed two sub-CNNs to decompose and enhance low-light images, respectively. However, their single-scale decomposition network cannot produce satisfying decomposition results, which further degrades the quality of the final enhancement. The results produced by [12] still exhibit obvious noise, colour distortion and illumination distortion.
Due to the effectiveness of Retinex theory, we propose a CNN to enhance low-light images. The CNN consists of two stages, which deal with Retinex decomposition and restoration, respectively. We take paired low/normal-light images as inputs of the Retinex decomposition stage during training, since there are no ground-truth illumination and reflectance maps to serve as labels. The estimated illumination maps are then fed into a restoration network to correct the illumination, and the produced reflectance maps are also refined in the restoration stage. Figure 1 shows the enhanced results of our method on images with different levels of brightness, including extremely low-light, slightly low-light and non-uniform illumination images. Finally, the refined illumination and reflectance maps are recombined into high-quality enhanced results. Our main contributions can be summarised as follows:
• We first apply a pyramid decomposition network for the Retinex decomposition in low-light image enhancement. With the pyramid network, we extract multi-scale and multi-frequency feature maps, which significantly improve the decomposition and enhancement performance.
• We propose a restoration network to refine the decomposed reflectance and adjust the illumination component. The illumination is adjusted by adaptive gamma correction. Compared with the fixed gamma correction used in previous methods [8,17], pixel-wise adaptive gamma correction is more powerful in dealing with non-uniform illumination. With the refined reflectance and adjusted illumination, we obtain the enhancement results via pixel-wise multiplication.
• Experimental results demonstrate that our method generates enhancement results with less noise, sharper edges, and weaker colour distortion, and outperforms several state-of-the-art enhancement methods.
In the following, Section 2 describes the variational Retinex model, deep learning networks and gamma correction as theoretical background. Section 3 introduces our method. The ablation study and comparisons with state-of-the-art methods are shown in Section 4. Section 5 concludes this paper.

RELATED WORK
In this section, we give a brief overview of related works on Retinex-based decomposition methods, deep learning based methods and gamma correction.

Retinex-based methods for enhancement
The Retinex-based methods have attracted much attention. There are two kinds of approaches to obtain the enhanced results. The first kind directly utilizes the decomposed reflectance as the enhanced result. The decomposition itself, however, is a highly ill-posed problem, so several variation-based approaches have been proposed that introduce hand-crafted priors to constrain the under-constrained problem.
To enhance the contrast of a low-light image, Jobson et al. proposed single-scale Retinex (SSR) [14]. They defined the ratio of the intensity value at the centre to the average intensity values as the reflectance of the low-light image, and estimated the illumination component by Gaussian low-pass filtering. To improve the performance of SSR, multi-scale Retinex (MSR) [15] was proposed, which uses multiple Gaussian kernels with different standard deviations to obtain a weighted summation of multiple illumination components, avoiding the halo effect near edges. Jobson et al. further extended their method with colour correction (MSRCR) by applying a colour restoration function [18], which generates better enhanced results than the methods mentioned above. These methods directly use the decomposed reflectance as the enhanced result.
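As an illustration, the SSR idea above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the kernel radius, sigma and epsilon are illustrative choices, and the input is assumed to be a single-channel image normalised to [0, 1].

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian low-pass filter (the illumination estimate)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    blurred = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, blurred, k, mode="same")

def single_scale_retinex(img, sigma=1.0, eps=1e-6):
    """SSR: reflectance as the log ratio of the image to its
    Gaussian-smoothed illumination estimate."""
    illumination = gaussian_blur(img, sigma)
    return np.log(img + eps) - np.log(illumination + eps)
```

The log-domain output is typically rescaled (e.g. percentile-stretched) before display; MSR simply averages several such outputs computed with different sigmas.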
Many variational approaches introduce constraints on priors of both the reflectance and illumination components. Kimmel et al. [19] first proposed a variational model for this problem: they introduced an ℓ2 term to minimise the gradient of the illumination component, which also forces the reflectance values to be smaller than the illumination values. To reduce amplified noise, Ma et al. [20] introduced an anisotropic total variation (TV) prior to constrain the high-frequency component, which represents the reflectance. Furthermore, a simultaneous reflectance and illumination estimation (SRIE) method was proposed [21], in which the illumination and reflectance are estimated first and the enhanced image is obtained by manipulating the illumination component. Instead of ignoring colour information, Yue et al. [17] proposed a weighted ℓ1 regularisation by introducing colour similarity constraints. Zhang et al. [22] proposed perceptually bidirectional similarity (PBS) to constrain the exposure, colour and textures to reduce visual artifacts. To obtain better decomposition performance for image enhancement, Hao et al. [23] proposed a semi-decoupled method to estimate illumination and reflectance.

Gamma correction for illumination adjustment
In order to improve brightness, many nonlinear mapping functions have been proposed. Among them, the gamma function is the most widely used image enhancement method, controlled by a single parameter γ. Xu et al. [24] utilized gamma correction for image enhancement; their experimental results showed that gamma correction can increase image contrast and proved its effectiveness and reliability. Huang et al. [25] proposed an automatic transformation method that improves brightness through gamma correction weighted by the probability distribution of illumination pixels; this kind of gamma correction with a weighting distribution can automatically and adaptively enhance image contrast. As Retinex theory proved reliable and effective, some researchers extracted the illumination component from the low-light image and then applied gamma correction to it [17,21]. Yue et al. [17] proposed a model with proper constraints, solved it with the split Bregman algorithm to obtain the illumination component, and adopted traditional gamma correction to adjust the illumination.
Gamma correction is limited, although it is reliable and effective. As shown in Figure 2, dark areas remain when γ is small, and bright areas lose many details when γ is large. In non-uniform brightness cases, traditional gamma correction cannot be applied properly. Therefore, we propose to learn a gamma matrix G using a CNN instead of a fixed gamma parameter γ.
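The limitation of a single global γ can be seen directly. The sketch below (the pixel values and exponent are illustrative, not taken from the paper) applies one power-law curve to a dark and a bright pixel at once, showing that the exponent chosen to brighten the dark pixel simultaneously pushes the bright pixel towards saturation:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Classic power-law mapping for an image normalised to [0, 1]:
    gamma < 1 brightens the image globally, gamma > 1 darkens it."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

# One global gamma is a trade-off: 0.05 is lifted to roughly 0.30,
# but 0.8 is simultaneously pushed above 0.9, flattening highlights.
dark_and_bright = np.array([0.05, 0.8])
corrected = gamma_correct(dark_and_bright, 0.4)
```

A per-pixel gamma matrix removes this trade-off by letting each region choose its own exponent.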

Deep learning-based methods for enhancement
Low-light image enhancement net (LLNet) [10] aimed at improving image contrast and removing noise. The training data consisted of corrupted images and the corresponding ground truth, where the corrupted images were generated from the ground truth through gamma correction and Gaussian noise addition. A stacked sparse denoising auto-encoder was adopted to improve brightness enhancement and noise removal. Considering that MSR [11] is an effective strategy for image enhancement, Shen et al. designed a convolutional neural network to directly learn an end-to-end mapping between low-light and enhanced images; the framework was built on the assumption that multi-scale Retinex can be regarded as a feedforward convolutional neural network with different Gaussian convolution kernels. Instead of directly learning an image-to-image mapping, a photo enhancer [26] introduced illumination information as an intermediate representation. Moreover, the researchers proposed a new dataset of 3000 expert-retouched input/output image pairs, which was of great significance in improving enhancement performance. To improve lightness and also remove the noise hidden in dark regions, KinD [27] introduced a flexible illumination adjustment network and a restoration network for low-light image enhancement.
All the attempts mentioned above show the effectiveness of both Retinex-based and deep learning-based methods. We propose a CNN to enhance low-light images, consisting of two stages that deal with Retinex decomposition and restoration, respectively. The images are decomposed into illumination and reflectance components in the Retinex decomposition stage. In the restoration stage, the estimated illumination and reflectance components are fed into the restoration network to be corrected and refined, respectively, which restores the enhanced results. The proposed framework thus includes two stages, i.e., Retinex decomposition and image restoration.

THE PROPOSED METHOD
Based on Retinex theory, an image S can be decomposed into reflectance R and illumination L:

S = R • L,

where • represents the element-wise multiplication. In fact, low-light image enhancement can be seen as a procedure for adjusting the illumination component independently of the reflectance, because it aims to improve the brightness to obtain a normal-light image, while the reflectance represents the intrinsic appearance of the scene objects. However, directly decomposing an input image into reflectance and illumination components [28][29][30][31] is an ill-posed problem that can induce unrealistic results to some extent [32].
Since Retinex decomposition is a highly under-constrained problem and there are no ground-truth labels for the decomposition process, we utilize normal/low-light image pairs, which share the same reflectance component, to constrain the decomposition. We then utilize a reflectance refinement sub-network to refine the reflectance and an adaptive gamma estimation network to adjust the illumination. The final enhanced results are obtained via element-wise multiplication of the refined reflectance and illumination, as shown in Figure 3.
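The two-stage pipeline described above can be summarised as a composition of three operations. In the following sketch the learned sub-networks are replaced by hypothetical stand-ins (a max-channel illumination estimate, an identity refinement, and a fixed-gamma adjustment), so it illustrates only the data flow, not the actual networks:

```python
import numpy as np

def enhance(S_low, decompose, refine_reflectance, adjust_illumination):
    """Retinex-style enhancement: decompose, refine each component,
    then recombine by element-wise multiplication.

    The three callables stand in for the learned sub-networks."""
    R, L = decompose(S_low)
    R_ref = refine_reflectance(R)
    L_adj = adjust_illumination(L)
    return R_ref * L_adj          # element-wise multiplication

# Toy stand-ins on a synthetic dark image.
S = np.random.default_rng(0).uniform(0.0, 0.2, size=(8, 8, 3))
L0 = S.max(axis=2, keepdims=True) + 1e-6   # max-channel illumination
out = enhance(S,
              decompose=lambda s: (s / L0, L0),
              refine_reflectance=lambda r: r,
              adjust_illumination=lambda l: np.power(l, 0.4))
```

Since L0 < 1 here, raising it to the power 0.4 brightens it, so every output pixel is at least as bright as the input.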
Note that the proposed method is inspired by Retinex-Net [12], and we adopt the same loss function in the decomposition stage as that used in Retinex-Net. However, there are two main differences between our approach and Retinex-Net. First, in the decomposition stage, we utilize a pyramid decomposition network instead of the five convolutional layers used in Retinex-Net. Second, in the restoration stage, we propose an adaptive gamma estimation sub-network and a reflectance refinement sub-network, which lead to better enhancement performance than the encoder-decoder structure used in Retinex-Net.

Network architecture
Retinex decomposition is the foundation of our enhancement: its results directly affect the enhancement results. However, there are no ground-truth illumination and reflectance maps to constrain the network in our problem, so obtaining good Retinex decomposition results is a challenge. Inspired by [12,33], we use paired low-light and normal-light images as inputs of the decomposition network, since the low-light image S_L and normal-light image S_N of the same scene have the same reflectance. They are fed into two decomposition networks with shared weights. In this way, only the low-light image is needed in the testing stage.
We further improve the quality of Retinex decomposition with a pyramid network architecture. The effectiveness of multi-scale information has been demonstrated in many tasks, such as object detection [34], image deraining [35], image super-resolution [36][37][38] and neural style transfer [39]. This motivates us to apply a pyramid decomposition network for low-light image enhancement. As shown in Figure 3, the input image S goes through a series of down-sampling operations to obtain different scales and frequency bands {S_0, S_1, S_2, …, S_n}, where S_k = (G * S_{k−1})↓, G represents a Gaussian blur kernel and ↓ represents down-sampling. S_{k−1} always has higher frequency content than S_k. In this work, we set n = 4; namely, we utilize a pyramid network with four scales. Each scale and frequency band is then taken as input by its corresponding ResBlock network, which consists of eight sequentially concatenated Conv(3×3)-ELU-Conv(3×3)-ELU units with skip connections. Finally, the upsampled output of each ResBlock is combined with the output of the upper ResBlock recursively to progressively reconstruct the high-frequency residuals. The output is a four-channel image, where the first three channels constitute the reflectance component and the last channel is the illumination component. For more information, please refer to
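The blur-then-downsample construction of {S_0, …, S_n} can be sketched as follows. This is a minimal sketch for a single-channel image; the 3-tap binomial kernel is an illustrative choice, not necessarily the Gaussian kernel used in the paper:

```python
import numpy as np

def blur(img, k=np.array([1.0, 2.0, 1.0]) / 4.0):
    """Separable 3-tap binomial (approximately Gaussian) blur."""
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def gaussian_pyramid(img, n=4):
    """Build {S_0, ..., S_n} with S_k = downsample(G * S_{k-1}).
    Each level halves the resolution and keeps lower frequencies,
    so S_{k-1} always carries higher-frequency content than S_k."""
    levels = [img]
    for _ in range(n):
        levels.append(blur(levels[-1])[::2, ::2])
    return levels
```

In the paper, each level of such a pyramid then feeds its own ResBlock, and the per-level outputs are upsampled and merged recursively.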

Loss functions
We use the following compound loss function to train the Retinex decomposition network:

ℒ_D = ℒ_rec + λ_m ℒ_recm + λ_r ℒ_con + λ_s ℒ_s,

where ℒ_rec is the reconstruction loss, ℒ_recm denotes the mutual reconstruction loss, ℒ_con represents the consistency loss of reflectance between low/normal-light image pairs, and ℒ_s denotes the smoothness loss for the illumination component. λ_m, λ_r and λ_s are weighting parameters. In this paper, we set λ_m = 0.001, λ_r = 0.1 and λ_s = 0.01 experimentally.
The reconstruction loss ℒ_rec and mutual reconstruction loss ℒ_recm are defined as:

ℒ_rec = ‖R_L • L_L − S_L‖_1 + ‖R_N • L_N − S_N‖_1,
ℒ_recm = ‖R_N • L_L − S_L‖_1 + ‖R_L • L_N − S_N‖_1,

where L_L and L_N are the decomposed illumination components of the inputs S_L and S_N, R_L and R_N are the decomposed reflectance components of S_L and S_N, and ‖·‖_1 is the ℓ1 norm. The reflectance R_L of the low-light image and the reflectance R_N of the normal-light image are assumed to be the same, so the consistency loss ℒ_con can be formulated as:

ℒ_con = ‖R_L − R_N‖_1.

Illumination should be locally smooth while preserving image structures. Inspired by the work in [12], we adopt the following smoothness loss:

ℒ_s = ‖∇L_L • e^(l|∇R_L|)‖_1 + ‖∇L_N • e^(l|∇R_N|)‖_1,

where ∇ comprises horizontal and vertical gradients. The factor e^(l|∇R|) regulates the smoothness level based on the gradient of the reflectance: areas with large reflectance gradients should permit discontinuous illumination. l is set to −10. Since the decomposition result is vital for the subsequent restoration process, the training of the decomposition network should be robust and stable: at the beginning of training the gradient increases sharply, which can cause instability, and outlier data can cause oversensitivity. The work in [40] demonstrates that the L1 loss is less sensitive than the L2 loss, so we adopt the L1 loss in the decomposition stage.
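A plausible NumPy reading of the reconstruction and structure-aware smoothness terms is sketched below, written for single-channel maps; the exact gradient operator and normalisation in the paper may differ:

```python
import numpy as np

def grad(x):
    """Horizontal and vertical forward differences (zero at the border)."""
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def recon_loss(R, L, S):
    """L1 reconstruction loss: || R o L - S ||_1 (mean-reduced)."""
    return np.abs(R * L - S).mean()

def smoothness_loss(L, R, l=-10.0):
    """Structure-aware smoothness: illumination gradients are penalised,
    but the penalty decays as exp(l * |grad R|) where the reflectance
    has strong edges, allowing illumination discontinuities there."""
    Lx, Ly = grad(L)
    Rx, Ry = grad(R)
    return (np.abs(Lx * np.exp(l * np.abs(Rx))).mean()
            + np.abs(Ly * np.exp(l * np.abs(Ry))).mean())
```

With l = −10, a reflectance edge of magnitude 0.5 shrinks the local illumination-smoothness penalty by a factor of e^−5, which is the mechanism that preserves structure.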
The decomposition results are shown in Figure 4: a low-light image is properly decomposed into reflectance and illumination components, which are then refined and adjusted in the following restoration stage for enhancement.

Image restoration
We observe that the decomposed reflectance of the low-light image still contains some noise due to the reconstruction loss function. Therefore, we refine the reflectance via a lightweight sub-network. The decomposed illumination of S_L is corrected via adaptive gamma correction, whose parameters are estimated by an encoder-decoder network. In the following, we give details of the two sub-networks.

Adaptive gamma correction
Gamma correction is a traditional strategy to manipulate illumination, where a fixed correction parameter γ is usually adopted for the whole image, namely

L̃_L = (L_L)^γ,

where L̃_L is the adjusted illumination. However, real images often have non-uniform illumination, while suitable illumination is important for the final enhanced image. Therefore, we propose a gamma estimation network to learn an adaptive correction parameter for each pixel, which makes the adjusted illumination not only properly bright but also structure-preserving. We adopt an encoder-decoder architecture [41]. As shown in Figure 3, there are three down-sampling convolution layers and three up-sampling convolution layers, all followed by ReLU activation. Skip connections are introduced from each down-sampling convolution layer to its corresponding mirrored up-sampling convolution layer. Through this network, we obtain a gamma matrix G of the same size as L_L, and the refined illumination is obtained as follows:

L̃_{L,i} = (L_{L,i})^{G_i},

that is, for each pixel i, the corrected illumination L̃_{L,i} is L_{L,i} raised to the power G_i. The illumination adjustment results are shown in Figure 5: the brightness is improved greatly. To show the advantage of adaptive gamma correction over traditional gamma correction, we compare the adjustment results on a non-uniform illumination image. As shown in Figure 6, fixed gamma correction for the whole image either over-exposes the bright regions or under-exposes the dark regions. With adaptive gamma correction, we can brighten the dark regions while keeping the details of the bright regions.
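The per-pixel correction L̃_{L,i} = (L_{L,i})^{G_i} reduces to a single element-wise power operation once the gamma matrix G is available. In the sketch below, G is a hand-made map standing in for the encoder-decoder's prediction, and the illumination map is a toy example:

```python
import numpy as np

def adaptive_gamma(L, G):
    """Per-pixel gamma correction: pixel i of the illumination map is
    raised to its own exponent G[i]. In the paper G is predicted by an
    encoder-decoder CNN; here it is supplied directly for illustration."""
    return np.power(np.clip(L, 1e-6, 1.0), G)

# A toy non-uniform illumination map: dark left half, bright right half.
L = np.concatenate([np.full((4, 4), 0.1), np.full((4, 4), 0.9)], axis=1)
# A plausible hand-made gamma map: strong brightening (gamma << 1) on
# the dark side, near-identity (gamma ~ 1) on the bright side.
G = np.where(L < 0.5, 0.3, 0.95)
L_adj = adaptive_gamma(L, G)
```

The dark half is lifted strongly while the bright half is left almost untouched, which is exactly what a single global γ cannot do.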

Reflectance refinement
The decomposed reflectance contains noise and colour distortion, so we utilize a simple network consisting of four convolution layers to obtain the refined reflectance R̃_L. Then R̃_L • L̃_L is the final enhanced image, which has normal illumination, less noise, pleasing colours and fine details.

Loss functions
The overall loss function for the restoration network is also a compound loss:

ℒ_E = ℒ_rec^E + λ_l ℒ_l^E + λ_s ℒ_s^E,

where ℒ_rec^E is the reconstruction loss, ℒ_l^E represents the consistency loss for illumination, and ℒ_s^E is the smoothness loss for illumination. λ_l and λ_s are weighting parameters; we set λ_l = 0.1 and λ_s = 0.01. The enhanced image computed by R̃_L • L̃_L should be consistent with the normal-light image S_N, which is constrained by the following reconstruction loss:

ℒ_rec^E = ‖R̃_L • L̃_L − S_N‖_2,

where ‖·‖_2 is the ℓ2 norm. Meanwhile, the enhanced illumination should be consistent with the illumination of the normal-light image:

ℒ_l^E = ‖L̃_L − L_N‖_2.

Similar to the smoothness loss in the Retinex decomposition stage, the enhanced illumination is constrained by the following smoothness loss:

ℒ_s^E = ‖∇L̃_L • e^(c|∇R̃_L|)‖_1,

where c is set to −10.
In the restoration stage, we focus not only on robust performance but also on convergence speed. Therefore, both the L2 and L1 losses are utilized; the L2 loss helps improve the convergence speed. The experimental results indicate that our restoration network achieves high accuracy and fast convergence.
In our proposed method, we set the layer and channel numbers experimentally to achieve a trade-off between performance and computational complexity. A shallower network, that is, one with fewer layers, has low computational complexity but poor performance. As layers are added, the performance improves, but the computational complexity also increases. We find that when the number of down-sampling operators exceeds 3 and the number of basic residual units in each ResBlock exceeds 8, the performance improves little while the computational complexity keeps growing. Therefore, we set the number of down-sampling layers to 3 and the number of basic residual units to 8 in the ResBlocks of the Retinex decomposition network. For the same reason, we set the number of layers in the adaptive gamma estimation network to 6 and the number of layers in the reflectance refinement network to 4.

EXPERIMENTS
In this section, we first present the implementation and parameter settings of our method. Then ablation studies for multi-scale decomposition and adaptive gamma correction are presented. Finally, we evaluate the performance of our method and demonstrate its advantage over state-of-the-art methods [5, 12, 16, 21, 42, 43] through qualitative and quantitative assessments.

Training details and datasets
We train our model on the training images of the LOL dataset [12]. The Retinex decomposition and image restoration networks are trained separately. The training images are cropped into patches of size 96×96. We train the decomposition network with the Adam optimiser and the restoration network with the SGD optimiser; the batch size is set to 8, and the number of training epochs for each sub-network is set to 500. The initial learning rate is 0.001 and is decreased by a factor of 0.75 every 100 epochs. Our network is trained on a computer with an NVIDIA GeForce GTX 1080Ti GPU. Figure 7 presents the features learned by the decomposition network (top row), the adaptive gamma estimation network (middle row), and the reflectance refinement network (bottom row). From left to right, the more convolutional layers the input image goes through, the more specific the features become to their final patterns. It can be observed that the convolutional layers in the Retinex decomposition network focus not only on detailed features but also on intensity variation. Meanwhile, in the adaptive gamma estimation network, the convolutional layers focus on extracting intensity variance in order to generate the adaptive gamma correction map. To obtain a better reflectance map, sufficient detailed features should be learned; hence the reflectance refinement network focuses on detailed features.
We test all methods on 15 images from the LOL dataset [12], 10 images from the LIME dataset [16], 60 images from the MEF dataset [44], and 24 images from the VV dataset [45]. These datasets contain a variety of lighting conditions and include both indoor and outdoor photographs. To show the effectiveness and superiority of our method, we perform both quantitative and qualitative assessments on these photographs. For quantitative comparison, five metrics are adopted: PSNR, SSIM, the natural image quality evaluator (NIQE) [46], the blind image quality index (BIQI) [47], and the blind/referenceless image spatial quality evaluator (BRISQUE) [48]. Among these metrics, NIQE, BIQI and BRISQUE are no-reference image quality evaluators. Higher values of PSNR and SSIM indicate better quality, while for NIQE, BIQI and BRISQUE, lower is better.
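For reference, PSNR, the simplest of the full-reference metrics above, can be computed in closed form for images normalised to [0, 1]:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in
    [0, peak]; higher means the two images are closer."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

SSIM, NIQE, BIQI and BRISQUE involve local statistics or learned models and are normally taken from existing implementations rather than re-derived.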

Ablation study for multi-scale decomposition
In the Retinex decomposition stage, we apply a four-scale pyramid network. To verify its effectiveness for the enhancement task, we compare it with single-scale decomposition (namely, the input image directly goes through a ResBlock to produce the decomposed illumination and reflectance) and a two-scale pyramid network (namely, the input image is decomposed into {S_0, S_1}).
The visual comparison results are presented in Figures 8 and 9. From the results, we can see that the contrast increases as more scales are used. The quantitative results in Table 1 also demonstrate that the four-scale pyramid network has better performance for low-light image enhancement.

Ablation study for adaptive gamma correction
In the restoration stage, the illumination is adjusted by adaptive gamma correction. To prove its effectiveness, we compare the enhancement results with those of traditional gamma correction, in which a fixed correction parameter is learned for the whole image instead of the gamma matrix used in our method.
The visual comparison results are presented in Figures 10 and 11. It can be observed that adaptive gamma correction obtains better enhanced results than traditional gamma correction. From the illumination component of the enhanced results, traditional gamma correction over-exposes some areas, whereas more details are preserved by the proposed method. Both strategies can lighten dark areas while preserving the lightness order; however, traditional gamma correction leads to a loss of details in bright areas.

Comparison with state-of-the-arts
We compare with the following state-of-the-art methods to demonstrate the superiority of our method: the dehazing-based method (Dong) [5], BIMEF [42], simultaneous reflectance and illumination estimation (SRIE) [21], image enhancement by illumination estimation (LIME) [16], the learning-based joint enhancement and denoising method (JED) [43], and Retinex-Net [12]. Qualitative and quantitative analyses are performed in this comparison.

Qualitative analysis
To perform the qualitative comparison between the state-of-the-art methods and the proposed method, visual enhancement results are shown in Figures 12-15, which include images with different lighting conditions. The visual comparison for low-light image enhancement focuses on brightness, noise, contrast, and so on. Figures 12 and 13 present the visual comparison results on two low-light images from the MEF dataset.

FIGURE 12
Visual comparison with state-of-the-art low-light image enhancement methods on one image from MEF dataset. (a) Low-light image, enhancement result of Dong [5], enhancement result of BIMEF [42], enhancement result of SRIE [21]. (b) Enhancement result of LIME [16], enhancement result of JED [43], enhancement result of Retinex-Net [12], and proposed enhancement result.

FIGURE 13
Visual comparison with state-of-the-art low-light image enhancement methods on one image from MEF dataset. (a) Low-light image, enhancement result of Dong [5], enhancement result of BIMEF [42], enhancement result of SRIE [21]. (b) Enhancement result of LIME [16], enhancement result of JED [43], enhancement result of Retinex-Net [12], and proposed enhancement result.

FIGURE 14
Visual comparison with state-of-the-art low-light image enhancement methods on one image from LOL dataset. (a) Low-light image, enhancement result of Dong [5], enhancement result of BIMEF [42], enhancement result of SRIE [21]. (b) Enhancement result of LIME [16], enhancement result of JED [43], enhancement result of Retinex-Net [12], and proposed enhancement result.

FIGURE 15
Visual comparison with state-of-the-art low-light image enhancement methods on one image from LOL dataset. (a) Low-light image, enhancement result of Dong [5], enhancement result of BIMEF [42], enhancement result of SRIE [21]. (b) Enhancement result of LIME [16], enhancement result of JED [43], enhancement result of Retinex-Net [12], and proposed enhancement result.

From the results, we can see that some details in dark areas are still not revealed in the enhancement results of the compared methods. BIMEF, SRIE and JED produce low contrast, while Dong, BIMEF, LIME and Retinex-Net produce obvious noise. The results of Retinex-Net also show colour distortion. In contrast, our method is effective in detail preservation and noise removal. Figures 14 and 15 present the visual comparison results on two low-light images from the LOL dataset, which contains some extremely low-light images. For these images, Dong, LIME and Retinex-Net improve the brightness to some extent at the cost of introducing heavy noise; for Dong in particular, the larger the light control parameter is set, the more noise is introduced. For BIMEF, SRIE and JED, the enhanced results still contain many dark areas. In contrast, our method produces visually pleasing results with proper brightness and less noise on these extremely low-light images.
It can be observed that the results of Retinex-Net still contain much noise. Although it utilizes BM3D to remove the noise in the decomposed reflectance component, it cannot remove the noise completely. The main reason is that BM3D is designed for removing Gaussian noise with a fixed noise level, whereas the noise in the reflectance component is varied and more complex than Gaussian noise; for example, the noise level in dark regions is much higher than in bright ones. Applying BM3D with a uniform denoising strength over the whole reflectance map is therefore no longer effective. In contrast, the results of the proposed method exhibit less noise, which is attributed to the reflectance refinement network that removes the amplified noise in dark regions.

TABLE 3
Quantitative comparison on the LIME [16], MEF [44] and VV [45] datasets in terms of NIQE [46], BIQI [47] and BRISQUE [48]. The best results are highlighted and the next best results are underlined.

Quantitative analysis
To numerically compare the quality of the enhanced results on the LOL dataset between the proposed method and the state-of-the-art methods, we adopt NIQE [46], BIQI [47], BRISQUE [48], PSNR and SSIM. Table 2 presents the numerical comparison on the LOL dataset. It can be seen that our method performs better than most of the other methods. The reasons are twofold: (1) we can effectively decompose the low-light image through the multi-scale decomposition network; (2) our method learns the gamma matrix G to adjust the illumination component adaptively. Since there are no ground-truth images in the LIME [16], MEF [44] and VV [45] datasets, we adopt NIQE [46], BIQI [47] and BRISQUE [48] as objective assessments in Table 3. Our method still achieves better performance than the others on most datasets. We summarise the average scores over the three datasets in terms of NIQE, BIQI and BRISQUE in Figure 16. As can be seen, the results generated by our method are better on average.

CONCLUSION
This paper proposes a low-light enhancement method, which consists of Retinex decomposition and image restoration networks. We adopt a multi-scale network to obtain the decomposition result. Then, the decomposed reflectance is refined in the restoration network. The decomposed illumination is adjusted via a pixel-wise gamma correction matrix. Experiments show that our method produces more visually pleasing and better quantitative results.
In the future, we would like to extend the proposed method to low-light video enhancement by introducing a temporal correlation exploration module in both the decomposition and restoration networks, and adding a temporal consistency regularisation to avoid flickering artifacts.