Underwater image restoration: A state-of-the-art review

Underwater imaging is one of the hot areas of research and is receiving considerable attention from the research community due to the challenges involved. Underwater images are prone to various distortions like poor contrast and colour deviation. Light scattering and light absorption in the water medium are the main reasons for degradation in subaquatic images. Scattering of visible-light energy reduces the sharpness of the image, whereas the varying attenuation of light travelling through water results in colour change. Restoration of distorted underwater images is an ill-posed and challenging problem. Various techniques and methodologies are being used to process and restore underwater images. In this study, we present a state-of-the-art review of the various conventional and computer vision-based algorithms and techniques developed so far, to present a clearer view of the methods used for underwater image restoration. We discuss the various conditions for which the schemes have been developed, as well as highlight the quality assessment methods used to evaluate their performance. We compare various state-of-the-art schemes based on subjective and objective indices and discuss future research directions in the field of underwater image restoration.


INTRODUCTION
In ocean engineering, obtaining undistorted images in the water medium is a major issue. Oceans hold a large amount of energy resources, which play an essential role in sustaining existence on earth, and aquatic surveying involves highly technological activities [1,2]. Fog and haze removal are mandatory for real-time applications, for example, video-guided transportation [3][4][5], environment monitoring [6][7][8], surveying of real-time remote-sensing pictures [9][10][11][12], and auto-pilot-supported structures [13][14][15]. In addition, it is quite challenging to precisely recover remotely sensed objects covered by thick clouds and shadows [16]. Digital picture processing is likewise challenging and developing rapidly. Haze in a picture appears as mixed pixels in remote sensing [17]. Furthermore, blurriness leads to the deterioration of visual information [18]. The quality of the underwater image is essential for analysing aquatic life and studying biological or geological habitats [19]. Optical technology has attracted a lot of focus as it carries ample information [20]. The chemical composition and physical characteristics of the underwater environment immensely affect the quality of pictures captured there, leading to problems that are not encountered in imaging above water [21]. Moreover, the density of water is about 800 times greater than that of the atmosphere above it. Thus, the visible light which falls on water partly bounces back and partly goes into the water [22]. (This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. © 2020 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.)
The portion of energy entering the water begins to decrease as it propagates. Water molecules absorb part of the light energy, which is why subaquatic images appear darker with increasing depth. Particles suspended in the water absorb much of the energy and deviate light from its path before it reaches the camera, resulting in blur, reduced contrast, and mist [23]. Also, because of absorption, scene radiance decreases as the distance of the desired object from the camera increases [24]. Depending on their wavelengths, colours drop off gradually as the visible-light energy propagates deeper. First, the longest wavelength, that is, red, is lost at about 3 m of depth, and the orange wavelength is lost at a depth of about 5 m. After that, the yellow wavelength disappears at about 10 m, and eventually the green and purple wavelengths disappear [25]. Blue light travels farthest in the water owing to its short wavelength, which is why underwater images are dominated by blue [26]. Figure 1 shows how colour drops off as light travels deeper underwater. Water molecules also remove some percentage of the visible-light energy, which causes haziness in an image [27]. As visible light coming from the scene travels toward the aperture of the camera, part of this visible-light energy encounters the particles suspended in water, which absorb and scatter the light energy, as illustrated in Figure 1. Artificial illumination is frequently used, but it too is affected by scattering and attenuation [28]; at the same time, it causes uneven illumination, leading to bright spots at the centre of the subaquatic image [29].
Three types of light radiation are received by the camera: energy bounced back from the scene directly (direct transmission), energy that encounters tiny particles and gets scattered before reaching the aperture of the image-capturing device (forward scattering), and energy from ambient light that is bounced back by the particles present in water (backscattering) [30].
An undersea image is modelled as a linear combination of direct-transmission, forward-scattering, and backscattering components [31][32][33]:

E_t(l, m) = E_d(l, m) + E_f(l, m) + E_b(l, m)    (1)

where (l, m) denotes the coordinates of picture elements and E_t(l, m) denotes the overall energy arriving at the aperture of the camera.
As the distance between the undersea scene and the camera aperture is relatively small, the forward-scattered energy can be neglected and only the remaining two components [19,34] are considered.
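The simplified two-component model above can be sketched numerically. The function and symbol names below follow the common IFM convention (scene radiance, transmission, background light) and the numeric values are purely illustrative:

```python
# Sketch of the simplified underwater image model (forward scattering
# neglected): E_t(l, m) = E_d(l, m) + E_b(l, m).

def observed_intensity(scene_radiance, transmission, background_light):
    """Direct transmission plus backscatter for one pixel, one channel."""
    direct = scene_radiance * transmission                  # E_d: attenuated signal
    backscatter = background_light * (1.0 - transmission)   # E_b: veiling light
    return direct + backscatter                             # E_t

# A nearby pixel (high transmission) keeps most of its radiance ...
near = observed_intensity(0.8, 0.9, 0.4)
# ... while a distant pixel is dominated by the background light.
far = observed_intensity(0.8, 0.1, 0.4)
print(near, far)
```

The two calls illustrate why distant scene points lose contrast: as transmission falls, the observed intensity converges to the background light regardless of the true scene radiance.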
Given all the intricacies involved in the formation of the underwater image and the various underwater distortions that creep into the process, underwater image restoration is a tedious task. In this study, we mainly focus on image restoration algorithms. We make multiple contributions to the study of various undersea image restoration algorithms, presented as follows.
We divide underwater image restoration techniques into two main categories: Hardware-driven techniques and software-driven techniques (prior-driven image restoration methods and network-based image restoration methods).
We show comparisons based on some representative prior-driven techniques, providing an assessment based on different quality metrics.
We also discuss the issues in different underwater image restoration algorithms leading to new research directions.
The remainder of the study is organised as follows. In Section 2, we discuss the concept of image restoration. Section 3 discusses various hardware-based restoration methods. Section 4 discusses various software-based restoration methods.
Here, E_d(l, m) denotes the direct-transmission component, E_b(l, m) the backscattering component, and E_f(l, m) the forward-scattering component of the model above.
Section 5 shows a comparison of various subaquatic picture-recovering techniques through experiments. Section 6 discusses advantages and disadvantages of current picture-restoration techniques, and Section 7 concludes the discussion and puts forward future directions.

IMAGE RESTORATION
The methods of image restoration are based on the degradation principle, unlike image-enhancement methods. Image-enhancement methods do not take into consideration the relationship between depth and the deterioration of images. Such techniques enhance the visual effects but do not reflect the true colour of an image correctly. An enhancement technique corrects the sharpness and the visual quality of the picture but does not remove the haze completely [6]. Some image-enhancement methods used for underwater images are one-scan shadow compensation and visual enhancement of colour images [35], underwater image enhancement via extended multiscale retinex [36], fusion-based underwater image enhancement [37], an automatic system for improving colour and contrast using adaptive histograms [38], subaquatic image enhancement using an integrated colour model [39], subaquatic image enhancement using a hue-preserving approach [40], Rayleigh distribution (RD)-based enhancement [41] and relative global histogram stretching (RGHS) [42]. The above-mentioned techniques over-enhance pictures because these methods do not depend on prior data about subaquatic conditions [43]. The representation popularly used for a misty picture in computer vision is [44][45][46][47]:

I(x) = J(x) t(x) + A (1 − t(x))    (2)

where I denotes the input intensity at point x, J denotes the scene radiance at point x, A denotes the global ambient-light energy and t denotes the medium transmission map (TM) describing the part of the energy that reaches the camera aperture without getting scattered. The above-mentioned model is popularly called the image formation model (IFM) and is given by Duntley et al. [48]. We use Duntley's representation for subaquatic image recovery. The two terms of the equation account, respectively, for the attenuated scene signal and the haze [49][50][51][52]. The main aim of dehazing is to compute the value of J, which can be calculated after estimating the two important parameters, that is, the global ambient light and the TM.
Therefore, these unknowns need to be estimated for the restoration of an image captured underwater. Based on the available studies, the methods of image restoration are classified into two broad areas: Hardware-driven and software-driven approaches (prior-driven and network-driven techniques).
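Once the ambient light A and the transmission t have been estimated, the IFM can be inverted for the scene radiance J. A minimal sketch of this inversion follows; the lower bound `t0` on the transmission is a common safeguard against noise amplification (an illustrative choice, not prescribed by the text):

```python
def dehaze_pixel(I, A, t, t0=0.1):
    """Invert the IFM I = J*t + A*(1 - t) for the scene radiance J.
    t is clamped below by t0 so that division does not blow up noise
    where the transmission approaches zero."""
    t = max(t, t0)
    return (I - A) / t + A

# Round-trip check: synthesise a hazy pixel from known J, A, t, then recover J.
A, t, J = 0.5, 0.4, 0.9
I = J * t + A * (1 - t)       # forward model
recovered = dehaze_pixel(I, A, t)
print(recovered)
```

With exact A and t, the inversion recovers J; in practice both parameters are estimates, which is why the priors and networks surveyed below matter.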

HARDWARE-BASED APPROACHES
These approaches use hardware equipment to recover an underwater image. Some hardware approaches include components like sensors, lasers, polarisers, remotely controlled vehicles, stereo imaging, aqua-tripods, and laser-based methods.
Polarisers can diminish the effect of backscattering to some degree and can be implemented by using polarisation cameras or a polarised light source to capture images [53]. To avoid the effect of backscattering, laser imaging makes use of an image-capturing system that can shut its gate at a particular instant. Macro-particles, subaquatic snow and subaquatic organisms can be detected by waterproof sensors to eliminate reflections from the suspended particles. An aqua-tripod is positioned on the surface of the sea to capture images properly. Some of the main hardware methods are described as follows:

Remotely controlled vehicles
A remotely controlled vehicle for undersea surveying has been presented by Zischen et al. [54]. It has an artificial lighting system, thrusters, and video cameras. The equipment is positioned at different places, facing different water environments, to capture samples of these environments. A computer simulates the reflected visible-light energy. Before this step, a Wiener filter, which takes into account an analysis of the subaquatic conditions, is applied to recover the blurry pictures. When the technique is implemented within an appropriate framework, recovering distorted images is expected to be practicable.

Polarisers
Schechner and Karpel [55] put forward an inversion model of the image based on pictures captured at various angles using a polariser. The maximum distortion in subaquatic images occurs because light is partially polarised. To address this issue, polarisers are used at multiple polarisation angles to capture images of the targeted object. These pictures are then processed by a technique that inverts the image-formation process to enhance visibility. A scene-distance map is calculated using the collection of images from the polarisers. A recovery method without scanning is given by Schechner and Treibitz [56]. Utilising artificial illumination to analyse the structure of 3-D scenes, this approach uses some hardware and polarised multi-coloured lighting. The polariser outputs a scene in two frames, followed by a restoration algorithm applied after acquisition [53]. Closer inspection leads to the conclusion that this approach is restricted to a limited range because of noise.

Range-gated imaging
This approach employs a light source in addition to the camera; the desired object lies behind a complex environment.
It is a type of imaging in which the camera opens its gate precisely for a predefined short instant, to reduce the effect of backscattering. After the illuminating source projects light toward the targeted object, the camera keeps its gate shut while the light travels. When light energy is bounced back from the desired object, the gate of the camera opens, and then shuts immediately afterwards. Tan et al. [57] proposed a design comprising two parts, upgraded hardware and optimisation techniques, for a subaquatic robotic vehicle used in undersea surveying and repair, to enhance undersea visibility [53]. The upgraded hardware consists of an efficient range-gating imaging scheme known as tailgating. System optimisation uses CLAHE to make further improvements to the pictures; this step is the same as the approach presented by Hitam et al. [58]. Li et al. [59] put forward a different range-gated imaging technique, utilising a pulsed laser for illumination and a fast-gated camera [53].

Stereo imaging
This method uses a bi-camera configuration to capture pictures of the same spot at various angles to obtain a distance map of the object. Roser et al. [60] use stereo imaging for autonomous undersea vehicles to restore subaquatic pictures in naturally illuminated murky media by predicting visibility coefficients. The technique involves two steps to improve images. The initial step restores scenes in regions with poor visibility. For the recovery of data, a Bayesian approach is employed to produce a prior, which then yields a rough 3-D scene map. The physical light-absorption model and the rough 3-D scene map are employed to predict the ambient lighting and the visibility coefficient. To obtain a depth map that preserves edges, image matting is carried out, refining the coarse map using information derived from the observed intensities, to obtain the so-called 3-D map [53]. The resulting data are then processed in another enhancement step where gaps are filled. The two-step technique is then performed again to generate an enhanced stereo map, which is improved using a gap-preserving interpolation technique. Eventually, the resulting map is used in combination with the gap-interpolated map to restore an image.

SOFTWARE-BASED APPROACHES
These algorithms employ effective software methods to restore subaquatic images. Software-driven techniques used for subaquatic image recovery fall broadly into prior-based methods and network-based methods. Software-based approaches perform well compared with hardware-based approaches: they consume less computation time, have feasible designs, and are economical (cost-effective).

Prior-based methods
Absorption of light energy and its scattering by suspended particles mainly contribute to the degradation of underwater images. Considering selective light attenuation and the hazy effect, various prior-driven methods are used for undersea image restoration [21]. These are: Dark channel prior (DCP) [61,62]; underwater DCP (UDCP) and transmission estimation in underwater single images (TEoUI; UDCP-TEoUI) [63], UDCP [64]; maximum intensity prior (MIP) and initial results in underwater single image dehazing (IUID; MIP-IUID) [65]; red channel prior (RCP) [66]; blurriness prior (BP); underwater light attenuation prior (ULAP) [67]; low-complexity UDCP [68] and so forth. These priors first calculate the atmospheric light and the medium TM; the calculated values are then substituted into the IFM representation used for image recovery [21]. Several kinds of prior data have been proposed in past years to compute the medium TM, and these methods depend on prior information. The transmission decreases gradually to zero as the scene depth increases, and the contribution of atmospheric light becomes significant. So, the pixels in an image with maximum scene depth are considered for computation of the atmospheric-light energy. To select these pixels accurately, two rules have been proposed. Chiang and Chen (2012) proposed that the pixel with the highest brightness in an image can be taken as the atmospheric light. Galdran et al. and Li and Guo, in 2015, proposed that the intensity of the reference pixels in the red channel is lower than in the B and G channels. However, both rules select the reference pixels based on colour information, and, depending on colour information, the prior-based methods underrate the transmission of blue or green colour.
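The two reference-pixel rules above can be sketched on a toy image. The helper names and the pixel values are illustrative only, not taken from the cited papers:

```python
# Two background-light (reference-pixel) selection rules, sketched on a
# toy 3-channel image stored as a list of (r, g, b) tuples in [0, 1].

def brightest_pixel_rule(pixels):
    """Chiang-and-Chen-style rule: the brightest pixel is taken as A."""
    return max(pixels, key=lambda p: sum(p))

def red_deficit_rule(pixels):
    """Galdran/Li-style rule: prefer pixels whose red intensity is low
    relative to green and blue (typical of distant underwater regions)."""
    return max(pixels, key=lambda p: max(p[1], p[2]) - p[0])

pixels = [(0.9, 0.9, 0.9),   # bright foreground highlight
          (0.1, 0.6, 0.7),   # dim, blue-green background
          (0.4, 0.4, 0.5)]
a1 = brightest_pixel_rule(pixels)   # picks the highlight
a2 = red_deficit_rule(pixels)       # picks the blue-green background
print(a1, a2)
```

The toy example shows how the two rules can disagree: a bright foreground highlight fools the brightness rule, while the red-deficit rule homes in on the distant blue-green region.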

Dark channel prior (DCP)
DCP was proffered by Kaiming He et al. [69]. It is a well-known dehazing technique used for haze removal from degraded outdoor images. As hazy images are similar to images taken underwater, the technique is also used for subaquatic image dehazing. The method uses the fact that, in at least one colour channel of a haze-free image, some of the pixels have very low intensities. If DCP is used directly for dehazing undersea images, the background-light energy can be calculated as follows: in the dark channel of a picture, select the top 0.1% brightest pixels and, out of these, choose the highest-intensity pixel [68]. Substituting this brightest pixel value as the atmospheric-light energy in Equation (2), the TM can be estimated from Equation (6). The dark channel is obtained by applying minimum operators to the scene radiance J:

J_dark(x) = min_{c ∈ {r,g,b}} ( min_{y ∈ π(x)} J_c(y) )    (3)

where c denotes a particular channel and π(x) denotes a local patch centred at x in the image. The prior states that, for haze-free images,

J_dark(x) → 0.    (4)

Applying minimum operators to both sides of Equation (2) and normalising with respect to A_c, where A_c denotes the atmospheric light in a particular channel, we get:

min_{y ∈ π(x)} min_c ( I_c(y) / A_c ) = t(x) min_{y ∈ π(x)} min_c ( J_c(y) / A_c ) + 1 − t(x).    (5)

Using Equations (3) and (4) in the above, we get:

t(x) = 1 − min_{y ∈ π(x)} min_c ( I_c(y) / A_c ).    (6)

Since t(x) is considered constant in the patch π(x) while the true TM is not, some block artefacts and halos are obtained. Therefore, soft matting or filtering can be used to refine the TM [69,68].
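The derivation above can be sketched in code. The toy image, the square-patch handling, and the haze-retention weight `omega` are illustrative assumptions (He et al. commonly keep a small amount of haze with a weight near 0.95 for depth perception):

```python
# Minimal dark-channel-prior sketch: pixels are (r, g, b) tuples in a
# 2-D grid; the patch is a square neighbourhood clipped at the borders.
# Equation numbers follow the text: (3) dark channel, (6) transmission.

def dark_channel(img, radius=1):
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            patch_min = 1.0
            for py in range(max(0, y - radius), min(h, y + radius + 1)):
                for px in range(max(0, x - radius), min(w, x + radius + 1)):
                    patch_min = min(patch_min, *img[py][px])  # min over channels too
            row.append(patch_min)
        out.append(row)
    return out

def transmission(img, A, radius=1, omega=0.95):
    """t(x) = 1 - omega * dark channel of I/A (Equation (6))."""
    norm = [[tuple(c / a for c, a in zip(p, A)) for p in row] for row in img]
    dark = dark_channel(norm, radius)
    return [[1.0 - omega * d for d in row] for row in dark]

img = [[(0.2, 0.5, 0.6)] * 3 for _ in range(3)]   # uniform hazy patch
A = (0.8, 0.9, 0.9)
t = transmission(img, A)
print(t[1][1])
```

On this uniform patch the dark channel of the normalised image is 0.25 (the red ratio), so the estimated transmission is 1 − 0.95 × 0.25 = 0.7625 everywhere.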
Chao and Wang [70] applied DCP directly to restore underwater images and remove the haze present in them. However, the recovered images appear less natural, as they suffer from additional colour deviation. Yang et al. [68] in 2011 presented an underwater image-restoration technique based on DCP to decrease computation and execution time. They used median filtering instead of soft matting for refinement of the TM, which contains the image-depth information. The technique also proposes a colour-correction step to correct the brightness of recovered images. However, it is suitable only for colour-rich images and not for recovering undersea images with a dim scene or uneven colour spots [21]. Chen and Chiang [19] presented image dehazing and wavelength compensation to reduce the effect of artificial illumination, compensate the three colour channels with their varying absorption characteristics, and dehaze the image [21]. Lu and Serikawa [71] used a combination of fast joint trigonometric filtering (JTF) and DCP to compensate for the discrepancy in attenuation along the light-propagation path. The filtering is used to refine the medium transmission calculated by conventional DCP, removing non-uniform colour cast, correcting contrast and preserving edge information in images. Zhao et al. [72] assessed subaquatic optical properties by analysing the colour of the background area in undersea images. The attenuation coefficients of the three colour channels, namely RGB, are calculated from the relation between the background light (BL) and the inherent optical properties. The conventional DCP algorithm was used to estimate the TM of the R channel, after which the TMs of the G and B channels were estimated from the R channel using the exponential dependence between attenuation coefficients. Peng et al. [73] selected the top 0.1% of pixels with the highest intensity in the dark channel and took the average of the selected pixel intensities as the atmospheric light. DCP is influenced by the selective attenuation of light energy in water, which has resulted in the development of various DCP algorithms specific to underwater environments.
Underwater DCP-driven image restoration: When light energy travels in water it undergoes attenuation. As the wavelength of red is much larger than those of the other colours in the visible spectrum, red drops faster than the other colours. Therefore, the R channel of a subaquatic image is usually taken as the dark channel. However, to nullify the effect of the red colour, UDCP [63], by Drews et al., calculates the dark channel of an underwater picture by considering the blue and green channels only. To estimate the rate of scattering, Wen et al. [74] proposed an optical model for undersea images which resembles UDCP. UDCP estimates the TM more accurately than traditional DCP; however, the images restored by UDCP are less satisfactory. The reason for the unsatisfactory results is that these techniques do not consider the imaging characteristics of the R and GB wavelengths. Lu et al. [75] observed that in turbid water the blue channel can occasionally have the lowest pixel value, that is, the R channel does not always have the minimum pixel value among the RGB channels. Thus, they proposed using a dual dark channel comprising the B and R channels to calculate a rough TM, and removed halos in the estimated medium transmission by utilising a weighted median filter. Lu et al. [76] obtained super-resolution of an underwater image by taking a scattered high-resolution (HR) image and descattered images; afterwards, a convex fusion rule is applied to restore the final HR picture. Guojia Hou et al. [77] proposed a new curvature- and total-variation-based method to denoise and dehaze pictures simultaneously.
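The UDCP modification is a one-line change to the dark-channel computation: the red channel is simply excluded from the minimum. A self-contained sketch (toy image and patch handling are illustrative):

```python
# UDCP variant of the dark channel: red is ignored because it attenuates
# fastest underwater, so only the green and blue channels are searched.

def udcp_dark_channel(img, radius=1):
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            m = 1.0
            for py in range(max(0, y - radius), min(h, y + radius + 1)):
                for px in range(max(0, x - radius), min(w, x + radius + 1)):
                    r, g, b = img[py][px]
                    m = min(m, g, b)          # note: red channel excluded
            row.append(m)
        out.append(row)
    return out

# A red-depleted pixel no longer drags the dark channel toward zero:
img = [[(0.05, 0.6, 0.7)] * 3 for _ in range(3)]
print(udcp_dark_channel(img)[1][1])
```

On this toy patch the conventional dark channel would be 0.05 (the attenuated red value) at every pixel, wrongly signalling dense haze; the UDCP value of 0.6 reflects only the green/blue content.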

Bright channel prior (BCP)
The medium transmission is estimated via the bright channel. The degradation model is first written per colour channel [78]:

I_c(x) = J_c(x) t(x) + A_c (1 − t(x)),  c ∈ {r, g, b}    (7)–(9)

where t denotes the medium transmission; I_r, I_g and I_b denote the three colour-channel pictures of the distorted undersea image; J_r, J_g and J_b are the scene radiances of the three colour channels [78]; and A_r, A_g and A_b are the atmospheric lights of the R, G and B pictures of the deteriorated undersea image, respectively [78]. The non-degraded scene radiances (J_r, 1 − J_g, 1 − J_b) are denoted J_n, the correspondingly transformed degraded image is denoted I_n, and A_n denotes the atmospheric light in the degraded image, giving:

I_n(x) = J_n(x) t(x) + A_n (1 − t(x))    (10)

which, using the three colour channels c ∈ {r_n, g_n, b_n}, is written per channel as Equation (11). The bright channel is defined as

J_bright(x) = max_{c ∈ {r_n, g_n, b_n}} ( max_{y ∈ Ω(x)} J_nc(y) )    (12)

where Ω(x) is a local patch of the image. The bright-channel intensity of undersea pictures with murky water areas and far-away scenes is approximately 1 [78]:

J_bright(x) → 1.    (14)

Taking maximum values on both sides of Equation (11) and using this assumption, t(x) can be calculated as:

t(x) = ( max_{y ∈ Ω(x)} max_c I_nc(y) − A_n ) / (1 − A_n).    (15)

The image with the highest colour difference is defined from the per-pixel differences, clipped at zero, between the channel intensities (16), where B_g-subr is the colour image with the highest difference, x is the pixel location, C_max is the channel with the greatest intensity of the three channels, C_mid has the medium intensity, and C_min has the minimum intensity [76].
The transmission obtained is smaller than the actual transmission because, in BCP, the bright channel of the non-degraded image is assumed to be approximately 1. Therefore, to increase stability, the calculated bright channel of a picture is corrected using the maximum colour-difference image [78]. The corrected bright channel light_correct(x) is obtained by combining, with a proportional coefficient λ, the non-rectified bright channel light(x) = max_{y ∈ Ω(x)} ( max_{c ∈ {r_n, g_n, b_n}} I_nc(y) ) of the deteriorated undersea picture with the maximum colour-difference image (17).
To compute the global atmospheric-light energy, the bright channel of a degraded picture is used. The variance image of the grayscale version of the original image is produced. Then, the 1% of pixels with the lowest intensity in the bright channel are picked. From these pixels, the pixel with the minimum value in the variance picture is chosen as the atmospheric-light energy. Finally, the haze-free image is recovered using Equation (10), as all unknowns have been calculated. The whole process is described in Figure 2.
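The core bright-channel transmission estimate can be sketched as follows. The patch is simplified to the whole toy region, and the scalar atmospheric light `A` is illustrative; this is a sketch of the general bright-channel formulation, not a full reimplementation of [78]:

```python
# Bright-channel-prior sketch: the bright channel of a clear image is
# assumed ~1, so taking maxima over I = J*t + A*(1 - t) gives
# t(x) = (bright(x) - A) / (1 - A).

def bright_channel(pixels):
    """Maximum intensity over all channels of all pixels in the patch."""
    return max(max(p) for p in pixels)

def bcp_transmission(pixels, A):
    b = bright_channel(pixels)
    return (b - A) / (1.0 - A)

patch = [(0.5, 0.7, 0.8), (0.4, 0.6, 0.7)]   # toy patch of (r, g, b) pixels
A = 0.2                                       # illustrative atmospheric light
t = bcp_transmission(patch, A)
print(t)
```

Here the patch's bright channel is 0.8, giving t = (0.8 − 0.2)/(1 − 0.2) = 0.75; because the true bright channel is slightly below 1, this estimate under-shoots, which is exactly why the colour-difference correction of Equation (17) is applied.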

Maximum intensity prior (MIP)
Exploiting the strong variation in attenuation between the GB and R channels of subaquatic pictures, Carlevaris-Bianco et al. [65] found a method wherein a great difference between the absorption of the RGB wavelengths was observed. Based on this prior data, the depth of the scene is computed; the proposed algorithm is thus called MIP. The variation between the maximum R-channel intensity and the maximum GB-channel intensity is used for medium-transmission estimation. Experiments proved that MIP can compute rough distance maps of underwater pictures [21]. The proposed algorithm can be utilised for atmospheric-light estimation as well. Wen et al. [74] proposed an improved version of MIP (i.e. a new optical model (NOM)) to compute the atmospheric light, assuming that the intensity of the R wavelength is comparatively smaller than the intensity of the green-blue wavelengths in the background area. Zhao et al. [72] proposed a method for estimating the BL using MIP and DCP: the top 0.1% of the brightest pixels in the dark channel were selected first, then the pixel with the highest variation between the blue-green or green-red wavelengths was selected [21]. Li et al. [79] estimated the atmospheric light by selecting the pixels with the highest difference. Using a mixed method in [80,81], they picked a single flat background area using a quad-tree, then selected the top 0.1% brightest pixels in the dark channel from the chosen region. Finally, the pixel among these with the highest variation between the R and B wavelengths in the original picture is selected as the global ambient light [21].
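The MIP depth cue can be sketched as follows. The toy patches are illustrative, and the shift that maps the closest point to transmission 1 follows the spirit of Carlevaris-Bianco et al., though the exact scaling in the paper may differ:

```python
# MIP sketch: the gap between the strongest red response and the
# strongest green/blue response in a patch is a rough depth cue -- a
# large negative gap means red has been absorbed, i.e. a distant scene
# point. The cue is shifted so the closest point has transmission 1.

def mip_cue(patch):
    max_red = max(p[0] for p in patch)
    max_gb = max(max(p[1], p[2]) for p in patch)
    return max_red - max_gb

def mip_transmission(cues):
    shift = 1.0 - max(cues)           # closest patch maps to t = 1
    return [c + shift for c in cues]

near_patch = [(0.7, 0.5, 0.4)]   # red still strong: close to the camera
far_patch = [(0.1, 0.5, 0.6)]    # red absorbed: far from the camera
t = mip_transmission([mip_cue(near_patch), mip_cue(far_patch)])
print(t)
```

The near patch gets transmission 1.0 and the far patch 0.3, matching the intuition that red deficit grows with distance travelled through water.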

Underwater light attenuation prior (ULAP)
Song et al. [67] presented an effective algorithm for computing scene depth which uses the concept of ULAP. The method observes that the difference between the maximum of the G and B intensities and the R intensity at a single pixel of an undersea image relies strongly on the scene depth. The proposed model, a linear one, is used to estimate the TM and the atmospheric light. Assuming that the scene depth grows with a greater difference between the maximum of the B and G values and the value of the R wavelength, the linear representation of scene-depth computation is trained [67]. Finally, the non-degraded image is restored after calculating the background light, scene depth, and TM.
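The trained linear model can be sketched as follows. The coefficient values below are placeholders for illustration only; in ULAP they are learned from annotated training data:

```python
# ULAP sketch: scene depth is modelled as a trained linear function of
# m = max(G, B) and the red intensity v. MU0..MU2 are hypothetical
# coefficients standing in for the learned ones.

MU0, MU1, MU2 = 0.5, 0.8, -0.9   # placeholder "learned" coefficients

def ulap_depth(r, g, b):
    m = max(g, b)                 # maximum of green and blue intensities
    return MU0 + MU1 * m + MU2 * r

shallow = ulap_depth(0.8, 0.5, 0.4)   # strong red -> small estimated depth
deep = ulap_depth(0.05, 0.6, 0.7)     # red absorbed -> large estimated depth
print(shallow, deep)
```

With the negative coefficient on red, pixels retaining strong red intensity map to small depths, encoding the prior that red survives only over short underwater paths.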

Blurriness prior (BP)
Peng et al. [73] introduced BP, which states that deeper scenes are more blurred: the greater the scene depth, the more blurred the underwater image. Based on this assumption, scene-depth estimation and complete image restoration are carried out. Peng and Cosman [82] proposed an improved version of BP, called image blurring and light absorption (IBLA), used for precise background light-energy computation and scene-depth estimation. It restores subaquatic images in various types of complex environments.
First, an initial picture blur map is calculated and a rough blurriness map is then estimated using a max filter. The rough blurriness map is enhanced by filling gaps due to flat areas in objects using morphological reconstruction [83], and soft matting [84] or guided filtering [85] is used to refine the blur map.
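The first step can be sketched at a single blur scale (IBLA averages over several scales and uses morphological reconstruction; the single box blur and 3×3 max filter below are simplifying assumptions):

```python
# Blurriness-map sketch (one scale): blur the image, take the absolute
# difference from the original, and spread it with a max filter. Sharp
# (near) regions differ a lot from their blurred version; already-blurry
# (far) regions barely change.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[py][px]
                    for py in range(max(0, y - 1), min(h, y + 2))
                    for px in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def blurriness_map(img):
    blurred = box_blur(img)
    diff = [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(img, blurred)]
    h, w = len(diff), len(diff[0])
    # 3x3 max filter spreads the response, filling small flat gaps
    return [[max(diff[py][px]
                 for py in range(max(0, y - 1), min(h, y + 2))
                 for px in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

sharp_edge = [[0.0, 0.0, 1.0, 1.0]] * 4   # hard edge: strong response
flat = [[0.5] * 4] * 4                    # flat region: zero response
```

A hard edge (sharp, near content) yields a strong blurriness response, while a perfectly flat region yields zero, mimicking how the map separates near and far scene areas.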
Second, the BL is estimated using the variance and blurriness of the image. The BL for each of the three colour channels is computed between the brightest and the darkest BL candidates [82]. A high percentage implies the underwater image was captured with adequate light, and the BL computed is considered bright; if the underwater image was captured in inadequate light, then the BL candidate is considered dark. Between the two extremes, the BL is computed as a weighted combination of the brightest and darkest BL candidates.
Depth estimation is then carried out by combining three depth-computation methods. The first scene-depth map is computed from the R-channel map, under the assumption that scene points preserving a large amount of the R wavelength are near the camera aperture. The second estimate uses the difference between the R wavelength and the highest of the GB wavelengths: if the difference is large, the scene is close to the camera aperture [65]. The third estimate uses the concept of image blurriness and light absorption. The estimated distance map is then corrected by soft matting [84] or guided filtering [85].
Finally, the medium TM is estimated from the depth map. Therefore, the calculation of the distance of a scene point from the camera aperture should be accurate. In the end, the non-degraded image is recovered by restoring the scene radiance, as all the unknown parameters have been calculated. The process is described in Figure 3.

Network-based methods
Network-driven approaches form the framework for many machine-learning algorithms. They are used for processing complex data and rely on training datasets [53]. Subaquatic imaging has long given much attention to colour correction and haze removal [86]. Here, we discuss some of the data-driven, learning-based picture-recovery techniques that are rising rapidly with the speedy growth of AI (artificial intelligence) [87]. The prior-based methods discussed above make large estimation errors if the prior assumptions are not accurate. To address these issues, techniques relying on deep learning have been adopted for the calculation of the medium transmission and the global ambient-light energy. These techniques employ deep-learning models that are trained with synthetic datasets to compute the medium transmission. The prior-based approaches assume that the transmission of the three colour channels is constant, which results in halo artefacts and the need for refinement of the TM.
The performance of data-driven algorithms highly depends on the quality of the training dataset [88]. With the fast advancement of deep learning for image recovery, a considerable shift has occurred from parameter selection based on synthetic representations for optimisation to automated training representations, which use reference data for the extraction of a few essential feature vectors [21].

Convolutional neural networks (CNNs)
This category of networks is made up of several stacked layers: an input layer, an output layer, plus some layers in between which pass the partially processed data to the next layers in succession, as described in Figure 4. Such a technique consumes little computational time [53]. Anwar et al. [89] proposed an image-enhancement model based on CNNs, namely underwater CNN (UWCNN). Using an automated network-driven learning process on databases of synthetic undersea pictures, this model can regenerate undersea pictures clearly [53]. A white-balance algorithm was used to improve the overall quality of degraded submarine images [90]; after white balancing, a conventional CNN was used for atmospheric-light and medium-TM computation, and finally the subaquatic image was restored. Eigen et al. [91] and Cao et al. [92] proposed a multiscale deep neural network consisting of stacked coarse CNN models and a refinement network to calculate the ambient-light energy and the depth map [21]. This approach produced better-quality restored underwater images than the other prevailing restoration algorithms based on the IFM. Barbosa et al. [93] addressed the issue that an end-to-end model may be unsuccessful in improving the visual quality of undersea pictures lacking sufficient ground truth of scene radiation [21]. They presented a CNN-based method that utilises a collection of image-quality-evaluation metrics to direct the learning process for restoration without using ground-truth data. Hou et al. [94] put forward data and prior knowledge that were then clustered to observe the distribution in the image and reduce colour distortion. CNNs meant for recovering an image compute BLs or depth maps of a scene via feature learning; the performance relies on the design of the network and the training data.
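The stacked-layer idea can be illustrated with a toy forward pass: two 3×3 convolutions with ReLU, the basic building block of the transmission-estimating CNNs described above. The kernels here are arbitrary placeholders; a real UWCNN-style model learns its weights from synthetic underwater data:

```python
# Toy CNN building block: one 3x3 convolution with zero padding and a
# ReLU non-linearity, applied twice in succession (stacked layers).

def conv3x3(img, k, bias=0.0):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = bias
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    py, px = y + dy, x + dx
                    if 0 <= py < h and 0 <= px < w:   # zero padding
                        s += img[py][px] * k[dy + 1][dx + 1]
            out[y][x] = max(s, 0.0)                   # ReLU activation
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]      # pass-through kernel
mean9 = [[1 / 9.0] * 3 for _ in range(3)]         # local-average kernel

img = [[0.2, 0.8], [0.4, 0.6]]   # toy single-channel "hazy" input
feat = conv3x3(img, mean9)       # layer 1: local average features
t_map = conv3x3(feat, identity)  # layer 2: pass-through "head"
```

Each layer's output feeds the next, exactly the data flow the text describes; a trained network would replace the hand-picked kernels with dozens of learned filters per layer.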
Because of the synthetic training data and imperfections in deep-learning architectures, these trained algorithms generalise only to limited classes of underwater images. Under the same restoration environment, deep-learning methods also consume more time than the physical and non-physical models.
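To make the stacked-layer idea concrete, here is a minimal numpy sketch of two convolution layers with ReLU activations (the weights are random and untrained, purely to illustrate how data flows between successive layers; `conv2d` is a plain valid-mode sliding-window operation, not an optimised deep-learning kernel):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode sliding-window filtering of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation applied after each layer."""
    return np.maximum(x, 0.0)

# Two stacked layers: each one filters the previous layer's output.
rng = np.random.default_rng(0)
img = rng.random((16, 16))                # input layer (a 16 x 16 image)
k1 = rng.normal(scale=0.1, size=(3, 3))   # hidden-layer kernel
k2 = rng.normal(scale=0.1, size=(3, 3))   # output-layer kernel
feat = relu(conv2d(img, k1))              # hidden layer output, 14 x 14
out = relu(conv2d(feat, k2))              # output layer, 12 x 12
```

Real restoration networks such as UWCNN stack many such layers over three colour channels and learn the kernels from synthetic underwater data rather than drawing them at random.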

Generative adversarial network (GAN)
A GAN is trained to translate pictures from one domain K to another domain Z, producing a paired dataset [53]. Figure 5 shows the GAN architecture. When the two domains are chosen as collections of degraded and non-degraded undersea images, a picture captured undersea together with its ground truth can be obtained from the paired datasets generated via the GAN [53]. Fabbri et al. [95] developed a GAN-based technique for producing datasets for picture recovery that can be applied to any visually guided underwater robot. The technique produces visually pleasing results and improves the accuracy of a diver-tracking algorithm. Li et al. [96] proposed WaterGAN, a modified GAN used for recovering monocular images. The technique forms a pipeline with WaterGAN as the initial component, taking a depth map and in-air pictures as input and producing corresponding synthetic undersea pictures as output [53]. The collection of datasets with corresponding depth data, in-air colour, and synthetic subaquatic colour is used as ground truth for colour and depth to train the network meant for colour correction. Finally, for recovering an image, a colour-correction network is put forward that takes raw, unlabelled subaquatic pictures as input and produces recovered pictures as output. This colour-correction system consists of two components: a depth-estimation network and an image-recovery network. The depth-estimation network produces a coarse relative-scene depth map from the down-sampled synthetic subaquatic picture. The subaquatic picture and the computed relative-scene depth map are then used as inputs to the colour-correction network to recover the subaquatic image.

Some other techniques
According to the undersea image-formation representation, an underwater picture can be expressed as a combination of a direct component and a scattering component. Xiaopeng Liu et al. [97] proposed sparse non-negative matrix factorisation (SNMF) to eliminate the effect of scattering on a subaquatic image by separating the direct-light energy from the scattering-light energy. Using this technique, the input subaquatic picture is split into two additive parts, the direct component and the scattering component; the scattering component is relatively less sparse, and removing it after factorisation yields the restored underwater image. Moreover, the method is simple and easy to implement. Xinghua Li et al. [98] proposed a non-negative matrix factorisation and error-correction technique (S-NMF-EC). The first step is to acquire a cloud-free subaquatic picture from a reference picture and some low-resolution pictures. The second step removes the veil of cloud in contaminated subaquatic pictures using the fused, cloud-free reference picture built on NMF. Lastly, the final output is enhanced using error correction.

IMAGE QUALITY ASSESSMENT (IQA)
IQA for underwater and hazy imaging is a tedious task because, unlike in many other image-processing areas, ground truth is not available. In this study, we use the assessment metrics for picture/video quality evaluation defined in [99][100][101][102]. Both quantitative and qualitative approaches are used to compare many state-of-the-art techniques and, in addition, five well-performing assessment metrics are used for objective evaluation [103]. Picture quality is often influenced by the optical performance of the imaging apparatus, instrument noise, prevailing conditions [104], and the operations applied to an image. Also, as previously mentioned, absorption of light energy leads to loss of colour, forward scattering leads to blurring, and backward scattering leads to a foggy appearance of the picture [105]. To gauge the picture quality level, IQA can be used, which is classified into objective quantitative assessment (OQA) and subjective qualitative assessment (SQA) [21].
Subjective evaluation rates the visual quality of an image and depends heavily on the human visual system (HVS). This quality-assessment method produces a collection of data, which is then rated by a group of observers. Representations of natural pictures incorporate attributes of the HVS [106][107][108].
Objective evaluation uses a numerical representation that depends on the HVS to compute a quality index. If the models used are accurate, objective evaluation is more efficient than subjective evaluation. Objective evaluation uses three types of methods: non-reference (NR), full-reference (FR), and reduced-reference (RR). FR and RR quality-assessment methods use, or partly use, an undistorted, non-degraded image as a reference. As the underwater environment is quite complicated, a high-quality image cannot be acquired for reference; in such an environment, objects photographed on land are sometimes placed in the subaquatic scene to serve as references. Furthermore, because of the complex conditions in subaquatic scenes, assessment metrics for subaquatic pictures are quite few. For evaluating the performance of various picture-restoration algorithms, we select several NR metrics for underwater images, taking into account details such as the amount of information, resemblance to the original non-degraded image, sharpness, global contrast, saturation, and brightness.
Entropy represents the unpredictability of data; applied to an image, it represents the richness of information. If the contrast in an image is evenly distributed, the image has higher entropy and is visually pleasing. If an image has poor contrast, it has less unpredictability and looks hazy.
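The entropy measure can be sketched as a grey-level histogram entropy in bits (`image_entropy` is an illustrative helper, not a function from the reviewed papers):

```python
import numpy as np

def image_entropy(gray, levels=256):
    """Shannon entropy (bits) of a grey-level image: H = -sum p*log2(p)."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                 # empty bins contribute 0 (0*log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

flat = np.full((8, 8), 128)              # constant image: no uncertainty
varied = np.arange(256).reshape(16, 16)  # all 256 levels equally likely
print(image_entropy(flat))    # 0.0
print(image_entropy(varied))  # 8.0
```

A constant (poor-contrast) image scores 0 bits, while an image using all grey levels uniformly reaches the 8-bit maximum, matching the intuition above.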
The natural image quality evaluator (NIQE) [109] has been developed according to the sensitivity of human sight to high-contrast areas in pictures. It builds a multivariate Gaussian (MVG) representation of such salient regions [21]. A smaller NIQE value indicates better perceptual quality, whereas a larger value indicates a worse visual appearance.
The blind/referenceless image spatial quality evaluator (BRISQUE) [110] computes the naturalness of an image based on measured deviations from a natural-picture representation. This model relies on statistics of natural views and computes the loss of naturalness that occurs due to distortion. It ranges from 0 to 100, and a greater value implies a worse picture. Yang and Sowmya [111] investigated the relation between colour, image sharpness, and subjective image-quality perception, and put forward the underwater colour image quality evaluation (UCIQE) metric built on these attributes.

Quality assessment based upon ocular parameters of prior-driven techniques
The ocular artefacts lead to non-linear distortions that may adversely influence vision-based tasks carried out undersea for scientific research, such as tracking, observing subaquatic scenes, and categorising pictures [113,114].

Comparison of BL estimation algorithms
BL estimation decides the visual effect and colour tone of recovered images, and several methods for computing TMs also rely heavily on the estimated BL. Therefore, comparing different BL-estimation models is necessary for underwater image recovery. The following section compares the performance of various BL-estimation algorithms.
To compare the BL-energy computation techniques built on various priors, we choose a subaquatic image, as shown in Figure 6A(a). The ground-truth BL of the test picture, shown in Figure 6A(b), was obtained from the annotations of some 10-15 observers, following the rule of choosing the spot farthest from the camera aperture and the light energy illuminating the background [21,115].
Figures 6B, C and D show the estimation of BL using various prior-based algorithms. Figures 6B(a) to (c) show the results of DCP techniques and Figure 6B(d) shows the result of UDCP; that is, Figure 6B shows the estimation of BL using the dark channel and UDCP. Some studies [62,68,70] reveal that BL-estimation algorithms based on the DCP do not blindly choose the brightest pixel as the BL; however, the DCP ignores the optical imaging characteristics of the subaquatic environment, where there is considerable variation between the R wavelength and the BG wavelengths, causing DCP-based BL estimation to fail. The performance of UDCP in Figure 6B(d) is quite close to that of DCP. Figures 6C(a) to (c) show the results of MIP. Because MIP considers the maximum difference between the R channel and the BG channels in the background area, its result is close to the ground-truth BL. Figure 6D shows the performance of some other prior-driven techniques, namely RCP, IBLA and ULAP: Figure 6D(a) shows the RCP BL computation, Figure 6D(b) the result of IBLA, and Figure 6D(c) the ULAP-based BL estimation. The R-channel prior takes the dark area in the R channel as the atmospheric light; hence its calculation of BL is precise, as indicated in Figure 6D(a). The ULAP, built on the observation that the difference between the BG wavelengths and the R wavelength is strongly related to scene depth, selects faraway spots of the original picture as the BL; the result in Figure 6D(c) is almost the same as the ground truth.
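As a concrete example of one of the compared families, here is a hedged numpy sketch of DCP-style BL estimation on a synthetic scene (the patch size, candidate fraction, and brightest-candidate tie-breaking rule are illustrative choices; published DCP variants differ in these details):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Minimum over colour channels, then over a local patch (DCP)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.zeros_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_bl(img, patch=3, frac=0.1):
    """Among pixels whose dark-channel values are in the top `frac`,
    return the colour of the brightest one as the background light."""
    dc = dark_channel(img, patch)
    n = max(1, int(frac * dc.size))
    coords = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    candidates = img[coords]
    return candidates[candidates.sum(axis=1).argmax()]

# Synthetic scene: dark foreground plus a bright hazy background region.
scene = np.full((10, 10, 3), 0.1)
scene[:4, :4] = [0.7, 0.8, 0.9]   # hazy background region
A = estimate_bl(scene)
print(A)  # prints [0.7 0.8 0.9]
```

On this toy scene the estimator locks onto the hazy region rather than the globally brightest single pixel, which is the behaviour the studies above describe.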
To carry out a quantitative assessment of the BL-estimation results built upon the various prior-driven algorithms, the BLs of about 300 pictures captured underwater were calculated using these methods. Figure 7 compares the accuracy of the techniques used for the computation of BL based on the various prior-driven methods.

Comparisons of TM estimation algorithms
On comparing the performance of different prior-based TM-estimation algorithms, the conclusion drawn is that the nearer a scene point is to the camera, the greater its TM value and the whiter it appears in the TM; conversely, the farther away it is, the darker its TM appears. This criterion is used to assess the accuracy of TM estimation with the various prior-driven techniques, and the method of [85] can be used to refine the estimated TM. Figure 8A shows the original image used for calculating the TM. Figures 8B(a) to (c) show that TM computation based on the dark-channel algorithm does not perform well. The issue arises from the inaccurate computation of BL: the DCP method selects the brightest spot as the BL, which can be a light spot or a bright object, and using this BL leads to an erroneous depth map. Figure 8C(a) shows the result of the UDCP method. It produces inaccurate TMs of the underwater image because the TM of the R wavelength is computed from the local maxima of the R channel, whereas the TMs of the BG channels are computed using UDCP [21]. Figures 8C(b) and (c) also show the TMs computed from the R channel using UDCP.
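The near-white/far-dark criterion above can be sketched with the standard DCP transmission estimate t = 1 − ω · darkchannel(I / A), where ω, the patch size, and the lower clip value below are illustrative defaults, not values prescribed by the reviewed papers:

```python
import numpy as np

def transmission_map(img, A, omega=0.95, patch=3):
    """DCP-style transmission estimate t = 1 - omega * darkchannel(I / A).
    Near, haze-free regions get t close to 1 (white in the TM); distant,
    hazy regions get small t (dark)."""
    norm = img / np.asarray(A)            # normalise by background light
    mins = norm.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    dc = np.zeros_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            dc[i, j] = padded[i:i + patch, j:j + patch].min()
    return np.clip(1.0 - omega * dc, 0.05, 1.0)

A = [0.8, 0.8, 0.8]
img = np.zeros((8, 8, 3))
img[:, 4:] = A        # right half is pure haze (equals background light)
t = transmission_map(img, A)
print(t[0, 0], t[0, 7])
```

The haze-free left half receives t = 1 (near, white), while the pure-haze right half is clipped to the floor value 0.05 (far, dark), exactly the visual behaviour used to judge TM accuracy.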

Overall performance evaluation
Objective and subjective evaluations are used for the assessment of restored images.

Subjective evaluation
Subjective assessment gives an idea of the visual quality of a picture and is mainly based on the HVS.
The above-mentioned prior-based and network-based techniques for image restoration can essentially only dehaze a hazy subaquatic image; they cannot efficiently perform colour restoration for different subaquatic pictures. However, a colour-improvement step can be performed after image restoration to enhance the chroma, colour, and sharpness of the final pictures. Figure 9A shows the original degraded image used for subjective assessment. Figures 9B(a) to (d) compare the restored results of various prior-driven algorithms; such results are used for subjective evaluation of these prior-driven approaches. Figure 9B(d) indicates that the best restoration performance is shown by IBLA, which can produce accurate TMs.

FIGURE 9B Comparison of restored images

Objective evaluation
As reference subaquatic images are unavailable, this review chooses five types of no-reference quality metrics to gauge the colour fidelity, entropy, distortion, and contrast of undersea pictures [21]. The five parameters are NIQE, UIQM, BRISQUE, entropy, and UCIQE. The calculated values of these quantities for different algorithms are shown in Table 1, and the data in Table 1 are also represented graphically.

(i) Entropy: It is computed as the sum, over all grey levels, of the probability multiplied by the log of the inverse probability.
Equation (18) below is used to calculate entropy, that is, to measure the picture information:

H(Y) = −Σ_{y=1}^{k} p(y) log p(y) (18)

where H(Y) represents entropy, p(y) denotes the probability distribution function of the picture at grey level y, and k is the number of grey levels.

UCIQE: UCIQE in the CIELab space is given by:

UCIQE = c1 × σc + c2 × conl + c3 × μs

where σc is the chroma standard deviation, c1, c2, c3 are weighting coefficients, μs is the average saturation, and conl is the luminance contrast.
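A sketch of the UCIQE computation directly from CIELab channels follows. The default weights are the values commonly quoted for Yang and Sowmya's metric, and the percentile-based luminance contrast and chroma/luminance saturation proxy are simplifying assumptions of this sketch, not the exact published formulation:

```python
import numpy as np

def uciqe(L, a, b, c=(0.4680, 0.2745, 0.2576)):
    """UCIQE = c1*sigma_chroma + c2*luminance_contrast + c3*mean_saturation,
    computed from CIELab channels with L assumed in [0, 100]."""
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                         # chroma standard deviation
    con_l = (np.percentile(L, 99) - np.percentile(L, 1)) / 100.0
    sat = chroma / (L + 1e-9)                      # per-pixel saturation proxy
    mu_s = sat.mean()
    return c[0] * sigma_c + c[1] * con_l + c[2] * mu_s

rng = np.random.default_rng(1)
L = rng.uniform(10, 90, (32, 32))
a = rng.uniform(-40, 40, (32, 32))
b = rng.uniform(-40, 40, (32, 32))
score = uciqe(L, a, b)
```

Higher scores reward colourful, well-contrasted, saturated images, which is why UCIQE is popular for ranking underwater restoration outputs.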
(ii) NIQE: It is calculated by extracting natural scene statistics (NSS) features from p × p patches of the picture to be assessed, fitting these with an MVG representation, and then comparing this MVG fit with the natural MVG representation. A sharpness criterion is not applied to the patches because the decrease of sharpness in degraded pictures is itself a sign of degradation.
Lastly, the picture quality of the distorted picture is given by the distance between the MVG fit and the NSS model:

D(ν1, ν2, Σ1, Σ2) = sqrt( (ν1 − ν2)^T ((Σ1 + Σ2)/2)^(−1) (ν1 − ν2) )

where ν1, ν2 denote the mean vectors and Σ1, Σ2 the covariance matrices of the natural MVG representation and the MVG representation of the degraded picture.
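The distance above can be computed directly, as in this small numpy sketch (in the full NIQE pipeline the MVG parameters come from fitting NSS features, which is omitted here):

```python
import numpy as np

def niqe_distance(mu1, cov1, mu2, cov2):
    """Distance between the natural MVG model (mu1, cov1) and the MVG fit
    of the distorted picture (mu2, cov2), as used by NIQE."""
    d = mu1 - mu2
    pooled = (cov1 + cov2) / 2.0           # average the two covariances
    return float(np.sqrt(d @ np.linalg.inv(pooled) @ d))

mu = np.array([0.0, 1.0])
cov = np.eye(2)
print(niqe_distance(mu, cov, mu, cov))               # identical models -> 0.0
print(niqe_distance(mu, cov, mu + [3.0, 4.0], cov))  # shifted mean -> 5.0
```

Identical models score 0 (a perfectly natural image), and the score grows as the distorted image's statistics drift from the natural model.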
UICM: Two colour components related to chrominance, YB and RG, are used in UICM. The overall UICM metric for measuring undersea picture colourfulness is given by:

UICM = c1 √(μ²α,RG + μ²α,YB) + c2 √(σ²α,RG + σ²α,YB)

where μα,RG, μα,YB and σ²α,RG, σ²α,YB are the asymmetric alpha-trimmed means and variances of the two chrominance components, respectively.

UISM: It is calculated as:

UISM = Σ_{c=1}^{3} λc EME(edge map of colour component c)

where the enhancement measure estimation (EME), suited to pictures with an even background, computes the sharpness of edges, and the coefficients λc combine the EME measures of the three colour components.

UIConM: Poor contrast in subaquatic pictures is mainly due to backward scattering. Contrast is calculated by applying log(AMEE) to the intensity image, in which the picture is divided into k1 × k2 blocks, ⊕, ⊗ and Θ are the parameterised logarithmic image processing (PLIP) operations, and I(max, k, l)/I(min, k, l) denotes the relative contrast ratio in each block.

UIQM is then computed as the weighted sum:

UIQM = c1 × UICM + c2 × UISM + c3 × UIConM

where the selection of the three parameters c1, c2 and c3 depends upon the application.
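The alpha-trimmed statistics and the final weighted combination can be sketched as follows. The default weights are the values commonly quoted in the literature for UIQM and are treated here as assumptions, as is the `alpha_trimmed_mean` helper name:

```python
import numpy as np

def alpha_trimmed_mean(x, alpha_l=0.1, alpha_r=0.1):
    """Asymmetric alpha-trimmed mean: discard the lowest alpha_l and the
    highest alpha_r fractions of the sorted samples, average the rest."""
    s = np.sort(np.ravel(x))
    n = s.size
    lo, hi = int(alpha_l * n), n - int(alpha_r * n)
    return float(s[lo:hi].mean())

def uiqm(uicm, uism, uiconm, c1=0.0282, c2=0.2953, c3=3.5753):
    """Weighted combination UIQM = c1*UICM + c2*UISM + c3*UIConM."""
    return c1 * uicm + c2 * uism + c3 * uiconm

print(alpha_trimmed_mean(range(10)))  # drops 0 and 9 -> mean(1..8) = 4.5
print(uiqm(1.0, 1.0, 1.0))            # approximately 3.8988
```

The trimming makes the chrominance statistics robust to the extreme pixel values common in backscatter-heavy images, while the weights trade off colourfulness, sharpness, and contrast per application.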

ADVANTAGES AND DISADVANTAGES OF CURRENT TECHNIQUES
Table 2 presents some of the current state-of-the-art underwater dehazing schemes. In this section, we list the major advantages and disadvantages of the schemes listed in Table 2.

6.1
Underwater image dehazing and denoising via curvature variation regularisation
As proposed by Guojia Hou et al., this technique not only dehazes a subaquatic image but also denoises it. It produces satisfactory results and is capable of enhancing the visual quality of an image as well.
The shortcoming of this technique is that it uses the subaquatic DCP and RCP to estimate the atmospheric light and TM. Prior-driven approaches such as DCP and RCP are error-prone, and a minor inaccuracy in the prior data can lead to an erroneous final result.
Underwater light attenuation prior (ULAP)
This approach yields output with a greater value of UCIQE than DCP. Moreover, BL estimation carried out using ULAP is quite accurate and close to the ground-truth BL.
The demerit of this method is that its objective quality-assessment metrics do not reveal pleasing results, as depicted in Table 1. Compared to other methods, pictures restored using ULAP show a greater value of BRISQUE, which should ideally be minimal, and smaller values of the other parameters, which should ideally be higher.

NMF and error correction (S-NMF-EC)
This method is robust and effective compared to SNMF. Its shortcoming is that it is not capable of reconstructing sudden changes between target and reference pictures; data- or network-driven techniques can be utilised for this issue.

Network-based techniques (GANs and CNNs)
Prior-based approaches make large estimation errors when their prior assumptions are inaccurate. To address this issue, network-driven techniques are used, which process and operate on complex data using trained datasets.
The performance of data-driven techniques greatly relies on training datasets. However, there is scarcity of benchmark datasets.

CONCLUSION AND FUTURE DIRECTIONS
This study discusses different prior-driven and network-driven algorithms that can help a researcher gain an overview and know-how of the underwater environment. We initially introduced the behaviour of light in a subaquatic environment, namely light absorption and light scattering. Then the fundamental principles of the subaquatic imaging model were discussed. We then discussed image-restoration techniques under two categories: hardware- and software-driven techniques (prior- and network-based approaches). Finally, we provided a comparison of various restoration algorithms. In short, we offered an outline of advancements and challenges in the field of single subaquatic image restoration that can aid researchers in this particular area. Based on the study of various state-of-the-art algorithms for underwater degraded images, we conclude that processing the degraded images is quite challenging for real-time applications. Besides, the flexibility and robustness of underwater image-restoration techniques still need to improve.
The literature reveals that, to date, researchers struggle to obtain high-quality subaquatic pictures for different purposes such as automation, salvage expeditions, human-made structure surveys, ecological analysis, sea-organism tracking, and real-time navigation. Therefore, subsequent work in this area should concentrate on the following facets:
(i) Enhancing robustness and computational efficiency for real-time scenarios: Approaches built on the IFM can recover true scenes but consume immense time computing the two ocular parameters. Thus, developing computationally efficient image-restoration techniques for various types of undersea applications is a prime area for future research.
(ii) Forming effective quality-evaluation metrics: Although several picture quality-assessment metrics have been put forward, only a handful of them are suitable for undersea pictures. Further study needs to be carried out to improve referenceless assessment representations.
(iii) Forming an adequate dataset of subaquatic images: Owing to the scarcity of benchmark datasets, the precision of TMs and BLs computed using various techniques, and the effectiveness of these techniques, are observed and compared via subjective evaluation. Hence, the construction of a public undersea picture benchmark dataset is mandatory, and work needs to be carried out in this direction. Table 2 shows some current subaquatic image-restoration techniques and Figure 10 graphically compares some prior-driven methods.