Study of statistical methods for texture analysis and their modern evolutions

Texture analysis is widely performed today because texture is an intrinsic property of a surface. It is widely used in image processing, remote sensing, biomedical analysis, document processing, and related fields. In this investigation, we present a detailed study of four different methodologies that have been developed for texture classification. These methodologies include the gray level cooccurrence matrix (GLCM), local binary pattern (LBP), autocorrelation function (ACF), and histogram pattern. The detailed investigation of these methods suggests that GLCM is best suited for surface texture analysis, land-use/landcover classification, and satellite data processing. LBP is widely used to analyze the facial features of an individual. The ACF is used to identify the regularity of a textured surface. Finally, through histograms, one can visually identify the changes that develop while analyzing the texture of image data. Furthermore, we present a brief summary of newly developed texture classification techniques such as the binary Gabor pattern, local spiking pattern, SRITCSD method, scale selection, and deep perception models for texture analysis. Some benchmark texture datasets used in image processing are also discussed in this work.


INTRODUCTION
Texture is an important descriptor of an image, as it uses the spatial arrangement of gray values to analyze an image. Statistical methods describe texture efficiently and are considered among the earliest methods for texture analysis of an image. 1 Based on the number of pixels involved, statistical methods can be further classified into first-order statistics, second-order statistics, and higher-order statistics. 2 First-order statistics do not provide sufficient information from the point of view of human visual perception. First-order statistics include the mean, variance, SD, skewness, and kurtosis. 3 The histogram is also considered a first-order statistic as it uses the central mean value. 4 Second-order statistics use the neighborhood relationship between a pixel of interest (POI) and its neighboring pixels. Human vision is quite sensitive to second-order statistics, as they provide the perceptual information that first-order statistics lack. 5 Second-order statistics include the gray level cooccurrence matrix (GLCM), local binary pattern (LBP), and autocorrelation function (ACF); these methods are popularly used for describing texture. 6 Higher-order statistics do not provide much information from a spectral or spatial point of view, and thus they are not considered for image interpretation. 7 Today, with the advancement of satellite remote sensing, biomedical research, document processing, and automated inspection, sufficient information about various spatial parameters can be obtained through texture classification. 8 In this work, we present a detailed study of the methodology, application areas, and evolution of the statistical methods GLCM, LBP, ACF, and histograms. Through this study, we conclude that these methods have proved very useful for the statistical analysis of texture. They are widely used for texture quantification and change detection. Among all the discussed techniques, we found GLCM to be the most promising and efficient technique for texture classification. The main limitations of the GLCM approach are its high matrix dimensionality and the high correlation between Haralick features. Nowadays, various new techniques for texture classification have also been developed, such as the binary Gabor pattern (BGP), local spiking pattern (LSP), SRITCSD method, scale selection, deep perception models for texture analysis, the energy variation method, artificial neural networks (ANNs), deep learning through an ANN, and so on. These newly developed methods can be fused with the existing methods to gather more information about image texture.
The objectives of this work are summarized as follows.
• Familiarizing the scientific community and new researchers with texture classification techniques used in the past and present.
• Spreading awareness of the specific uses of each classification technique, for example, GLCM is used broadly for remote sensing applications while LBP is used for face recognition.
• Developing ideas for new techniques formed by combining two or more existing techniques.
• Providing information regarding various datasets available for image processing, computer vision, machine learning, and remote sensing applications. This work also surveys texture classification techniques widely used since the early 1980s along with newer techniques developed up to 2019. It provides useful information to young researchers, scientists, and technocrats about texture classification techniques, from the development of GLCM to the evolution of deep perception models for texture classification.
The organization of this article is as follows. Section 2 discusses the methodology, advancements, and application areas of GLCM, LBP, ACF, and histograms. Section 3 presents information on newly emerged techniques for texture classification. Section 4 presents information on the various datasets used in image processing and satellite remote sensing applications. Section 5 presents a discussion of the outcomes of this study. Finally, Section 6 presents concluding remarks about the proposed work.

Gray level cooccurrence matrix
GLCM was introduced by Haralick, who proposed a set of 14 different texture features for texture classification. 9 Afterward, Gotlieb grouped these 14 features into four sets, that is, visual texture features, correlation measures, entropy measures, and statistical measures. 10 GLCM is widely used in Earth data processing, 11 fabric defect detection, 12 soil moisture estimation, 13 and so on; thus this technique has gained a lot of popularity in remote sensing applications. While calculating the GLCM of an input image, two factors are considered most important, that is, the distance "d" and the orientation angle "θ". Figure 1A represents an input image of dimension 4 × 4, Figure 1B represents the GLCM of the input image, and Figure 1C represents the normalized form of the GLCM. The spatial relationship between pixels is defined by an array of offsets, where d is the distance and θ is the orientation angle between the POI and the neighboring pixels. Figure 2 shows the orientation directions for the POI. The GLCM of the input image is usually calculated along four different directions and at four different distances from the POI. The directions are the orientation angles 0°, 45°, 90°, and 135°, and the distances are d = 1, d = 2, d = 3, and d = 4, respectively.
The combinations of orientation offsets and distances are shown in Table 1.
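As an illustration of how the distance and orientation parameters enter the computation, the following is a minimal sketch using scikit-image. The function names assume scikit-image version 0.19 or later (earlier releases spell them greycomatrix/greycoprops), and the toy image is illustrative rather than the exact image of Figure 1A.

```python
# Minimal sketch: build a GLCM for several distances and orientations and
# read off a few standard properties. Toy image and parameters are examples.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# 4 x 4 toy image with gray levels 0..3, in the spirit of Figure 1A
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=np.uint8)

# Offsets: distances d = 1..4 and orientations 0°, 45°, 90°, 135°
distances = [1, 2, 3, 4]
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

glcm = graycomatrix(image, distances=distances, angles=angles,
                    levels=4, symmetric=True, normed=True)
# glcm has shape (levels, levels, len(distances), len(angles))

for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop)[0, 0])   # values for d = 1, angle = 0°
```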

Related work
Haralick et al 9 developed GLCM and a set of 14 different features for classifying the texture of an image. They expressed the spatial relationship of image pixels as a statistical function of gray levels, which is used as a quantitative measure of image texture. Julesz 14 was the first to use the spatial relationship of gray levels and their cooccurrences in statistical form for texture description. The investigation of Sutton and Hall 15 is also based on the statistics of gray level pairs, but Deutsch and Belknap 16 presented a more elaborate version of GLCM which combines the information in 2 × 2 matrices using different separations and orientations between gray level pairs. GLCM is calculated from the frequency with which a pixel of one gray level is followed by a pixel of the same or another gray level. The GLCM created from an input image of dimension 4 × 4, displaying the frequency of occurrence of gray level pairs in matrix form, is shown in Figure 1B. The normalized form of the GLCM is shown in Figure 1C. The mathematical notation of the statistical features obtained from the Haralick GLCM is expressed in Equations (1) to (14); more details about GLCM can be obtained from References 17 and 18. Here p(i, j) denotes the relative frequency with which gray tone "i" is followed by gray tone "j" at distance "d" in an image with x rows and y columns.
1. Angular second moment (ASM): $f_1=\sum_i\sum_j p(i,j)^2$.
2. Contrast: $f_2=\sum_{n=0}^{K_g-1} n^2 \sum_{|i-j|=n} p(i,j)$, where $n=|i-j|$ represents the difference of gray level between corresponding pixels and $K_g$ is the number of distinct gray levels in the quantized image.
3. Correlation: $f_3=\dfrac{\sum_i\sum_j (ij)\,p(i,j)-\mu_x\mu_y}{\sigma_x\sigma_y}$, where $\mu_x,\sigma_x$ and $\mu_y,\sigma_y$ are the mean and the SD of the rows and columns, respectively.
4. Sum of squares (variance): $f_4=\sum_i\sum_j (i-\mu)^2 p(i,j)$, where $\mu$ is the mean of the matrix.
5. Inverse difference moment (IDM): $f_5=\sum_i\sum_j \dfrac{p(i,j)}{1+(i-j)^2}$.
6. Sum average: $f_6=\sum_{k=2}^{2K_g} k\,p_{x+y}(k)$, where $p_{x+y}(k)=\sum_{i+j=k} p(i,j)$.
7. Sum variance: $f_7=\sum_{k=2}^{2K_g} (k-f_8)^2\,p_{x+y}(k)$.
8. Sum entropy: $f_8=-\sum_{k=2}^{2K_g} p_{x+y}(k)\log p_{x+y}(k)$.
9. Entropy: $f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$.
10. Difference variance: $f_{10}=$ variance of $p_{x-y}$, where $p_{x-y}(k)=\sum_{|i-j|=k} p(i,j)$.
11. Difference entropy: $f_{11}=-\sum_{k=0}^{K_g-1} p_{x-y}(k)\log p_{x-y}(k)$.
12, 13. Information measures of correlation: $f_{12}=\dfrac{HXY-HXY1}{\max\{HX,HY\}}$ and $f_{13}=\sqrt{1-e^{-2(HXY2-HXY)}}$, where HX and HY are the entropies of $p_x$ and $p_y$, $HXY=f_9$, $HXY1=-\sum_i\sum_j p(i,j)\log\{p_x(i)p_y(j)\}$, and $HXY2=-\sum_i\sum_j p_x(i)p_y(j)\log\{p_x(i)p_y(j)\}$.
14. Maximal correlation coefficient: $f_{14}=(\text{second largest eigenvalue of } G)^{1/2}$, where $G(i,j)=\sum_k \dfrac{p(i,k)\,p(j,k)}{p_x(i)\,p_y(k)}$ and $k$ runs over the gray levels $1,2,\ldots,K_g$.
Weszka et al 19 extended Haralick's concepts on the cooccurrence matrix. Haralick et al 9 developed GLCM as a function of distance and angle; Weszka showed that features derived from the GLCM give improved results when a small distance is used between the gray level pairs. Zucker and Terzopoulos 20 explained the procedure for selecting the best distance and angle for the cooccurrence matrix. Davis et al 21 proposed a new approach beyond GLCM called the generalized cooccurrence matrix (GCM). Features calculated from the GCM give more accurate results than GLCM for texture analysis of an image; however, it never became as popular as GLCM. By 1980, GLCM-derived statistical features had begun to be used in analyzing aerial, terrain, microscopic, and satellite images. Haralick et al 9 used features calculated from GLCM to identify eight terrain classes in aerial images with 82% accuracy. Chen and Pavlidis 22 state that GLCM can be combined with a "split and merge algorithm" for textural segmentation of an image. In the early 1980s, many competing spatial statistical approaches came into existence, but GLCM proved successful in maintaining reliable application in various fields and even found applicability in the field of remote sensing. Hsu 23 introduced another statistical method referred to as the simple statistical transformation (SST), which is computationally simpler than GLCM. Jensen 24 compared the performance of SST and GLCM by classifying six land-use types. Wang and He 25 introduced another new statistical approach for extracting spatial information referred to as the texture spectrum (TS), developed mainly to overcome the limitations of GLCM. Gong et al 26 made a comparison between three spatial feature extraction methods, that is, GLCM, SST, and TS, for land-use classification of high-resolution visible (HRV) SPOT satellite multispectral data. The result of this comparison indicates that GLCM produces better classification results than the other methods.
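To complement the scikit-image properties shown earlier, the following is a minimal numpy sketch that computes a few of the features defined in Equations (1) to (14) (ASM, contrast, IDM, and entropy) directly from a normalized cooccurrence matrix. It follows the standard Haralick formulas and is intended only as an illustration.

```python
import numpy as np

def haralick_subset(P):
    """Compute a few Haralick-style features from a normalized GLCM `P`
    (P.sum() == 1). A sketch of the textbook formulas, not a full library."""
    levels = P.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    eps = 1e-12                                  # avoid log(0)

    asm = np.sum(P ** 2)                         # angular second moment
    contrast = np.sum(((i - j) ** 2) * P)        # contrast
    idm = np.sum(P / (1.0 + (i - j) ** 2))       # inverse difference moment
    entropy = -np.sum(P * np.log(P + eps))       # entropy

    return {"ASM": asm, "contrast": contrast, "IDM": idm, "entropy": entropy}
```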
Analyzing sea ice is a critical task from scientific and operational perspectives. The ability of GLCM texture parameters to distinguish ice into several categories has been examined by a number of researchers. 27 The use of GLCM parameters for classifying sea ice through radar and SAR images started in the late 1980s. Holmes et al 28 used two features, entropy and inertia, to classify sea ice images. Hirose et al 29 used GLCM features for texture description of multiyear and new ice. Furthermore, in Reference 27 features calculated from GLCM are combined with gray tone for the classification of sea ice from SAR images. Gray tone and GLCM each have the limitation that they cannot be used alone to categorize ice. Haralick et al 9 developed 14 features from GLCM to describe texture, but nowadays only a few of them are used. Features derived from GLCM have been tested in different domains to prove their usefulness, and many fusion experiments with GLCM have also been carried out in order to achieve improved versions of the original GLCM. Gotlieb and Kreyszig 10 concluded that composite descriptors formed by combining Haralick's 14 features with each other perform better than a single descriptor. Composite descriptors of more than fourth order have no additional usefulness in analyzing texture. Kushwaha et al 30 analyzed burnt forest areas with the assistance of GLCM features. They showed that combining features such as tone, entropy, and IDM gives better results for discriminating forest areas. Baraldi and Parmiggiani 31 analyzed the functional differences among energy, contrast, variance, correlation, entropy, and IDM for texture analysis. Their research also suggested that energy and contrast are the most suitable for texture analysis. Terzopoulos and Zucker 32 combined the features calculated from GLCM and GCM to increase performance for texture analysis. Their combined system performed better (69%-90%) for detecting osteogenesis imperfecta (a disorder of the human body, assessed through comparison of visual image features), while Kovalev and Petrou 33 proposed a spatial multidimensional GLCM-based approach for object recognition and matching.
Three different implementations of GLCM are developed and presented in Reference 34, namely, the mean displacement matrix (MDM), the mean displacement orientation matrix (MDOM), and the χ²-optimal displacement and mean orientation matrix (ODMO). The comparison between these three techniques concluded that MDOM gives better results than ODMO and MDM. One important conclusion derived here is that the distance factor "d" is more important than the orientation factor "θ" while calculating GLCM. Texture provides a quantitative assessment of spatial information, which is widely used in satellite remote sensing. 35 The successful classification of radar images based on GLCM features is presented in Reference 36. Sea ice was classified by Holmes et al 28 and urban areas were classified by Baraldi and Parmiggiani 31 using GLCM. Furthermore, Marceau et al 37 used GLCM features for land-use/landcover classification through HRV SPOT imagery. Classification of SAR sea ice images using GLCM is described by Barber and LeDrew, 38 while Shokr 27 fused the features of GLCM and gray tone for improved classification of sea ice SAR images. Kushwaha et al 30 used GLCM for the classification of forest data obtained from the IRS LISS-II sensor. 34 In wavelet transforms, a multiresolution technique is used to overcome the limitations of the original GLCM. GLCM features derived from a wavelet-decomposed image are referred to as wavelet cooccurrence features; they have proved their superiority over single-resolution techniques such as TS, the original GLCM, the local linear transform, and so on. Arivazhagan et al 39 highlight wavelet cooccurrence features to discriminate the textures of images. Wavelet cooccurrence features can also be used for the texture segmentation of images. Many strategies have been presented to extend the original GLCM to multiple scales, and the extended GLCM succeeds in achieving improved results over other descriptors; in this respect, Hu and Zhao 40,45 proposed such multiscale extensions. Texture is an important parameter for analyzing medical images. 46 The simplicity of GLCM-based texture inspection motivates researchers to use it in diagnosis from medical images. Texture analysis methods can extract information from magnetic resonance images (MRI) that is not accessible visually. In Reference 47 GLCM is used to analyze the texture pattern in brain images; this technique is applied to analyze brain images of patients suffering from Alzheimer's disease. However, the application of GLCM is not limited to MRI images; it has also proved helpful in the detection of other health-related conditions. Castellano et al 48 state that GLCM-based texture features are useful in the analysis of medical images. GLCM-derived texture features can differentiate mass and nonmass tissue in digital mammograms. 49 Features calculated from the GLCM constructed at 0° are best suited to discriminate mass tissue. Nowadays GLCM is combined with other methods to extend its scope and features; for example, the Gabor filter and GLCM have been combined to obtain improved quantification of texture features. 50 Arivazhagan et al 51 derived curvelet statistical features and curvelet cooccurrence features from the subbands of a curvelet decomposition and used them for texture classification. As a result, a high degree of classification success was obtained. Nosaka et al 52 introduced an improvement for extracting image features by extending LBP with the assistance of GLCM-style cooccurrence.
Results proved that their proposed method for face recognition through texture classification shows better performance than the conventional LBP. Many statistical methods are used for the inspection of machined surfaces, as statistical texture analysis has become more popular than structural texture analysis methods for machined surface analysis. 53
Advantages
a. GLCM was originally designed for texture analysis of two-dimensional (2D) images, but today its scope has been extended, as scientists use GLCM features to extract texture information from three-dimensional (3D) surfaces; this procedure is known as 3D GLCM.
b. It is a measure of the different combinations of pixel brightness values that occur in an image.
c. Earlier, GLCM was used in satellite data processing, but nowadays it is also used for 3D seismic data analysis.
d. GLCM features can be obtained for a single orientation as well as by combining all the orientations together, making GLCM direction independent.
Shortcoming
Although GLCM is a very useful technique for texture analysis, its main shortcoming is the computational cost, which makes a pixel-by-pixel implementation impractical. Computation of GLCM features is a time-consuming process; this shortcoming can be overcome by combining GLCM with the Sobel operator.

Local binary pattern
LBP-based texture classification has a direct impact on visual image description and is used in computer vision and biometric recognition. This methodology came into existence in 1994, and since then it has been widely used for surveillance and security. 54 LBP is also used for color texture description 55 and face recognition. 56 LBP is a statistical method that summarizes the local structure of the image efficiently by comparing each pixel with all of its surrounding neighboring pixels. 57 Through LBP one can easily define the image texture by two complementary measures, that is, "local spatial patterns" and "grayscale contrast." An example of calculating the LBP code for the center pixel value is shown in Figure 3. The calculation of the LBP code for a center pixel is expressed by Equation (15): LBP code $=(1\times2^0+1\times2^1+1\times2^2+0\times2^3+1\times2^4+0\times2^5+1\times2^6+0\times2^7)=87$; thus the LBP code for the central pixel value is 87.
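The following is a minimal sketch of the same thresholding-and-weighting computation for a 3 × 3 patch. The clockwise-from-top-left neighbor ordering and the sample values are illustrative assumptions, not taken from Figure 3.

```python
import numpy as np

def lbp_code_3x3(patch):
    """LBP code of the center pixel of a 3 x 3 patch: threshold each neighbor
    against the center and weight the resulting bits by powers of two.
    The clockwise-from-top-left neighbor ordering is one common convention."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbors]
    return sum(bit << k for k, bit in enumerate(bits))

# Illustrative 3 x 3 patch (not the values of Figure 3)
patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code_3x3(patch))
```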

Related work
LBP originated in the early 1990s for texture analysis, after the development of GLCM, ACF, and histograms, but in a very short time it gained popularity in image processing. Biometric identification, security surveillance, computer animation, and so on are some of the application fields of LBP. 58 Due to its simplicity, it has attracted the attention of many researchers. The success of LBP in image analysis can be judged by the fact that at the Beijing Olympics in 2008, Chinese officials used the LBP technique for biometric verification of a huge mass of visitors from around the world. 59,60 Harwood et al 61 were the first to introduce the nonparametric LBP. Ojala et al 57 extracted texture features with the assistance of LBP. An example of the original LBP with a 3 × 3 pixel neighborhood, as used in Reference 57, is shown in Figure 3.
In the original LBP, the operator is represented as LBP(L, r) (where L is the number of neighboring points and r is the radius of the circle formed by the neighboring points), and it produces $2^L$ possible output values. The center pixel is used to threshold the eight neighboring pixels surrounding it. In Figure 3, the LBP code of the central pixel with coordinates (m, n) is given by Equation (16), $\mathrm{LBP}_{L,r}(m,n)=\sum_{i=0}^{L-1} S(g_i-g_c)\,2^i$, where $g_c$ is the gray value of the center pixel and $g_i$ are the gray values of its neighbors,
where S(z) is a threshold function, with S(z) = 1 for z ≥ 0 and S(z) = 0 otherwise. After 2000, LBP underwent several advancements. Huang et al 58 organized these advancements, particularly for facial expression analysis, and categorized the LBP methodology into the five categories tabulated in Table 2.
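As a sketch of how the generalized LBP(L, r) operator is typically applied in practice, the following uses scikit-image's local_binary_pattern and builds the histogram of codes that usually serves as the texture descriptor. The parameter values and the 'uniform' mapping are illustrative choices rather than settings prescribed by the surveyed papers.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, L=8, r=1.0):
    """Generalized LBP(L, r) followed by the normalized histogram of codes."""
    codes = local_binary_pattern(gray_image, P=L, R=r, method="uniform")
    n_bins = L + 2                                   # uniform patterns + "other"
    hist, _ = np.histogram(codes.ravel(), bins=n_bins,
                           range=(0, n_bins), density=True)
    return hist                                      # texture descriptor
```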
One of the limitations of LBP is that it operates only in 2D. The development of the dominant LBP (DLBP) and the volume LBP (VLBP) extended the ability of LBP and its scope of application. DLBP has been used successfully for texture description and face recognition. 65 DLBP uses the most frequently occurring patterns to calculate the texture information. This method is effective for analyzing 3D complex shapes, curvatures, edges, crossing boundaries, and corners. Zhang et al, 66 Lei et al, 67 and Guo et al 68 worked on VLBP. VLBP has the advantage of describing texture for video sequences, and the VLBP extension successfully captures dynamic information. The original LBP was used to describe texture, but over time it has occupied an important and prominent place in other applications. Table 3 summarizes the different application areas of LBP.
Qian et al 81 extended the discriminating power of LBP by introducing LBP in the pyramid transform domain (PLBP). The discriminatory power of PLBP is higher compared with other extensions of LBP. To date, enough work has been presented to establish LBP-based techniques as effective and reliable for face recognition. Yang and Chen 82 compared the performance of the holistic local binary pattern histogram (HLBPH), the holistic local binary pattern image (HLBPI), and the enhanced local binary pattern histogram (ELBPH) for face recognition and concluded that HLBPH is the simplest LBP technique for face recognition. ELBPH is the most complex LBP technique but has an advantage over HLBPI in terms of feature extraction.

Autocorrelation function
A quantitative assessment of the regularity of the texture of an image is given by the ACF; textures with strong regularity show peaks and valleys in the ACF. 83 The ACF of an image, I_ACF(p, q), having N pixels is expressed by Equation (17), where (p, q) represents the original image coordinates.
Here f(p, q) is the 2D brightness function and f(p′, q′) is the dummy variable of the integration; (p, q) is the displacement at which the image is compared with itself, and (p′, q′) marks the endpoint of the neighborhood vector. Autocorrelation describes how well an image correlates with itself when the image is displaced with respect to itself in all possible directions. In the presence of regular texture, the ACF shows peaks and valleys. It is related to the power spectrum through the Fourier transform (FT). It is also sensitive to noise interference. The ACF is not suitable for textures having an irregular arrangement of elements. Primitives are clusters of gray levels in the image, and the ACF uses these primitives as information; ACF features give information about the tonal primitives. If the primitives are large, the autocorrelation drops gradually with distance; if the primitives are small, it drops sharply. 83 The ACF is used to detect the amount of nonrandomness present in the image data. This technique is also used for identifying the changes developed in the pattern and texture of any surface. 84
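A minimal sketch of computing the normalized 2D autocorrelation of an image via the FFT, exploiting the power-spectrum relationship mentioned above; the periodic-boundary assumption is noted in the code.

```python
import numpy as np

def autocorrelation_2d(image):
    """Normalized 2D autocorrelation via the FFT (Wiener-Khinchin relation:
    the ACF is the inverse FT of the power spectrum). Assumes periodic
    (circular) boundaries; zero-pad the image first if linear correlation
    is required."""
    f = image.astype(float) - image.mean()            # remove the mean
    F = np.fft.fft2(f)
    power = np.abs(F) ** 2                            # power spectrum
    acf = np.fft.ifft2(power).real
    acf /= acf[0, 0]                                  # normalize so ACF(0, 0) = 1
    return np.fft.fftshift(acf)                       # zero displacement at center
```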

Related work
To establish a relationship between the ACF and texture, Keizer 85 carried out simulations on seven satellite images of the Arctic region and concluded that the spatial organization of image pixels can be analyzed by the ACF. His research models autocorrelation as a function of distance: for an image, the ACF ρ(d) at distance "d" is expressed by Equation (18), where "e" is the exponential coefficient.
Yaglom 86 showed that the ACF and the power spectral density function are FTs of each other. Shelton 87 suggested the use of the ACF in pattern recognition. The limitation of using the ACF only up to second order was overcome by McLaughlin and Raviv, 88 who proved the effectiveness of the nth order in pattern recognition. Until 1980, the ACF had inherent problems due to which it was considered inferior to GLCM for texture description. 89 This happened because little research was carried out on the ACF, while at the same time a lot of work was carried out by researchers on GLCM. Extension of the ACF to higher orders expanded its application scope: by extending the order of the ACF, an image can be characterized more closely than with the second-order ACF. In this respect, Kreutz et al 90 increased the order of the ACF up to third order, using the shift-invariant property of the ACF to make it computationally less expensive. A similar approach was carried out by Kurita et al 91 and Hotta et al, 92 but the ACF's main disadvantages remained its high computational cost and its inability to be used in larger domains. Extending the ACF not only increases its applicability but also increases the computational cost. Popovici and Thiran 93 avoided the high computational cost of higher-order ACF by considering the inner product of the input data. They used higher-order ACF for pattern recognition and performed principal component analysis (PCA) decomposition in the autocorrelation space. Toyoda and Hasegawa 94 increased the order of higher-order local autocorrelation up to 8, with 223 mask patterns used to extract features from the image. The extended higher-order local ACF outperforms other methods such as Gaussian Markov random fields, Gabor features, and LBP. 95 Extending the earlier ACF from 25 mask patterns to 223 mask patterns makes the ACF capable of characterizing an image more closely and accurately.
Earlier works show that the ACF can be used for pattern recognition, but after 1990, with the extension of the ACF to higher orders, it became useful for measuring the degree of coarseness and smoothness of textured surfaces; Kurita et al 96 and Goudail et al 97 made use of such higher-order autocorrelation features.
Advantages
a. With the ACF, it is not necessary to perform segmentation of the image before its analysis.
b. One of the main features of this technique is that it can be applied in a user-independent manner.
c. Techniques such as scaling, linear filtering, addition, thresholding, background subtraction, and sampling can be used with the ACF.
d. In ultrasound imaging, autocorrelation is used to check the flow of blood.
e. In remote sensing applications, the ACF is used to compute seismic attributes of a 3D seismic survey of the subsurface.
Shortcoming
a. In time-series regression analysis, autocorrelation causes a problem that indicates a relevant variable is missing.
b. It increases the variance of the coefficients that need to be estimated.
c. The ACF makes "t-statistics" appear larger than their real size.

Histogram
A histogram looks similar to a graph but shows the distribution of pixel intensity values: the x-axis shows the gray level intensity and the y-axis shows the frequency of that intensity value. Figure 4A,B represent multispectral pre- and post-images of the shrinking Aral Sea; the gray level representations of the pre- and post-images are shown in Figure 4C,E, respectively. The histogram plots for the pre- and post-images of the Aral Sea are shown in Figure 4D,F, respectively, which clearly represent the change in terms of gray level intensity values. Through the histogram, one can easily identify the changes developed in the textured surface of the image. The histogram is a numerical representation of the data in the form of a plot or curve. This technique of data representation was developed by Karl Pearson; it looks similar to a bar plot but is completely different from one. 105-107 Histogram analysis is quite useful in medical irregularity detection, 108,109 privacy maintenance, 110 data interpretation, 111 magnetic resonance imaging, 112 image processing, 113 and so on.
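A minimal sketch of histogram-based change inspection in the spirit of Figure 4: compute and plot the gray level histograms of a pre- and a post-event image. The `pre_gray` and `post_gray` arrays are placeholders, not the Aral Sea data.

```python
import numpy as np
import matplotlib.pyplot as plt

def gray_histogram(img):
    """256-bin gray level histogram of an 8-bit image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    return hist

# Placeholder uint8 images standing in for the pre- and post-event scenes
pre_gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
post_gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

plt.plot(gray_histogram(pre_gray), label="pre-event")
plt.plot(gray_histogram(post_gray), label="post-event")
plt.xlabel("gray level intensity")
plt.ylabel("frequency")
plt.legend()
plt.show()
```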

Related work
Texture analysis through histogram study is performed to measure the distribution of intensity values in all parts of the image. Histograms are a simple software tool used extensively in industry to inspect surface defects. Histogram equalization (HE) is a preprocessing task applied to enhance the performance of image analysis. 114 Different illumination effects, contrast, sharpness, and so on introduce monotonic transformations of the image function. 115 Sklansky 116 stated that HE is essential to extract texture features from diagnostically important regions of xero-mammograms. HE has been found useful for increasing the sensitivity of texture analysis in mammography. 117 Therefore, a brief summary of HE is added below, as HE contributes to maximizing image information for texture analysis. HE changes the input brightness of the image and can also introduce unnecessary visual artifacts.
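A minimal sketch of standard global HE for an 8-bit grayscale image, mapping gray levels through the normalized cumulative distribution function; library routines such as skimage.exposure.equalize_hist provide an equivalent operation.

```python
import numpy as np

def equalize_histogram(gray_image):
    """Standard global histogram equalization for an 8-bit image: map each
    gray level through the (normalized) cumulative distribution function."""
    hist, _ = np.histogram(gray_image.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # scale CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)           # gray level lookup table
    return lut[gray_image]
```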
One of the earliest efforts to enhance image contrast while preserving the mean brightness of the image is presented in Reference 118, which proposed the brightness-preserving bi-histogram equalization (BBHE) method. In this method, the image is divided into two subimages based on the mean brightness, and equalization is then applied to the histogram of each subimage separately. 118 Wan et al 119 proposed an improved extension of BBHE, that is, dualistic subimage histogram equalization (DSIHE), replacing the mean brightness with the median value to divide the image into two parts. This method performs better than BBHE in preserving brightness and enhancing image contrast, and it also proves more efficient in preserving the information (entropy) of the original image; as a result, it is found comparable to BBHE.
Chen and Ramli 120 introduced an improvement of BBHE referred to as minimum mean brightness error bi-histogram equalization (MMBEBHE), which effectively preserves a higher degree of brightness and outperforms both the BBHE and DSIHE methods. Chen and Ramli 121 also introduced a new method referred to as recursive mean separate histogram equalization (RMSHE), in which subhistogram regions are further divided based on their respective means. RMSHE provides increased brightness preservation and more natural contrast enhancement of images. Sim et al 122 introduced another technique similar to RMSHE, called recursive subimage histogram equalization (RSIHE), which replaces the mean brightness with the median brightness. Wang and Ward, 123 Kim and Paik, 124 and Ooi et al 125 presented further developments along these lines. Pietikäinen et al 127 combined color histogram features with LBP and stated that their combination is suitable for defect detection in wood inspection. Later it was also shown by Stark 128 that the joint color histogram and LBP do not discriminate texture as accurately together as they do when used alone. Broadhurst et al 129 concluded that histograms of local regions of an image characterize the appearance of the image model better than histograms of global regions. One significant achievement is the development of the multidimensional histogram, which has advantages over the original histogram. Xie et al 130 introduced multidimensional histogram methods to inspect surface defects: PCA is performed on a reference good-sample image to form an eigenspace, the color features of the reference image are projected onto this eigenspace to obtain a multidimensional histogram, the color features of the test image are projected onto the same eigenspace, and finally the histogram distributions of the reference and test images are compared.
Ng 131 showed that the gray level distribution for surface defects ranges from unimodal to multimodal. The Otsu method gives satisfactory results for threshold selection with a multimodal histogram but fails when the histogram is unimodal. 132 The "valley emphasis method" is an improvement over the earlier Otsu method: it is simple, fast, and effective in detecting defects that have a small probability of occurrence, and it is also capable of determining the threshold value for both unimodal and bimodal histogram distributions.
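As a sketch of histogram-based threshold selection, the following numpy implementation of Otsu's criterion also includes an optional valley-emphasis weighting. The (1 − p_t) weighting follows the commonly cited valley-emphasis formulation and should be treated as an assumption of this sketch rather than the exact published algorithm; the usage lines are hypothetical.

```python
import numpy as np

def otsu_threshold(image, valley_emphasis=False):
    """Select a gray level threshold from the histogram.

    valley_emphasis=False -> classic Otsu (maximize between-class variance).
    valley_emphasis=True  -> weight each candidate threshold by (1 - p_t),
    favoring thresholds that fall in histogram valleys (a sketch of the
    commonly cited valley-emphasis weighting)."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()          # normalized histogram
    levels = np.arange(256)

    omega0 = np.cumsum(p)                        # class probability (background)
    mu_cum = np.cumsum(p * levels)               # cumulative mean
    mu_T = mu_cum[-1]                            # global mean
    omega1 = 1.0 - omega0
    valid = (omega0 > 0) & (omega1 > 0)
    mu0 = np.where(valid, mu_cum / np.maximum(omega0, 1e-12), 0)
    mu1 = np.where(valid, (mu_T - mu_cum) / np.maximum(omega1, 1e-12), 0)

    objective = omega0 * omega1 * (mu0 - mu1) ** 2      # between-class variance
    if valley_emphasis:
        objective = (1.0 - p) * (omega0 * mu0 ** 2 + omega1 * mu1 ** 2)
    objective[~valid] = -np.inf
    return int(np.argmax(objective))

# Hypothetical usage on a uint8 defect image `gray_image`:
# t = otsu_threshold(gray_image, valley_emphasis=True)
# defect_mask = gray_image > t
```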
Advantages
a. It is widely used in image processing for analyzing different image formats, and image quality can be predicted by looking at the histogram.
b. It is frequently used for brightness adjustment and is widely used in adjusting the contrast of an image.
c. It is used to equalize an image and also helps in understanding the skewness of the data.
d. It is used for thresholding purposes, which are common in computer vision and pattern recognition.
e. It can be used to examine whether the central value of the data matches a predefined target, so one can easily check whether the centering of a process is acceptable.
Shortcomings
a. Histograms are computed globally over the entire image or scene, which sometimes becomes a drawback while analyzing a complex image with extremely high resolution.
b. The spatial information about the intensity values of the image pixels is not considered in histogram analysis.
c. Different scenes can produce the same type of histogram pattern.
d. When the spread of the histogram is not sufficient, equalization can introduce major changes in the image gray levels.
e. It is not able to preserve the overall brightness, which is quite significant in consumer electronics applications.

OTHER METHODS FOR TEXTURE ANALYSIS
Besides the statistical approach, there are other approaches to texture analysis as well. These approaches are categorized as structural, model-based, and transform-based approaches.
In the structural approach, texture is defined on the basis of pixels, regions, and physical shape. Structural methods first analyze texture on the basis of a pattern: once the "primary texture" is detected, the statistical properties of this primary texture are calculated. This method of texture analysis is suitable for textures with a regular structure, but it is not suitable for irregular structures.
Model-based methods describe texture by comparing it with some standard model design and are used for texture modeling purposes. In these methods, a model of the texture is first defined, and a standard model is then used to classify and compare the predefined texture model. Some popular models for texture analysis are the autoregressive model, fractal model, Gibbs random field, Markov random field, and hidden Markov model.
Transform-based models are used when the texture is not well defined in the original space; these models convert the texture from one space to another in which it can be defined more easily. Some of the transform models are the Gabor transform, curvelet transform, wavelet transform, and ridgelet transform. Table 4 summarizes texture classification approaches, subapproaches, and methods for texture classification.
In this section, we present state-of-the-art texture analysis techniques. These newly developed techniques are either combinations of two or more existing techniques or completely novel approaches. A short summary of these techniques is given as follows.

Energy variation
Armi and Fekri-Ershad 147 developed a methodology for texture classification by studying variation in the "energy" feature. In their proposed method they used three texture classification techniques, that is, GLCM, LBP, and edge segmentation. The combined feature f′ is then extracted by calculating the difference between the initial energy (IE) and the operator output (OO), that is, f(combined).

Binary Gabor pattern
Huo et al 148 proposed a texture analysis method combining LBP with a Gabor filter for iris recognition. They used Canny edge detection and the Hough transform to estimate the iris boundaries. In their experiment, they used iris images of 108 eyes from 80 individuals, with seven different images of each eye, for a total of 756 images. They concluded that when LBP and Gabor filters are used separately the quality of the results is compromised, but the combination of both techniques shows promising results.
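The following is a minimal sketch of the general idea of fusing a Gabor filter with LBP: filter the image with a Gabor kernel, then compute an LBP histogram on the filter response. It only illustrates the combination, not the exact pipeline of Huo et al; the frequency and orientation values are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def gabor_lbp_descriptor(gray_image, frequency=0.2, theta=0.0, P=8, R=1.0):
    """Gabor filtering followed by an LBP histogram on the real response."""
    real_response, _ = gabor(gray_image, frequency=frequency, theta=theta)
    codes = local_binary_pattern(real_response, P=P, R=R, method="uniform")
    hist, _ = np.histogram(codes.ravel(), bins=P + 2,
                           range=(0, P + 2), density=True)
    return hist
```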

Local spiking pattern
Du et al 149 developed a novel method of texture classification that accounts for rotation and illumination variation. They proposed a new descriptor for image texture known as the "LSP". This descriptor uses a 2D network of artificial spiking neurons. They used the "Outex texture dataset" in their research work and came to the conclusion that LSP outperforms other texture descriptors. Figure 5 represents the "spiking cortical neuron model" developed by Du for texture classification.

SRITCSD method
Chang et al 150 developed a new texture classification technique using singular value decomposition (SVD) and the discrete wavelet transform (DWT). The technique uses a support vector machine algorithm to perform image classification and is named SRITCSD. It performs texture classification in the following steps: first, the texture of the image is enhanced using the SVD technique; then texture features are extracted in the DWT domain, while particle swarm optimization is used to optimize the method. The results show that the SRITCSD method can outperform other methods for texture classification. The design of the SRITCSD model developed by Chang is shown in Figure 6.
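A simplified sketch of wavelet-domain texture features feeding an SVM, illustrating only the "DWT features + SVM" portion of the pipeline described above; the SVD enhancement and particle swarm optimization steps are omitted, and the wavelet choice ('db1', 2 levels) is an assumption. The usage lines refer to hypothetical training data.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_energy_features(gray_image, wavelet="db1", level=2):
    """Energies of the DWT subbands, used as a simple texture feature vector."""
    coeffs = pywt.wavedec2(gray_image.astype(float), wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                   # detail subband energies
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(feats)

# Hypothetical usage with lists `train_images` and `train_labels`:
# X = np.stack([dwt_energy_features(img) for img in train_images])
# clf = SVC(kernel="rbf").fit(X, train_labels)
```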

Scale selection
Liu et al 151 analyzed changes in texture caused by intraclass variation, including illumination, rotation, and viewpoint. They noticed that a slight change in imaging conditions can change the texture appearance completely, and that texture variation due to changing scale is the "hardest to handle". They proposed a network called GANet, which uses a genetic algorithm to adapt the network filters. They also developed a new dataset called the "Extreme Scale Variation Texture" dataset to test the performance of their system. Their system outperforms several existing texture classification techniques by more than 10%. Figure 7 shows the GANet model proposed by Liu.

Deep perception models for texture
Zheng et al 152 performed a series of psychophysical experiments whose objective was to develop a link between texture features and human visual perception. Since human vision is sensitive to only some texture features, the objective of their work was to identify those texture features to which human vision is sensitive. To achieve this objective they developed a set of "deep architectures to learn a compact representation of the texture perceptual features". Their developed model thus shows the advantage of deep architectures over feature learning models.

BENCHMARK TEXTURE DATASETS
During the study of various texture classification techniques, we came across various image datasets used by researchers and scientists for classification and analysis purposes, so in this section we present some popular image datasets that are broadly used for texture classification.

Brodatz dataset
The Brodatz dataset is a very popular image dataset that includes natural textures provided by Brodatz. This dataset contains a total of 155 images, of which 130 images have a resolution of 512 × 512 and 25 images have a resolution of 1024 × 1024. The dataset has limitations such as a single illumination and viewing direction. 153 Some texture images from this dataset are shown in Figure 8.

Outex dataset
This dataset was developed by researchers at the University of Oulu, Finland, in 2002. It is a combination of both natural and artificial texture images and contains 320 texture images, covering both microstructures and macrostructures. The images provided in this dataset are useful for image processing operations such as segmentation, image recovery, and image classification. The images shown in Figure 9 are in color, with rotations of 5°, 10°, 15°, 30°, 45°, 60°, 75°, and 90° and a resolution of 100 dpi. 154

Columbia University image dataset (COIL-100)
This dataset contains color images of 100 different objects with distinct shapes and sizes, intended specifically for image processing and computer vision. 156 Sample COIL-100 images are shown in Figure 11.

NASA Earth observatory
NASA Earth Observatory is an online publishing outlet of the National Aeronautics and Space Administration, USA. This online outlet is one of the biggest resources of satellite images in different formats for commercial and research purposes. Collections of multispectral images acquired from Landsat, MODIS, Aqua, and Terra are available at NASA Earth Observatory; these images cover several global events such as floods, droughts, soil erosion, urbanization, forest fires, the shrinking of seas, and so on. The data acquired from this resource are used for image processing and remote sensing applications. 157 The authenticity of the data can be understood from the fact that even the Indian Institute of Remote Sensing (IIRS) provides data to young researchers through NASA Earth Observatory. 158 Some sample satellite images with different resolutions and orientation angles are shown in Figure 12. Some other popular datasets include the wood species recognition dataset by the Computer Vision and Intelligent Systems research group at UTAR, Malaysia, 159 the TILDA textile texture database, 160 and the SGP 97 ARM soil texture dataset. 161 Details regarding the NWPU crowd datasets, GDXray, SVIRO, wood space, MVTec AD, CADP, the urban object detection dataset, CCPD, VisDrone, ImageNet, DeepFashion, LogoNet, VEDAI, and so on, which are considered some popular datasets used in image processing, can be obtained from Reference 162.

DISCUSSION
Statistical methods are classified into first-order statistics and second-order statistics. First-order statistics are the mean, variance, SD, skewness, and kurtosis, which provide information about the gray level pixels of the image. The variance is a measure of the width of the histogram, quantifying the deviation of gray levels about the mean; skewness measures the asymmetry developed around the mean, and kurtosis measures the sharpness of the histogram. Thus all first-order statistics derive their information through histogram analysis. The second-order statistical methods discussed in this article are GLCM, LBP, ACF, and the histogram. GLCM is also known as a second-order histogram. The most important variables in the computation of GLCM are the proper spacing between pixel pairs, that is, the distance "d", and the orientation factor "θ" in different directions, that is, 0°, 45°, 90°, and 135°. The fourteen features extracted from GLCM are used to quantify texture characteristics and are organized into four groups.
1. Based on visual textural characteristics: ASM, contrast, correlation.
2. Based on statistics: variance, IDM, sum average, sum variance, difference variance.
3. Based on information theory regarding entropy: sum entropy, entropy, difference entropy.
4. Based on information regarding the measure of correlation: information measure of correlation, maximal correlation coefficient.
GLCM is thus a powerful descriptor of the texture of the image and its major advantage is that it is invariant to monotonic gray tone transformation. However, it is not able to capture shape aspects of the image primitives. GLCM is widely used in different fields which includes satellite remote sensing, biomedical analysis, automated inspection, document processing, and so on.
Another useful statistical method for texture analysis is the ACF. It is considered a measure of the degree of coarseness in the image and is based on the repetitive nature of the positions of the texture elements. Thus it is mostly used to measure the fineness or roughness of image texture. In spite of this feature, however, it is not considered as good as GLCM for extracting textural information.
The LBP operator was introduced as a complementary measure of local image contrast. LBP describes texture with the smallest primitives, called "textons" (or a histogram of the texture elements). LBP is one of the most used approaches in practical applications, as it has the advantages of simple implementation and fast performance. The most successful applications of LBP include face and gesture recognition. The strengths of LBP are computational simplicity and robustness toward illumination effects; however, its sensitivity to noise limits its performance.
Histograms are related to the first-order image statistics as they are used to visualize the quantification of texture features. The histogram is important from a "visual point of view" as any change developed by the changing texture can be easily identified through the histogram plot.
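As a small illustration of how the first-order statistics mentioned above all derive from the histogram alone, the following is a minimal numpy sketch; it is illustrative only.

```python
import numpy as np

def first_order_stats(gray_image):
    """First-order statistics (mean, variance, SD, skewness, kurtosis)
    computed from the normalized gray level histogram."""
    hist, _ = np.histogram(gray_image.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()               # probability of each level
    levels = np.arange(256)

    mean = np.sum(levels * p)
    var = np.sum(((levels - mean) ** 2) * p)
    sd = np.sqrt(var)
    skewness = np.sum((((levels - mean) / sd) ** 3) * p)
    kurtosis = np.sum((((levels - mean) / sd) ** 4) * p)
    return mean, var, sd, skewness, kurtosis
```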
Some newly developed methods, such as energy variation, BGP, LSP, the SRITCSD method, scale selection, and deep perception models for texture analysis, are discussed in Section 3. They provide vital information about alternative procedures for texture evaluation and about combining two methods to enhance texture analysis ability; such combinations can be extended to develop new methodologies. In Section 4, information is provided regarding various datasets for texture analysis used in image processing, computer vision, and remote sensing.
Finally, in this work we started with the advancements in the field of image processing from the early 1980s through texture classification techniques such as GLCM, LBP, ACF, and the histogram. These techniques have been explored extensively by researchers and scientists and even today play a prominent role in texture analysis, computer vision, and remote sensing. Newly developed texture classification techniques such as BGP, energy variation, LSP, the SRITCSD method, scale selection, and deep perception models for texture analysis are the result of recent research in the field of image processing and were developed up to the year 2019. Thus, in this article we have presented a detailed study of the evolution of various prominent texture classification techniques over a period of about 50 years.
In the future, existing texture classification techniques may be combined with newly developed techniques to outperform several existing texture classification algorithms. The datasets mentioned in this article can be used for several applications related to image processing. Thus this article provides exhaustive and useful information about texture classification techniques, procedures, and datasets.

CONCLUSION
This article presents a detailed study of GLCM, LBP, ACF, and histogram patterns for texture analysis. Newly developed techniques provide information about the combination of two or more techniques. Among the presented statistical methods, GLCM is the most efficient method for extracting texture features for classification and discrimination purposes. LBP is emerging as a powerful tool for object recognition and face analysis. Much work remains to be done to reduce the computational cost of the ACF, and the ACF is still not considered as good as GLCM for texture analysis. Histograms are easy to compute and are also able to extract some important texture features related to first-order image statistics. Finally, with newly developed techniques such as deep learning, the SRITCSD technique, scale selection, LSP, and BGP, texture analysis has reached a new level.