Features' value range approach to enhance the throughput of texture classification

Determining an image's category in a database with a huge number of texture categories requires massive computation and time. Existing texture classification work focuses on texture representation to improve the accuracy and efficiency of classification. This research instead reduces the number of categories passed to the main classifier, thereby decreasing the comparison time of classification. To cut computation time, a features' value range (FR) approach to enhance the throughput of texture classification is proposed. The proposed approach acts as a pre-classifier in a two-step serial classification scheme, decreasing the number of candidate categories. With fewer candidates, the main classifier works on only a few categories to find the final category. Configuration parameters are defined, and criteria are proposed for evaluating the FR approach. The performance of the FR is evaluated in the presence of different levels of Gaussian noise. Finally, it is shown that the effective features (EF) and hardware implementation approaches can extend the applicability of the FR approach. Experimental results show that the throughput of the final decision increased up to


INTRODUCTION
Vision-based inspection of industrial products offers low-cost, high-speed, and high-quality detection of defects. The analysis of a digital image must be completed in a tight time frame so that the production system can act on the measurements. Despite the high accuracy of some texture analysis methods, they have a high computational cost in pre-processing, feature extraction, and finding a matching category [1,2]. Some of the most challenging industrial inspection problems deal with textured materials, such as fabric defect detection for the apparel industry [3]. Many researchers have tried to improve the speed of texture-based analysis while maintaining accuracy. For example, Akoushideh et al. [4,5] proposed a hardware-based algorithm on the FPGA platform to improve the throughput of texture classification. Chen et al. [6] proposed real-time crack detection in inspection videos using a deep fully convolutional network and parametric data fusion. Wang et al. [7] proposed a sequential detection method of image defects for patterned fabrics. Also, Zhang et al. [8] presented a fabric defect detection method using the saliency analysis of a multi-scale local steering kernel.

(This is an open access article under the terms of the Creative Commons Attribution 4.0 License. © 2020 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.)
Classifier combination is an active area of research [9]. The primary driving force behind this approach is reducing classification error relative to a single classifier. Many architectures for classifier combination have been proposed; one of them is the parallel combination architecture, in which several classifiers individually decide on the category of an unknown input pattern. All these decisions are fed into a combiner, which applies combination rules to make the best decision on the category [10]. Another architecture for classifier combination is serial concatenation: classification is achieved by routing an unknown pattern through a chain of classifiers, where each classifier reduces the number of feasible categories [11]. Generally, the combined-classifier approach has higher complexity and computational cost than a single classifier; the main aim of multistage classifiers is higher accuracy [12].
Our contribution is close to a serial combined classifier. We propose the features' value range (FR) approach as a pre-classifier to improve the throughput of the main classifier. Unlike combined classifiers, whose main objective is accuracy, our approach aims to improve the throughput of classification. Given the importance of speed in modern processing methods, we emphasise the industrial applicability of the proposed approach. Thanks to the simple construction of the FR approach as a pre-classifier, the number of feasible categories in the main classifier is reduced. We show that the cost of the FR approach plus the time cost of matching against the attainable categories is less than the cost of matching against all categories in many classification approaches. In other words, a classifier over m classes takes less time to execute than a classifier over N classes, where m < N and m is the size of the subset of categories identified by the pre-classifier step. The FR is a general approach: non-deep methods can be used in the first (pre-)classifier, and any state-of-the-art method can be used for the second (main) classifier.
Convolutional neural networks (CNNs) have revolutionised computer vision and yield significant performance gains in many vision applications, and deep texture descriptors are generally more accurate overall. Liu et al. [13], in their extensive evaluation of texture feature extractors, found that deep texture methods achieve 4.86% better accuracy on average. However, their experimental results also showed the enormous complexity, time cost (8.73 times slower on average), and feature length (10.11 times longer on average) of the CNN approaches, along with critical weaknesses such as a lack of robustness to noise and scale compared with non-deep methods. They concluded by suggesting the combination of efficient operators such as the local binary pattern (LBP) with deep texture descriptors.
Our proposed approach may also be efficient with deep classifiers, because the FR approach can pass an accurate, smaller set of candidate categories to a deep classifier. Using efficient operators such as LBP in our approach can address challenges such as scale and illumination. Pairing a simple deep network with an effective pre-classifier such as ours may be a good solution for constrained settings such as mobile computing. The FR approach may also serve as a pre-processor for transfer learning methods: at prediction time, only the candidate classes in the last layer change, while the network weights remain constant.
The proposed approach is implemented under different configurations on well-known data sets such as Outex [14], Scene-13 [15], and UIUC [16]. Criteria for evaluating the FR approach under different configurations are defined. Further, we evaluate the noise sensitivity of the FR approach, and we propose using the most significant features approach [5] and hardware implementation to improve the efficiency of the FR. This research is an updated and extended version of our published conference paper [17]. The rest of this paper is organised as follows. Section 2 motivates the proposed approach by presenting large-scale data sets and the time cost of classifiers. Section 3 presents our proposed approach. Experimental results are given in Section 4. Discussion of the experimental results is in Section 5. Section 6 concludes the paper.

A REVIEW OF DATA SET SIZES AND THE TIME COST OF CLASSIFIERS
In this section, notwithstanding the modern, high-speed platforms available for implementing image processing algorithms, we illustrate the huge classification times on the associated hardware. Another challenge in image classification, for both humans and computers, is the large scale of the semantic space. In the following, we introduce some data sets, which are not necessarily texture data sets; nevertheless, when textural features are used for categorisation, the proposed approach is useful. The considerable calculation costs reported in this section illustrate the importance of our proposed idea. In later sections, where we discuss the implementation possibility of the FR approach, the time costs of the methods mentioned here will be referred to.

Data sets with large numbers of categories
The ImageNet10K, with 10,184 categories and 14,197,122 images [18]; 80MTI [19], with 79,302,017 colour images (32×32); and ILSVRC2010, with a million training images in 1,000 categories and validation and test sets containing 50,000 and 150,000 images, respectively, are examples of data sets with many categories. Other data sets with many categories mentioned in this research are the 3D Palmprint Database [20], consisting of 8,000 samples, and the ALOI-Ill and ALOI-View data sets [21], which consist of 24,000 and 72,000 images, respectively.

Computation time
In this section, we report the feature selection, feature extraction, and classification times of image processing methods on different platforms. The complexity of new image processing methods with high-dimensional features on data sets with many categories is the main motivation for our proposed approach. In other words, the computation times of the methods reviewed in this subsection motivated us to implement the FR approach. Feature selection is an essential step in classifiers that, in some algorithms such as mRMR, Max-Relevance, and Max-Dependency, takes a long time. For example, for the NCI data, MaxDep takes about 20 and 60 s to select the 20th and 40th features, respectively, while mRMR takes about 2 s to select any feature. For the LYM data, MaxDep needs more than 200 s to find the 50th feature, whereas mRMR uses 5 s [22].
Feature extraction is one of the classification steps. In the following, we report the feature extraction and classification costs of some state-of-the-art algorithms. Bandara et al. [2] reported that the feature extraction cost of different methods on the ALOI data set is 10–2590 ms with an Intel i7 2600 3.4 GHz and 8 GB of RAM; the same process takes 15–3820 ms on a second platform. The authors of [23] reported that the time cost of identification on the 3D Palmprint database ranges from 40 to 76,816 ms. Also, Wangmeng et al. [24], in their experiments on the ORL and FERET data sets, showed that the training time of different methods is 46–254.2 s, while testing times range from 3.9 to 36.2 s.
Note that the time complexity of state-of-the-art CNN-based classifiers is enormous. For example, on the Fashion-MNIST data set, the model sizes of the compared algorithms, ordered by accuracy, are 2 million [25], 3.8 million [26], and 54.1 million [27]. Different sliding window sizes, from 96 × 96 down to 12 × 12, change the training time from 19 to 97,026 s. Also, on the ALOI data set, a CNN-based classification method (on Ubuntu) with feature length 253,440 takes 1,362 ms.

PROPOSED APPROACH
In this section, we first describe the calculation method of the features' value range; then, the implementation of the FR as a pre-classifier is presented. Besides, to evaluate the proposed approach under different configurations, we define criteria such as reliability (Rb), category selection throughput (STp), and classification throughput (CTp).

Feature's value range (FR) calculation method
In this approach, an image is first divided into B non-overlapped blocks; for example, B = 16 means a 4 × 4 grid of equally sized non-overlapped blocks (Figure 1(a)). Then, feature vectors (FVs) are extracted from each local image block (Figure 1(b)) using a feature extractor. After extracting the feature vectors from the local areas, the variance of each feature's values over all blocks is calculated (Figure 1(c)). As shown in Figure 1(d), the average plus and minus two standard deviations of the feature vector components, μ ± 2σ, forms the features' value range vector (FRV).
We consider the feature extraction operator (FEO) and the number of local blocks (B) as the configuration parameters of the FR approach. To demonstrate the efficiency of the idea, we use some popular, not necessarily new, texture descriptors: the Haralick features (HF) [28], LBP operators [29] with some of their extensions, and the circular Gabor filter (CGF) [30], all well known in texture analysis.
The values of B used in the experiments are 4, 16, and 64.
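As an illustration of the block division and FRV computation described above, a minimal sketch follows; `toy_features` is a hypothetical stand-in for a real feature extractor such as LBP or HF, and all sizes are illustrative:

```python
import numpy as np

def split_blocks(image, b_side):
    """Split a 2-D image into b_side x b_side equal, non-overlapped blocks."""
    h, w = image.shape
    bh, bw = h // b_side, w // b_side
    return [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(b_side) for j in range(b_side)]

def toy_features(block):
    """Placeholder feature extractor (mean and standard deviation);
    a real FEO such as LBP, HF, or CGF would be used instead."""
    return np.array([block.mean(), block.std()])

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
blocks = split_blocks(img, 4)                        # B = 16 (a 4 x 4 grid)
fvs = np.stack([toy_features(b) for b in blocks])    # shape (B, N) = (16, 2)

# FRV: mean +/- 2 standard deviations of each feature over all blocks
mu, sigma = fvs.mean(axis=0), fvs.std(axis=0)
frv = np.stack([mu - 2 * sigma, mu + 2 * sigma])     # rows: lower/upper margins
```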

The FR as a pre-classifier
During the pre-classification procedure, the average feature vector (AFV), obtained from all blocks of the input image, is compared with the feature range vectors of all categories. The input image is assigned to a category if its AFV lies within that category's feature range (CFR). In the following, we describe how to calculate the AFV and CFR, and we present the implementation of the FR approach as a pre-classifier.

Calculation methods of CFR and AFV
First, the training images of each category are divided into non-overlapped blocks. Then, feature vectors are extracted from all local blocks of all training images. The CFR is obtained by measuring the variation of each element of the feature vectors, as shown by (1)–(4). In this work, we select two standard deviations from the mean as the criterion of variation: for a normal distribution, values within one standard deviation of the mean account for 68.27% of the set, values within two standard deviations for 95.45%, and values within three standard deviations for 99.73%. The feature vector of an image is given by Equation (1):

$$FV = \langle F_1, F_2, \ldots, F_N \rangle \quad (1)$$

where N is the feature vector dimension and F_n is the nth element of the feature vector. The feature matrix of an image with B local blocks is given by Equation (2):

$$F_{B \times N} = [F_{b,n}], \quad b = 1, \ldots, B, \; n = 1, \ldots, N \quad (2)$$

where F_{b,n} is the nth feature element of the bth local block, N is the feature vector dimension, and B is the number of local blocks. The category feature matrix CF_{(T×B)×N} stacks the F_{B×N} matrices of all T training images, as shown by (3):

$$CF_{(T \times B) \times N} = \begin{bmatrix} F^{(1)}_{B \times N} \\ \vdots \\ F^{(T)}_{B \times N} \end{bmatrix} \quad (3)$$

where T is the number of training samples per category. Finally, the CFR is obtained by Equation (4):

$$CFR = \langle [F_1^{min}, F_1^{max}], \ldots, [F_N^{min}, F_N^{max}] \rangle \quad (4)$$

where F_n^{max} is the average plus two standard deviations of the nth column of the CF matrix (μ_n + 2σ_n), and F_n^{min} is the average minus two standard deviations (μ_n − 2σ_n). To calculate the AFV, the feature vectors of all local blocks (F_{B×N}) of the input test image, as in Equation (2), are obtained, and their average is computed. Equation (5) gives the AFV formula:

$$AFV_n = \frac{1}{B} \sum_{b=1}^{B} F_{b,n}, \quad n = 1, \ldots, N \quad (5)$$

where F_{b,n} is the nth feature element in the bth block.
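A sketch of the CFR and AFV calculations under these definitions follows; random toy features stand in for real FEO outputs, and T, B, and N are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, B, N = 5, 16, 8          # training images per category, blocks, feature dim

# CF_{(T*B) x N}: stacked F_{B x N} matrices of all T training images (Eq. (3))
cf = rng.normal(loc=2.0, scale=0.5, size=(T * B, N))

# CFR margins (Eq. (4)): column-wise mean +/- 2 standard deviations
mu, sigma = cf.mean(axis=0), cf.std(axis=0)
f_min, f_max = mu - 2 * sigma, mu + 2 * sigma

# AFV (Eq. (5)): average of the B block feature vectors of a test image
test_blocks = rng.normal(loc=2.0, scale=0.5, size=(B, N))
afv = test_blocks.mean(axis=0)

# Pre-classifier membership test for this category
in_range = bool(np.all((afv >= f_min) & (afv <= f_max)))
```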

Implementation of the FR approach as a pre-classifier
To implement the FR as a pre-classifier, the AFV of the input image is calculated and compared with the categories' CFR margins. The image is assigned to a category if its AFV is within that category's FR. For a primary evaluation, three testing images from three categories were randomly selected from the Outex, Scene-13, and UIUC data sets. Table 1 shows the implementation of the FR approach with configuration parameters FEO = HF and B = 4. Owing to the overlap between the features' value ranges of some image categories, more than one category is assigned to the input images. Therefore, we propose using the FR approach as a pre-classifier: the number of selected categories is reduced from N to m, where N is the number of categories in the data set. Figure 3 shows the block diagram of the FR approach; it can be observed that the proposed approach behaves like a filter. There is one wrong assignment in the FR results, which we discuss, together with other aspects of the FR approach, in the next sections.
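The pre-classification step can be sketched as follows; the category names and range values are made up for illustration, and the fallback to all categories anticipates the Null-selection behaviour discussed in Section 5:

```python
import numpy as np

def fr_preclassify(afv, cat_frs):
    """Return the candidate categories whose CFR contains the AFV.
    cat_frs: dict {category: (f_min, f_max)} with vectors of length N."""
    candidates = [c for c, (lo, hi) in cat_frs.items()
                  if np.all((afv >= lo) & (afv <= hi))]
    # Null-selection: when no range contains the AFV, report all categories
    return candidates if candidates else list(cat_frs)

# toy two-dimensional feature ranges for three hypothetical categories
cat_frs = {
    "canvas": (np.array([0.0, 0.0]), np.array([1.0, 1.0])),
    "carpet": (np.array([0.5, 0.5]), np.array([1.5, 1.5])),
    "wood":   (np.array([2.0, 2.0]), np.array([3.0, 3.0])),
}
m = fr_preclassify(np.array([0.7, 0.8]), cat_frs)  # overlaps canvas and carpet
```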

Definition of some criteria for evaluating the FR approach
In the following, we define some essential criteria for evaluating the FR approach. The numerical values of the defined criteria, obtained from the experimental results, are significant for selecting the best configuration. We suppose N is the number of categories in a data set, and m denotes the number of categories selected by the FR approach.

Selection Throughput (STp)
We define the category selection throughput criterion as the ratio N/m. The STp value ranges from 1 (when m = N) to N (when m = 1). STp = 1 means the AFV is within the range of every category's FR, so the FR approach has not filtered out any category and all N categories are proposed to the second classifier as candidates. Conversely, STp = N means only one category is output by the FR pre-classifier, and no comparison is needed in the main classifier.
Note that for a set of test categories, this criterion is the average of the STp values over all categories, as shown by Equation (6):

$$\overline{STp} = \frac{1}{NC} \sum_{i=1}^{NC} STp_i \quad (6)$$

where NC denotes the number of categories. Table 1 shows the STp values and the average STp for some categories of the three data sets used for benchmarking the proposed approach.
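The average STp of Equation (6) reduces to a simple mean of N/m values; a sketch with illustrative numbers:

```python
def avg_stp(n_categories, m_per_test):
    """Mean selection throughput over a set of tests: average of N / m_i."""
    return sum(n_categories / m for m in m_per_test) / len(m_per_test)

# 24 categories; the last test filtered nothing (m = N), contributing STp = 1
stp = avg_stp(24, [2, 3, 4, 24])    # (12 + 8 + 6 + 1) / 4
```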

Reliability (Rb)
The value of the STp criterion only matters if the right category is among the proposed candidates; we call this the reliability (Rb) of the FR approach. In other words, if the FR approach cannot include the right category among the m selected categories, the FR approach is ineffective. Table 1 shows the Rb value per category and the overall Rb for a few categories of the three data sets.

Classification throughput (CTp)
The complexity and computation time of the FR (t_FR) play an essential role in deciding whether to use the FR approach, because the proposed idea loses its effectiveness when t_FR exceeds t_C(N − m). Here t_C(k) is the time needed to compare the input image feature vector against k category representative vectors (CRVs). Therefore, the condition t_FR < t_C(N − m) is critical for using the FR approach. Considering this critical condition, we propose a classification throughput (CTp) criterion, as shown by (7), for evaluating the performance of the FR approach; for a set of test samples, the average of m (denoted m̄) is used in (7).
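The critical condition, and one plausible form of the throughput criterion consistent with it, can be sketched as follows. Since the exact expression of (7) is not reproduced in this extract, the `ctp_gain` formula below (full-search cost over two-step cost) is an assumption, not the authors' definition:

```python
def fr_is_worthwhile(t_fr, t_c1, n, m_bar):
    """Critical condition: FR overhead must be cheaper than the comparisons
    it saves, t_FR < t_C(N - m) with the linear model t_C(k) = k * t_C(1)."""
    return t_fr < t_c1 * (n - m_bar)

def ctp_gain(t_fr, t_c1, n, m_bar):
    """Assumed throughput gain: time to match all N categories divided by
    the two-step time (FR cost plus matching the surviving m categories)."""
    return (t_c1 * n) / (t_fr + t_c1 * m_bar)

ok = fr_is_worthwhile(t_fr=2.0, t_c1=0.5, n=24, m_bar=3)   # 2.0 < 10.5
gain = ctp_gain(t_fr=2.0, t_c1=0.5, n=24, m_bar=3)         # 12.0 / 3.5
```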

EXPERIMENTAL RESULTS
In this section, we first describe the benchmark data sets and configuration of training (and testing) samples per category.

UIUC data set
The data set includes surfaces whose textures are due mainly to albedo variations, 3D shape, or a mixture of both. Significant viewpoint changes and scale differences are present within each category, and illumination conditions are uncontrolled. The data set consists of 1,000 uncalibrated, unregistered images: 40 samples for each of 25 different textures. All images are greyscale JPGs of 640 × 480 pixels.

The STp and Rb values in different configurations
In this subsection, we discuss the STp and Rb values under FEO = {HF, LBP, CGF} and B = {4, 16, 64} on the Outex, Scene-13, and UIUC data sets. As noted earlier, to show the efficiency of our proposed approach, we consider only some popular, not necessarily new, texture descriptors (the Haralick texture features, the local binary pattern, and the circular Gabor filter); state-of-the-art classifiers can be placed in the second classifier, which we do not discuss in this article. Table 2 compares Rb and STp under FEO = LBP, HF, and CGF on the Outex, Scene-13, and UIUC data sets. We can see that more blocks (i.e. a smaller local area) lead to a lower category selection throughput (STp). For instance, the STp value of the FR approach on the Outex-tl84 data set is 13.34× (m ≃ 3) under FEO = HF and B = 4, whereas this value decreases to 6.64× (m ≃ 6) for B = 64. Conversely, the results show that more blocks (B) yield higher reliability (Rb). For example, the reliability of the FR approach on the Scene-13 data set is 71.60% under FEO = HF and B = 4, while for B = 64 it rises to 88.00%. It can be observed that implementing the FR approach under FEO = LBP is more reliable (higher Rb), while FR under FEO = HF yields a higher STp value. The results show that the trade-off between the STp value and reliability (Rb) can be tuned via the local area parameter (B). For more details, we compare the STp and Rb criteria together in the Appendix, Figure A1.

Noise sensitivity evaluation of the FR approach
To evaluate the noise sensitivity of the FR approach, we add Gaussian noise to the testing samples. Figure 4 shows the Rb (%) values with configuration parameters FEO = LBP and B = 64 in the presence of different noise levels (SNR = 100, 30, 15, 10, and 5) on the Outex and UIUC data sets. We calculate the drop in the Rb value in the presence of noise; the rate at which reliability drops far exceeds the noise-to-signal ratio. In other words, the FR approach is not robust to noise.
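A sketch of how testing samples can be corrupted at a target SNR; here SNR is taken as a power ratio, which is an assumption, as the paper's exact noise-level convention is not stated in this extract:

```python
import numpy as np

def add_gaussian_noise(image, snr):
    """Add zero-mean Gaussian noise scaled so that
    signal power / noise power is approximately `snr`."""
    rng = np.random.default_rng(42)
    signal_power = np.mean(image.astype(float) ** 2)
    noise = rng.normal(0.0, np.sqrt(signal_power / snr), size=image.shape)
    return image + noise

img = np.full((32, 32), 100.0)
noisy = add_gaussian_noise(img, snr=10)
measured_snr = np.mean(img ** 2) / np.mean((noisy - img) ** 2)
```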

DISCUSSION
Considering the experimental results, in this section we discuss events such as Null-selection and Single-category, which are defined by the number of selected categories (m). We also show the effect of the local area size on Rb and STp, and we calculate the computational time of the FR approach under different configurations on the Outex data set. The implementation possibility of the FR is discussed with three hypothetical classifiers under different configurations. Finally, we propose two solutions for improving the efficiency of the FR approach.

Null-selection
In the experimental results, we observed that the value of m for some testing images is zero; that is, the AFV of the input image is not within any category's feature range (CFR). We name this event 'Null-selection' (NS). When the FR algorithm falls into the NS situation, it reports all categories as candidates (m = N), so the main classifier has to compare the feature vector of the input image against all category representatives. Table 3 shows the number of NS events under different configurations on the benchmark data sets; considering the number of testing images (TI#), the percentage of NS events is shown in parentheses.
As we can see, the number of NS events decreases as B increases (i.e. with a smaller local area). Also, the FR

Single-category
The 'Single-category' (SC) event (m = 1) shows the power of the FR approach in improving the main classifier's throughput. More SC events among the testing samples raise the classification throughput (CTp), because the second classifier then needs no computation at all. However, the reliability of the SC events (RSC), as shown by (8), must also be considered:

$$RSC = \frac{\#\{\text{correct SC events}\}}{\#\{\text{SC events}\}} \times 100 \quad (8)$$
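Counting NS and SC events, together with an assumed form of the SC reliability (the share of SC events whose single candidate is the true category; the exact expression of (8) is not reproduced in this extract), can be sketched as:

```python
def event_stats(selections, truths):
    """Count Null-selection (m = 0) and Single-category (m = 1) events and
    compute RSC: the percentage of SC events whose candidate is correct."""
    ns = sum(1 for s in selections if len(s) == 0)
    sc = [(s[0], t) for s, t in zip(selections, truths) if len(s) == 1]
    rsc = 100.0 * sum(1 for c, t in sc if c == t) / len(sc) if sc else 0.0
    return ns, len(sc), rsc

# toy selections for four test images (category names are made up)
sels  = [["wood"], ["canvas"], [], ["wood", "carpet"]]
truth = ["wood",   "carpet",   "wood", "wood"]
ns, sc_count, rsc = event_stats(sels, truth)
```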

The computation time of the FR approach

FIGURE 7: The relation between feature dimension and the feature extraction and classification time costs [2,23,24].

Due to the more complex construction of the Haralick features, implementing the FR approach using the LBP operator takes much less time than using HF. Note that the FR approach was implemented in the MATLAB environment and run 100 times on an Intel(R) Core(TM) i7-2630QM CPU @ 2.00 GHz with 4.00 GB RAM to obtain the average computation time.
The implementation possibility of our proposed approach depends on the matching-category calculation cost of a data set being larger than the computation time of the FR approach (t_FR). Figure 7, extracted from Section 2 [2,23,24], shows a wide variety of computation times for classification methods. As we can see, a higher feature dimension leads to higher feature extraction and classification costs. Taking the number of categories in the data sets into account, the classification times of the different methods are about 200 s, 500 μs, 2.5 ms, 25 ms, 100 ms, and 385 ms. Although comparing the speeds of these methods directly is not entirely fair, the calculated classification times help us investigate the implementation possibility and efficiency of the FR approach.
In the following, we discuss the applicability of the FR approach on the Outex and UIUC data sets with three hypothetical classifiers (hCl#1, hCl#2, and hCl#3) whose classification costs (t_C) are 500 μs, 5 ms, and 25 ms, respectively. To simplify, we suppose the classification cost for k categories is k times the cost for one category (t_C(k) = k × t_C(1)) for all three hypothetical classifiers. Of course, this linear relation between t_C(k) and t_C(1) does not hold for non-linear classifiers such as artificial neural networks, where it depends on the network architecture, the dimensionality of the feature vector, the number of layers, and other network properties. First, we implement the FR approach on the Outex and UIUC data sets under different configurations; the training and testing sample details are given in reference [17]. In this step, we calculate the values of t_FR and m̄. Figure 8 shows the value of m̄ under different configurations on the Outex and UIUC data sets; in the figure, N denotes the number of categories in each data set.
After calculating m̄, t_Ci(N − m̄) for the three classifiers, i ∈ {1, 2, 3}, can be calculated and compared with t_FR under different configurations. Table 4 shows that the implementation of the FR approach is not justifiable for all hypothetical classifiers with respect to the condition t_FR < t_C(N − m̄); only a few configurations, whose t_C values exceed t_FR, can be used. The values of t_C(N − m̄) that pass the condition are bolded and highlighted. For example, the FR approach cannot be used with hCl#1 or hCl#2 on UIUC under any of the predefined configuration parameters, whereas the FR approach under FEO = LBP and all values of B is justifiable for hCl#3 on Outex {Tl84, Horizon} and UIUC. Table 5 presents the classification throughput (CTp) of the FR approach with the hypothetical classifiers for the feasible configurations on the Outex and UIUC data sets. Alongside CTp, the Rb criterion should be considered when selecting the best configuration; the Rb values (%) for each configuration are shown in parentheses. For example, the configuration FEO = LBP and B = 4 gives the maximum CTp (+86.5%) for the FR approach with hCl#3 on the Outex-Horizon data set, but this configuration has the lowest reliability among the values of B. The experimental results indicate that the FR approach works safely with classifiers whose computation time exceeds 25 ms.
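The Table 4-style feasibility check can be sketched as below; the t_FR and m̄ numbers are invented for illustration, not taken from the paper's measurements — only the hypothetical t_C(1) values (0.5 ms and 25 ms) echo the text:

```python
def feasible_configs(t_fr_by_cfg, t_c1, n, m_bar_by_cfg):
    """Configurations passing the condition t_FR < t_C(1) * (N - m_bar)."""
    return [cfg for cfg, t_fr in t_fr_by_cfg.items()
            if t_fr < t_c1 * (n - m_bar_by_cfg[cfg])]

t_fr  = {"LBP,B=4": 3.0, "HF,B=64": 400.0}    # ms, illustrative
m_bar = {"LBP,B=4": 4.0, "HF,B=64": 6.0}      # illustrative
n = 24

ok_hcl1 = feasible_configs(t_fr, 0.5, n, m_bar)    # t_C(1) = 0.5 ms
ok_hcl3 = feasible_configs(t_fr, 25.0, n, m_bar)   # t_C(1) = 25 ms
```

With a cheap main classifier (hCl#1) only the fast LBP configuration survives the condition; with an expensive one (hCl#3) both pass, which mirrors the qualitative conclusion drawn from Table 4.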

Implementation of the FR approach on some classifiers
In this subsection, we discuss the applicability of the FR approach with real classifiers. Table 6 shows the implementation results of FR under FEO = {LBP, HF, CGF} with the MPR_LBP, MPBR_LBP [32], and MPLBP_riu (MPLBP) [33] operators on the Outex-tl84 data set. The magnitude of the image response under the circular Gabor filter at three scales and four rotations is used as a feature vector of dimension 12. The Czekanowski (Cze) [34] and Chi-2 [35] distances are used as similarity measures for the main (second) classifier. The accuracies of the MPR_LBP and MPBR_LBP operators in a single-classifier approach with the Cze distance are 97.88% and 97.46%, and the accuracy of the MPLBP approach with the Chi-2 distance is 88.96%. The experimental results show that, in some configurations, the FR not only does not reduce the accuracy but even improves it [17]. The STp values are shown in parentheses.

Improving the efficiency of the FR approach
The effective features approach and the implementation of the FR algorithm in parallel with the main classifier on reconfigurable devices are our two solutions for increasing the CTp value. They improve the efficiency of the FR approach so that it can be used with more classifiers. In the following, we describe them in detail.

Effective features approach
The computational complexity of the feature extraction operators can be decreased using the effective features (EF) approach [5]. The priority numbers of the Haralick features for the Outex-tl84 and UIUC data sets are shown in Table 7; the priority order is almost constant across different area sizes and data sets. The priorities extracted by the EF approach on the Outex-Inca, Outex-Horizon, and Scene-13 data sets are given in the Appendix, Table A1. Effective feature vectors are made by combining features based on their priorities. For instance, the first effective vector (EF#1) contains only the highest-priority feature (the seventh Haralick feature); the second is EF#2 = 〈F7 F4〉; and EF#13 contains all features (〈F7 F4 F2 F10 F6 F9 F8 F11 F3 F13 F12 F5 F1〉). Lower-numbered EF vectors have lower dimensions, which decreases the calculation time of the FR approach, so the t_FR value is reduced. In the following, we use the EF approach in the implementation of FR under FEO = HF and B = {4, 16, 64}; we select the Haralick operator because of its more complex construction compared with LBP. The experiments use the Outex-tl84 and UIUC data sets. Figure 9(a) plots the t_FR value (in milliseconds) for the effective vectors EF#1 to EF#13; effective vectors with lower dimensions yield a lower t_FR value. However, as Figure 9(b) shows, contrary to the positive effect of the EF approach on t_FR, it increases the number of selected categories (m): the FR approach selects more categories with low-dimensional effective vectors, and the t_C(m) value increases. Hence, improving CTp depends on decreasing the value of t_C(m) + t_FR; in other words, there is a trade-off between the cost of the FR (t_FR) and m.
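Constructing the EF vectors from the priority order quoted above can be sketched as follows; the feature values are toys, while the priority list is the EF#13 order from the text:

```python
import numpy as np

# Priority order of the Haralick features as listed in the text for EF#13:
# F7, F4, F2, F10, F6, F9, F8, F11, F3, F13, F12, F5, F1
priority = [7, 4, 2, 10, 6, 9, 8, 11, 3, 13, 12, 5, 1]

def effective_vector(k):
    """EF#k keeps the k highest-priority feature indices."""
    return priority[:k]

full_fv = np.arange(1.0, 14.0)            # toy values standing in for F1..F13
ef2 = effective_vector(2)                 # EF#2 = <F7 F4>
reduced = full_fv[[i - 1 for i in ef2]]   # lower-dimensional feature vector
```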
However, a lower-numbered EF vector makes the FR approach more robust against noise. Figure 10 shows the relation between Rb and STp; the experiments were done on the Scene-13 data set with FEO = LBP. Reliability drops only slightly across the different effective feature vectors (EFs), while the STp value improves with the more effective features. The corresponding experiments on the UIUC data set with FEO = LBP are shown in the Appendix, Figure A3. Figure 11 shows the inverse relation between Rb and STp for the different EF vectors: for 90% reliability, there are two choices of B (16 and 64), of which B = 16 attains the higher STp value. Figure 12 shows the CTp and Rb values under the EF approach on the Outex data set, with configuration parameters FEO = HF and B = {4, 16, 64}; using the EF approach increases the reliability criterion (Rb). Figure 12 also shows the effect of the EF approach on Rb and m in the presence of different noise levels, with the FR approach implemented under FEO = LBP and B = 64 on Outex-tl84; the EF approach has a positive effect on the robustness of Rb and m against noise. The effect of the EF approach on Rb and m under different noise levels on the UIUC data set is presented in the Appendix, Figure A4.

Hardware implementation of the FR approach
Running the FR algorithm simultaneously with some procedures of the main classifier, such as pre-processing and feature extraction, reduces the computation time overhead, so the proposed approach can be justified with more classifiers. Let t_PF be the computation time of the pre-processing and feature extraction algorithms. If t_PF > t_FR, the computation time overhead of the FR approach is zero; if t_PF < t_FR, the overhead is t_FR − t_PF. Figure 13 shows a simple block diagram of the proposed solution for decreasing t_FR, which can be implemented on reconfigurable devices (FPGAs) in a parallel fashion. Note that only the FR approach runs in hardware; the selected categories are sent to the second classifier for software-based computation. The hardware-based classification throughput (hCTp) criterion for evaluating the performance of the FR approach is calculated by (9), obtained by replacing t_FR with the reduced computation time overhead of Equation (10):

$$t_{OH} = \max(0,\; t_{FR} - t_{PF}) \quad (10)$$
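The overhead rule described above (zero when the FR is fully hidden behind pre-processing and feature extraction, t_FR − t_PF otherwise) can be sketched as:

```python
def fr_overhead(t_fr, t_pf):
    """Overhead of running FR in parallel with pre-processing/feature
    extraction: zero when FR is fully hidden behind t_PF (t_PF >= t_FR),
    otherwise the exposed remainder t_FR - t_PF."""
    return max(0.0, t_fr - t_pf)

hidden  = fr_overhead(t_fr=3.0, t_pf=5.0)   # FR fully hidden: overhead 0.0
exposed = fr_overhead(t_fr=8.0, t_pf=5.0)   # overhead 3.0
```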

FIGURE 14: The architecture of Haralick texture feature extraction on FPGA [4,5].

To extend the functionality of the FR approach, we implement the architectures of some LBP extensions on the Spartan and Virtex series of Xilinx FPGAs, with an image size of 128 × 128 and 8-bit grey levels. Table 8 lists the clock period, maximum frequency, and further hardware details of some LBP extensions, together with the calculation cost of the FR. The FPGA implementation results with the different operators show that the 'feature selection', 'feature extraction', and 'classification' costs mentioned in Section 2 are significantly higher than the cost of the hardware implementation of the FR approach, regardless of the machine type. Therefore, we conclude that hardware implementation is genuinely feasible. The architecture of the Haralick texture feature extraction on the FPGA, taken from references [4,5], is shown in Figure 14.

CONCLUSIONS
We have proposed the features' value range (FR) approach for improving the throughput of texture classifiers. For data sets with many categories and computationally costly classifiers, the FR approach can improve the throughput of image classification. The size of the local area (B) sets the trade-off between throughput improvement and reliability: more local blocks give (a) higher reliability (Rb), i.e. more accuracy, and (b) a lower selection throughput (STp). The experimental results show that the LBP operator gives the best overall performance (most applicable and reliable) among the operators used in the FR approach. The results also show that combining FR with some texture classifiers may not only improve the throughput of the main classifier but, in some cases, also improve the accuracy of the classification process. Using the effective features (EF) approach and a hardware implementation of the FR provides further opportunities to use this idea with fast texture classification methods. Finally, the experiments show that the FR is not robust in the presence of noise.
For future work, the efficiency of other state-of-the-art textural feature extraction methods within the FR approach can be compared with the implemented operators, and new state-of-the-art methods can be applied in the second (main) classifier. Hardware implementation of the FR for real-time pattern analysis is another avenue of research.

DATA AVAILABILITY STATEMENT
All text, figures, tables, and materials have been uploaded as individual files and are available from the publisher.

CONFLICT OF INTEREST
There is no conflict of interest for this paper.

FUNDING INFORMATION
There is no funding for this research paper.

Figure A2 shows the percentage of SC events (relative to the number of testing images) that occurred under different configurations on the Scene-13 and UIUC data sets. The number of testing images (TI#) for Scene-13 and UIUC is 750.