Fast depth intra coding based on texture feature and spatio-temporal correlation in 3D-HEVC

To alleviate the computational burden of depth intra coding in 3D-HEVC, a complexity reduction scheme based on texture feature and spatio-temporal correlation is proposed. Firstly, a maximum splitting depth layer decision algorithm is proposed to skip unnecessary splitting depth layers of the coding tree unit by utilising information from the previously encoded I frame in the same view. Secondly, a new texture complexity model is built by a pixel-based statistical method combined with edge detection. Based on the proposed model, each coding unit block is classified as a smooth block or a texture/edge block. On the coding unit level, an early termination of coding unit splitting algorithm for smooth blocks is proposed to filter out unnecessary coding blocks. Thirdly, on the prediction unit level, a fast candidate mode decision algorithm considering the prediction unit's type and spatial correlation is proposed to decide the candidate mode list directly. Experimental results show that the proposed algorithm reduces depth intra coding time by 53.8% on average, with only 0.43% BD-rate loss on synthesised views.


INTRODUCTION
With the rapid development of multimedia technology, simple two-dimensional video can no longer satisfy people's visual perception of the real world. Thus, three-dimensional video (3DV), with its sense of interaction and immersion, has started to emerge. Compared with two-dimensional video, 3DV adds depth perception on the basis of planar vision, forms stereo vision, and brings an immersive feeling to the audience. At the same time, 3DV also provides the choice of arbitrary views, and viewers can choose different perspectives to watch depending on their own preferences. The collection of real-world information has changed from passive acceptance to active acquisition. The 3DV format is usually presented as multi-view video plus depth (MVD) [1,2], which consists of a limited number of texture maps and corresponding depth maps. Hence, the amount of 3DV data is much larger than that of two-dimensional video, which puts great pressure on the transmission of video data. Given existing storage capacity and network transmission conditions, how to effectively compress 3DV is a hot issue of current research. A classification of existing fast depth intra coding algorithms is listed in Table 1: type I covers Chen [12], Fu [13], Saldanha [14] and Chen [15]; type II covers Shen [16], Hamout [17], Zhang [18], Lei [19], Guo [20], Zhao [21], Sanchez [22, 23], Park [24] and Zhang [25]; type III covers Li [26], Peng [27] and Jing [28]. Type I methods reduce the number of unnecessarily traversed coding units (CUs) or terminate CU partitioning early, type II methods reduce the prediction modes or optimise DMMs, and type III methods are comprehensive algorithms combining the first two types. For type I, Chen et al. [12] proposed an early termination of CU splitting when the rate-distortion cost (RDcost) of the current block with depth intra skip (DIS) mode is less than or equal to four times the RDcost of the first sub-block with DIS mode. Fu et al. [13] proposed to terminate CU splitting early if a DIS-parent CU is detected at the higher depth levels. Saldanha et al.
[14] utilised static CU splitting decision trees based on data mining and machine learning to achieve early termination of CU splitting. In our previous work [15], we proposed an adaptive QP convolutional neural network (AQ-CNN) structure to terminate CU partitioning early. For type II, Shen et al. [16] utilised intra coding information from spatio-temporal, inter-component, and inter-view neighbouring CUs to reduce the prediction modes. Hamout et al. [17] used tensor feature extraction and big data analysis to speed up the prediction mode decision process. Zhang et al. [18] reduced the prediction modes using the spatial distribution characteristics of the reference pixels. Lei et al. [19] proposed a simplified search algorithm for DMM1. Guo et al. [20] proposed a fast depth intra coding scheme using the grey-level co-occurrence matrix (GLCM) to classify CUs into different types, avoiding the check of some unnecessary modes for certain CUs. Zhao et al. [21] optimised DMMs based on the depth of the scene represented by the depth value. Sanchez et al. proposed a fast intra mode scheme by reducing candidate modes [22] and optimising DMM1 [23]. Park et al. [24] used edge classification in the Hadamard transform domain to optimise DMM1. In [25], the optimal prediction mode with segment-wise depth coding in rough mode decision is chosen when its RDcost is smaller than a certain threshold, and the RDcost of the wedgelet pattern of DMM1 is calculated by the squared Euclidean distance of variances. For type III, in our previous work [26], the frequency distribution characteristics of the CTU-level RDcost at depth layer 0 are used to predict the maximum depth layer of the CTU, the candidate mode list is decided early based on the frequency distribution characteristics of the RDcost of the optimal rough mode in rough mode decision together with spatial correlation, and a fast wedgelet pattern decision method based on K-Means is proposed to optimise DMM1. Peng et al.
[27] set a certain threshold on the minimum RDcost to reduce candidate modes, and used the properties of the current CU to skip smaller CU sizes. Jing et al. [28] proposed a classification and regression tree to predict the depth layer and reduce intra prediction modes.
To reduce the computational complexity of depth intra coding, we propose a fast depth intra coding scheme based on texture feature and spatio-temporal correlation. This paper is an extended version of our previous conference paper [29], in which the early termination of CU splitting (ETCS) and fast candidate mode decision (FCMD) algorithms were presented. In this paper, a maximum splitting depth layer decision (MSDLD) algorithm is further developed to reduce the number of unnecessarily traversed CUs. At the same time, a detailed description of the ETCS and FCMD algorithms is presented, and a richer comparison of experimental results is given. The main contributions are listed as follows: Owing to the strong temporal correlation between two consecutive I frames in the same view, the MSDLD algorithm is proposed to cut down the number of unnecessarily traversed CUs using the maximum splitting depth layer and the RDcost of the co-located CTU in the previously encoded I frame.
A new texture complexity model is built by pixel-based statistical analysis combined with edge detection. Based on this model, each CU block is classified as a smooth block or a texture/edge block. Depth texture characteristics and spatial correlation are combined to determine candidate modes early and to terminate unnecessary CU splitting.
The remainder of this paper is organised as follows: Section 2 presents the background and motivation. Section 3 describes the details of the proposed algorithm. Experimental comparisons are given in Section 4. Finally, Section 5 presents the conclusions.

BACKGROUND AND MOTIVATION
The CTU quad-tree partition structure in depth intra coding is the same as that in HEVC [30]. Each CTU covers a square area, which can be split into multiple sub-CUs. As shown in Figure 1, the rate-distortion cost (RDcost) calculation order of CUs in a CTU is given: the largest CU size is 64×64 at depth layer 0, and it can be divided into four 32×32 sub-CUs at depth layer 1. When the smallest CU size of 8×8 is reached, the partition process stops. To obtain the optimal quad-tree partition structure, each CTU needs to calculate the RDcost of the various CU sizes down to the maximum splitting depth layer 3, a total of 85 CUs (1 CU of 64×64, 4 CUs of 32×32, 16 CUs of 16×16 and 64 CUs of 8×8). Each CU also needs to evaluate combinations of the various PU sizes and prediction modes, resulting in high coding complexity. To study the maximum depth layer (D_max) of the CTU in depth intra coding, the D_max of five test sequences was counted under the experimental conditions in Table 4, and the results are shown in Figure 2. On average, 63.03% of CTUs have D_max = 0, 11.36% have D_max = 1, and only about 25% have D_max of 2 or 3. Since the depth map is composed of large flat areas and a small portion of sharp edges, flat areas are more suitable for larger CU sizes, while sharp edge areas usually choose smaller CU sizes. Therefore, if the D_max of the CTU can be determined early, the traversal of unnecessary depth layers can reasonably be terminated, avoiding the traversal of many unnecessary CUs. In HEVC, intra prediction uses the reconstructed pixel values of neighbouring PUs to predict the current PU. The selection of prediction modes is the key issue to be solved in intra prediction. HEVC uses larger and more varied block sizes to adapt to the content characteristics of high-definition video, and supports more types of intra modes to accommodate richer textures.
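The CU count above follows directly from the quad-tree structure: each depth layer d contributes 4^d CUs. A minimal sketch of that arithmetic (the function name is ours, for illustration only):

```python
# Sketch: count the CUs a 64x64 CTU must evaluate when traversed
# down to a maximum splitting depth layer d_max (4^d CUs per layer d).
def cu_count(d_max):
    """Number of CUs evaluated over depth layers 0..d_max."""
    return sum(4 ** d for d in range(d_max + 1))

# Full traversal to depth layer 3 evaluates 1 + 4 + 16 + 64 = 85 CUs,
# while stopping at D_max = 0 evaluates only 1, which is why deciding
# D_max early saves so much work.
print(cu_count(3))  # 85
print(cu_count(0))  # 1
```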
Intra prediction in HEVC supports five PU sizes (4×4, 8×8, 16×16, 32×32 and 64×64), each of which corresponds to 35 prediction modes, including the Planar mode, the DC mode and 33 angular modes, as shown in Figure 3 (left). As shown in Figure 3 (right), the PU and the CU have the same size for intra prediction; only when the CU size is 8×8 is the CU divided into four sub-PUs. To cut down the computational complexity of the encoder, a fast two-stage intra prediction algorithm is adopted in HEVC. In the first stage, the rough mode decision process is conducted: an RDcost function based on the sum of absolute transformed differences is used to screen out the N most likely candidate modes from the 35 modes (for PU sizes of 8×8 and 4×4, the eight best modes are selected; for PU sizes of 64×64, 32×32 and 16×16, the three best modes are selected). Due to the spatial correlation of neighbouring PUs, the two most probable modes of the spatially neighbouring PUs are added to the candidate mode list. In the second stage, the RDcost of all modes in the candidate mode list is calculated to select the best prediction mode with the minimum RDcost.
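The first-stage selection described above can be sketched as follows. This is an illustrative reading, not the HM implementation: the SATD-based cost values are hypothetical inputs, and only the size-dependent candidate count and the "keep the N cheapest modes" rule come from the text.

```python
# Illustrative sketch of rough mode decision: the candidate count N
# depends on PU size, and the N lowest-cost modes are kept.
def rough_mode_decision(pu_size, satd_cost):
    """satd_cost: dict mapping mode index -> SATD-based cost (toy values)."""
    n = 8 if pu_size in (4, 8) else 3          # 8 modes for 4x4/8x8, else 3
    ranked = sorted(satd_cost, key=satd_cost.get)
    return ranked[:n]

costs = {m: abs(m - 26) + 1 for m in range(35)}  # toy costs favouring mode 26
print(rough_mode_decision(32, costs))            # three modes nearest mode 26
```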
As shown in Figure 4, building on the above intra coding in HEVC, 3D-HEVC depth map intra prediction can be summarised in the following five steps. The first step is the traditional HEVC intra mode selection process, including rough mode decision and the most probable mode process. In the second step, DMM1 and DMM4 (DMMs) are added to the candidate mode list.

FIGURE 4 Depth intra mode decision process in 3D-HEVC
In the third step, all candidates in the candidate mode list are encoded using traditional residual coding to calculate their RDcost. As in (4), the view synthesis optimisation (VSO) technique is used to calculate the RDcost ($J_{VSO}$) of each candidate. In a conventional video coding system, a commonly used distortion function is the sum of squared differences (SSD), defined between the original and encoded depth block as

$$D_{depth} = \sum_{(x,y)\in PU} \left[ s_D(x,y) - \tilde{s}_D(x,y) \right]^2,$$

where $s_D(x,y)$ and $\tilde{s}_D(x,y)$ indicate the original and reconstructed depth map, respectively, and $(x,y)$ is the sample position in the PU. However, the conventional SSD metric is not a good estimate of the synthesised view distortion. Instead, the following view synthesis distortion metric provides a better estimate by weighting the depth distortion with the sum of absolute horizontal texture gradients:

$$D_{synth} = \frac{1}{2}\sum_{(x,y)\in PU} \alpha \left| s_D(x,y) - \tilde{s}_D(x,y) \right| \cdot \left[ \left| \tilde{s}_T(x-1,y) - \tilde{s}_T(x,y) \right| + \left| \tilde{s}_T(x+1,y) - \tilde{s}_T(x,y) \right| \right],$$

where $\tilde{s}_T$ indicates the reconstructed texture and $\alpha$ is a proportional coefficient. The two distortions are combined as

$$D = w_{syn} D_{synth} + w_{dep} D_{depth},$$

where $w_{syn}$ and $w_{dep}$ denote the weights for the synthesised view distortion and the depth map distortion, respectively. The RDcost is then

$$J_{VSO} = D + \lambda R, \qquad (4)$$

where $J_{VSO}$ is the RDcost based on view synthesis optimisation, $\lambda$ is the Lagrangian multiplier and $R$ is the bit rate. The fourth step is to apply the fast intra segment-wise depth coding (SDC) technique to all candidates in the candidate mode list. The fifth step is to determine the optimal prediction mode and whether it uses traditional residual coding or intra SDC.
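A minimal numeric sketch of the VSO cost in (4) follows: an SSD depth distortion, a gradient-weighted synthesis distortion estimate, and the Lagrangian combination. The values of alpha, the weights, lambda and the bit count are placeholders, not encoder values.

```python
import numpy as np

# Sketch of the VSO RDcost: SSD depth distortion plus a synthesis
# distortion estimate weighted by horizontal texture gradients.
def j_vso(s_d, r_d, r_t, bits, alpha=0.5, w_syn=0.5, w_dep=0.5, lam=10.0):
    d_depth = np.sum((s_d - r_d) ** 2)                 # SSD on the depth block
    grad = np.abs(np.diff(r_t, axis=1))                # horizontal texture gradients
    g = np.pad(grad, ((0, 0), (0, 1)), mode="edge")    # align with block width
    d_synth = alpha * np.sum(np.abs(s_d - r_d) * g)    # gradient-weighted estimate
    d = w_syn * d_synth + w_dep * d_depth              # combined distortion
    return d + lam * bits                              # Lagrangian RDcost

s_d = np.full((4, 4), 100.0)
r_d = s_d + 1                                          # one-level depth error
r_t = np.tile(np.arange(4.0), (4, 1))                  # texture with unit gradient
print(j_vso(s_d, r_d, r_t, bits=8))
```

The point of the sketch is the structure of the cost, not its magnitude: a depth error sitting on a strong texture gradient is penalised more, because it would visibly distort the synthesised view.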
The time distribution of 3D-HEVC depth intra coding is given in Figure 5. Of the total intra encoding time, texture video encoding accounts for only 14%, while depth video encoding accounts for 86%. In the depth intra mode decision process, the largest shares of encoding time belong to steps 2 to 4, which account for 25.18%, 27.50% and 23.39%, respectively. As described in the depth intra prediction process above, about 7 to 12 candidate modes are added to the candidate mode list. By calculating the J_VSO of each combination of candidate mode and transform coding method, the optimal combination is selected. Since most candidates in the candidate mode list are never selected, steps 3 and 4 incur intolerable time costs. If the candidate modes can be decided early, the encoding complexity will be greatly reduced.

THE PROPOSED FAST ALGORITHM
3D-HEVC achieves efficient encoding of MVD-format videos, and the coding technique for depth video plays an important role in 3D-HEVC. A depth video consists of large areas with slowly changing pixels and sharp boundaries, and is mainly used to synthesise virtual views. The boundary information contained in the depth map represents the difference between foreground and background, which has a direct impact on the quality of synthesised views. In the encoding process, border areas tend to choose finer partitions, and smooth areas tend to choose larger CU sizes [31]. The CTU, based on the quad-tree partition structure, needs to traverse depth layers 0-3 in turn. At each depth layer, a large number of RDcost calculations are required to select the best prediction mode, which brings very large computational complexity. Through in-depth study of the correlation between two adjacent depth maps, the optimal D_max of the current CTU can be determined early. At the same time, by extracting the texture information of the current CU, the partitioning of flat CU blocks can be terminated early, yielding a further reduction in encoding time. In depth intra coding, the Z-scan order is used to encode each CTU. Hence, when encoding the current CU, the optimal prediction modes of the above and left CUs have already been determined. Due to the strong spatial correlation in the depth map, the coding information of the neighbouring CUs can effectively guide the prediction mode selection of the current CU. There are two PU types in the depth map: one is composed of near-constant or slowly changing depth values, while the other contains sharp boundaries. DC and Planar modes are very suitable for encoding the first PU type. If the texture information of each PU is combined with spatial correlation to predict the candidate mode set more accurately, a large number of redundant prediction modes can be pruned. DMMs are mainly used to encode PUs with sharp edges.
If each PU is pre-processed before the DMM process by obtaining its edge information and combining it with spatial correlation, the DMM test can often be skipped. In this section, we introduce the proposed fast algorithm in detail.

Maximum splitting depth layer decision
In depth video, two consecutive I frames in the same view differ only in the positions of objects, due to object or camera motion, but share similar backgrounds. The texture features of the background or of the same object tend to be smooth, and larger CU sizes are suitable for coding these regions, while areas containing sharp edges choose smaller CU sizes. Since the current CTU has texture features similar to those of the co-located CTU in the previously encoded I frame, their coding information is also basically similar [32]. Consequently, it is feasible to exploit the encoding information of the co-located CTU in the previously encoded I frame to determine the D_max of the current CTU in the same view. Based on the above analysis, and given the strong correlation between co-located positions of two consecutive depth frames in the same view [33], the D_max correlation between two consecutive I frames was investigated statistically under the experimental conditions in Table 4. As shown in Table 2, D_max^colCTU denotes the D_max of the co-located CTU in the previously encoded depth map. The conditional probability P(D_max^curCTU | D_max^colCTU = 0) represents the D_max distribution of the current CTU when the D_max of the co-located CTU in the previously encoded depth map is equal to 0. P_m indicates the percentage of such CTUs among all encoded CTUs when D_max^colCTU is equal to 0. From Table 2, P(D_max^curCTU ≤ 2 | D_max^colCTU = 0) is up to 99.22% and P_m accounts for 60.61% on average. This means that when D_max^colCTU is equal to 0, the probability that the D_max of the current CTU does not exceed 2 reaches 99.22%. Therefore, D_max^curCTU is set to 2 when D_max^colCTU is equal to 0.
The D_max and the prediction modes of the CTU are determined by minimising J_VSO; hence, J_VSO is an important parameter for predicting D_max. The events E_0 and E_1 are defined by (7) and (8), respectively, where J_curCTU^0 and J_colCTU^0 represent the J_VSO of the current CTU and the co-located CTU when the splitting depth layer is equal to 0, and J_colCTU^1 represents the J_VSO of the co-located CTU when the maximum depth layer is equal to 1. The conditional probabilities P_A and P_B under the conditions of events E_0 and E_1 are exhibited in Table 3. As demonstrated in Table 3, 97.37% of CTUs on average have D_max = 0 when condition E_0 is satisfied, and CTUs with D_max = 1 account for 94.22% on average under condition E_1. P_a and P_b denote the percentages of CTUs satisfying conditions E_0 and E_1, respectively, among all encoded CTUs. P_a ranges from 17.90% to 47.94%, 33.26% on average, and P_b accounts for 2.74% on average. Based on the above analysis, the D_max of the current CTU is defined by (9).
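The MSDLD rule can be sketched as a small decision function. The exact inequalities of events E_0 and E_1 are given by Equations (7) and (8) in the original paper; the comparisons below are a plausible illustrative reading of them (RDcost of the current CTU at depth 0 versus the co-located CTU's costs), not the precise published conditions, and the variable names are ours.

```python
# Illustrative sketch of the MSDLD decision: the co-located CTU's D_max
# and RDcost in the previously encoded I frame bound the current CTU's
# search depth (reliability figures are from Tables 2 and 3).
def msdld(dmax_col, j0_cur, j0_col, j1_col):
    if j0_cur <= j0_col:          # event E0 (assumed form): D_max set to 0
        return 0
    if j0_cur <= j1_col:          # event E1 (assumed form): D_max set to 1
        return 1
    if dmax_col == 0:             # Table 2: D_max <= 2 with probability 99.22%
        return 2
    return 3                      # otherwise keep the full search depth

print(msdld(0, 90.0, 100.0, 120.0))  # 0: terminate at depth layer 0
```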

Early termination of CU splitting
The selection of the CU size is strongly related to the complexity of its texture. Chen et al. [34] proposed a fast CU size selection algorithm for HEVC intra coding that uses the angular second moment (ASM) of the grey-level co-occurrence matrix (GLCM) to measure image texture, accelerating the encoding of texture video. Considering the difference between the texture features of the depth map and the texture map, the texture feature and the edge information of the CU in the depth map are extracted using the GLCM [35] and the Sobel operator, respectively. For an image with n grey levels, the GLCM counts how often a pixel with intensity i occurs horizontally, vertically or diagonally adjacent to a pixel with intensity j. An element of the GLCM is obtained by accumulating how many times the pixel pair (i, j) occurs, yielding an n × n dependence matrix. The GLCM is defined over pixel coordinates (x, y), x, y = 0, 1, 2, …, n − 1, and GLCM coordinates (i, j), i, j = 0, 1, 2, …, n − 1, where d represents the step length. The GLCM then needs to be normalised so that each entry is the second-order joint probability p_{d,θ}(i, j) ∈ [0, 1] of two pixels separated by a distance d along direction θ with grey levels i and j (0 ≤ i, j < n), respectively. Haralick et al. [36] defined 14 feature parameters of the GLCM to represent texture characteristics. Ulaby et al. [37] found that only four characteristic values are uncorrelated; these can be easily calculated and achieve a high classification accuracy of texture complexity. To describe the characteristics of a CU more accurately, the angular second moment (ASM), contrast (CON) and correlation (COR) are applied in this paper. ASM is the sum of the squares of the element values of the GLCM; it measures the stability of the greyscale change of the image texture and reflects the uniformity of the greyscale distribution of the image.
CON measures the intensity contrast between a pixel and its neighbour over the whole image. COR measures the similarity of the grey values in the row or column direction. Formally, they are defined as

$$ASM = \sum_{i}\sum_{j} p(i,j)^2, \qquad CON = \sum_{i}\sum_{j} (i-j)^2\, p(i,j), \qquad COR = \frac{\sum_{i}\sum_{j} (i-\mu_x)(j-\mu_y)\, p(i,j)}{\sigma_x \sigma_y},$$

where $\mu_x$, $\mu_y$ and $\sigma_x$, $\sigma_y$ are the means and standard deviations of the row and column marginals of $p$. Since the maximum grey level n of the depth map reaches 255, computing the GLCM directly is very time-consuming. Therefore, to reduce the computational burden, the grey levels of the depth image are compressed to 16 and the step length d is set to 2 in this paper, although this brings some distortion on weak edges. The texture feature of the depth map can then be represented by combining the GLCM and the Sobel operator. In the Sobel operator [38], G_x and G_y represent the approximate horizontal and vertical gradients of each pixel. The approximate gradient magnitude of each pixel is calculated as |G| = |G_x| + |G_y|. If |G| is greater than a threshold (set to 5 empirically), the pixel is considered an edge point.
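The measures above can be sketched as follows, with the parameter choices taken from the text (16 grey levels, step length d = 2, edge threshold 5). Only the horizontal GLCM direction is shown, and the gradient is a simplified central-difference stand-in for the full Sobel kernels; function names are ours.

```python
import numpy as np

# Sketch: normalised horizontal GLCM over 16 grey levels with offset d,
# its ASM feature, and an edge-point count with threshold 5.
def glcm(block, levels=16, d=2):
    q = (block.astype(np.int64) * levels) // 256           # compress to 16 levels
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-d].ravel(), q[:, d:].ravel()):  # horizontal pixel pairs
        m[i, j] += 1
    return m / m.sum()                                     # normalise to probabilities

def asm(p):
    """Angular second moment: sum of squared GLCM entries."""
    return float(np.sum(p ** 2))

def edge_points(block, thr=5):
    """Count pixels with |Gx| + |Gy| > thr (central-difference gradients)."""
    b = block.astype(np.int64)
    gx = np.abs(b[1:-1, 2:] - b[1:-1, :-2])
    gy = np.abs(b[2:, 1:-1] - b[:-2, 1:-1])
    return int(np.sum((gx + gy) > thr))

flat = np.full((8, 8), 128)
print(asm(glcm(flat)), edge_points(flat))  # a flat block: ASM 1.0, no edges
```

A perfectly flat block concentrates all GLCM mass in one entry, so ASM reaches its maximum of 1.0 and the edge count is 0, which is exactly the signature the smooth-block test looks for.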
For homogeneous CUs, the pixels usually have similar values along the four directions (0°, 45°, 90°, 135°). The GLCM feature vector (GFV) of a CU or PU is defined by (19). As depicted in Figure 6, for a smooth region, the GFV is always equal to (0, 1, 0) for every direction θ (θ = 0°, 45°, 90°, 135°). Hence, to correctly characterise the texture features of the depth map, a new texture complexity model is built using the GLCM and the Sobel operator, as shown in (23). A CU's texture is divided into two categories: smooth, and edge or complex. Homogeneous regions are suitable for coding with large CU sizes, while small CU sizes are used for the edge or complex regions. If condition E_2 is satisfied (T is set to 5 empirically), the current CU is identified as a flat block and CU splitting is terminated early.
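The early-termination test can be sketched as below. This is an illustrative reading of Equations (19) and (23), whose exact forms are in the original paper: we assume the GFV components are (CON, ASM, COR) per direction, so a smooth block gives (0, 1, 0), and the inputs are assumed to come from the GLCM and edge computations described above.

```python
# Sketch of condition E2: a CU whose GFV is (0, 1, 0) in all four
# directions and whose edge-point count is below T = 5 is treated as
# smooth, so its splitting is terminated early.
def is_smooth_cu(gfv_by_theta, n_edge_points, t=5):
    smooth_gfv = all(gfv == (0, 1, 0) for gfv in gfv_by_theta.values())
    return smooth_gfv and n_edge_points < t

gfv = {theta: (0, 1, 0) for theta in (0, 45, 90, 135)}
print(is_smooth_cu(gfv, n_edge_points=0))  # True: stop CU splitting here
```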

Fast candidate mode decision
As described in Section 2, through the selection of the prediction modes in the rough mode decision process, the most probable modes process and the DMM process, a candidate mode list containing 3-12 candidate modes is finally generated. In depth intra coding, the Planar or DC mode is suitable for predicting large areas with smooth texture, DMMs are specifically used to encode sharp edges, and angular modes are used to predict complex texture regions. Guo et al. [20] utilised the ASM of the GLCM to identify smooth CUs, for which DMMs are skipped, and measured the directional features of the CU with the COR of the GLCM to prune angular modes. However, Guo's method [20] did not achieve a significant complexity reduction in depth intra coding. The main reason is that the texture features of the depth map, which is composed of large smooth areas divided by very few sharp edges, are not fully exploited, and only a few of the edge regions have directional texture. In this paper, we are more concerned with identifying most of the smooth areas and quickly determining the optimal prediction mode of the PUs within them. Due to the strong spatial correlation of the depth map, the pixels of neighbouring CUs are very similar, and the same prediction mode can often be applied to neighbouring regions. In this paper, the neighbouring PUs are defined as the left PU and the above PU of the current PU. Combining the texture features of the PU with this strong spatial correlation, the candidate mode list can be determined in advance without complicated RDcost calculations. Based on the above analysis, a fast candidate mode decision (FCMD) algorithm for depth intra coding is developed. Condition E_4 is defined by (24), and the candidate mode list (CML) is determined by (25); the number of candidate modes is reduced from 6 or 11 to only 3. Moreover, the most probable modes process is skipped.
CML = {0, 1, 26}, when E_4 is satisfied; otherwise, the CML is generated by the traditional method. (25)

Since DMMs are designed to represent sharp edges, they have no effect on the encoding of flat blocks. The depth map contains only a very small proportion of edge area, so only edge or complex blocks need to evaluate DMMs. If one of the neighbouring PUs has selected a DMM or the current PU contains edge points, DMMs are tested; otherwise, most of the unnecessary RDcost evaluations for DMMs are avoided. Hence, condition E_5 is defined by

E_5 = { E_3 || (M_leftPU or M_topPU ∈ {DMM1, DMM4}) }, (26)

where T is set to 0 empirically in E_3. Tests have been conducted to verify the reliability of conditions E_4 and E_5 by following the common test conditions (CTC) [39]. Statistical data extracted from the PU mode decision process are shown in Figures 7 and 8. As can be seen from Figure 7, for E_4, P_c represents the percentage of times E_4 is satisfied in the PU mode decision process; P_c ranges from 72% to 94%. P(E_4) is the probability that the optimal prediction mode belongs to {0, 1, 26} when E_4 is satisfied, and it reaches more than 99%. As depicted in Figure 8, for E_5, P(E_5), the probability that the optimal prediction mode belongs to the DMMs when E_5 is satisfied, reaches about 82-94%, and P_d, the percentage of DMM traversals avoided in the PU mode decision process, ranges from 28% to 72%. Taking all the above analyses into account, an efficient fast candidate mode decision algorithm can be developed by utilising texture features and spatial correlation.
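The two decisions in (25) and (26) can be sketched together. The condition flags are assumed to be computed as described in the text (E_4 from the PU's texture and neighbours, E_3 from the edge-point count); mode 0 is Planar, 1 is DC and 26 is the vertical angular mode.

```python
# Sketch of the FCMD decisions: when E4 holds, the candidate mode list
# collapses to {Planar(0), DC(1), vertical(26)}; DMMs are tested only
# when E5 holds, i.e. E3 is true or a neighbouring PU chose a DMM.
def fcmd(e4, e3, left_mode, top_mode, traditional_list):
    cml = [0, 1, 26] if e4 else list(traditional_list)     # Equation (25)
    test_dmms = e3 or left_mode in ("DMM1", "DMM4") \
                   or top_mode in ("DMM1", "DMM4")         # Equation (26)
    return cml, test_dmms

cml, dmm = fcmd(e4=True, e3=False, left_mode=2, top_mode=2,
                traditional_list=range(35))
print(cml, dmm)  # [0, 1, 26] False: three candidates, DMMs skipped
```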

The proposed overall algorithm
The flowcharts of the proposed maximum splitting depth layer decision and early termination of CU splitting algorithms, together with the modified depth intra coding process, are illustrated in Figure 9; their placement within the overall scheme is indicated by grey shading.

EXPERIMENTAL RESULTS
To test the effectiveness of the proposed algorithm, eight test sequences with two resolutions (1024×768 and 1920×1088) were evaluated under the common test conditions [39]. The test conditions are shown in Table 4.

Performance of the individual algorithm
As shown in Table 5, the experimental results of the maximum splitting depth layer decision (MSDLD) algorithm, which is based on the encoding information of the previously encoded I frame, are compared with Fu's Restrict-PRO [13]. Fu et al. [13] proposed an early termination of CU splitting based on the consistency of the depth intra skip mode between different depth layers. According to Table 5, on average, Fu's Restrict-PRO achieves a 31.0% encoding time reduction with a 0.07% BD-rate increase. The proposed MSDLD algorithm achieves comparable performance: 30.3% of the encoding time is saved, with a 0.10% increase in BDBR. As far as we know, this is the first method in which the correlation between two consecutive I frames is used to decide the D_max of the subsequent I frame in the recursive partitioning process. It is clear from this result that the MSDLD algorithm can predict the D_max of the current CTU and skip unnecessary CU sizes efficiently. The experimental results of the ETCS and FCMD algorithms compared with Sanchez's method [22] are shown in Table 6. Sanchez et al. [22] utilised the information of the neighbouring PUs and the border of the current PU to speed up DMM1 and reduced the size of the candidate mode list based on statistical analysis. Both the proposed FCMD algorithm and Sanchez's method [22] reduce the length of the candidate mode list. On average, Sanchez's method saves 37.4% of the encoding time with a 0.27% BD-rate loss. The proposed ETCS and FCMD algorithms also reduce the encoding time greatly and bring a negligible loss of coding performance compared with Sanchez's method [22]: on average, about 37.6% of the encoding time is saved with a 0.31% BD-rate loss.

Detail performance analysis of the proposed overall algorithm
The experimental results of the proposed overall algorithm compared with state-of-the-art algorithms are shown in Table 7. On average, 53.8% of the encoding time is saved, at the cost of a 0.43% BD-rate loss on synthesised views. The largest BDBR increase is only 0.81%, showing that the proposed overall method is efficient on all sequences. In terms of time saving, the smallest is 37.3% for "Newspaper", the largest is 68.8% for "PoznanHall2", and the time reduction of most sequences is concentrated between 40% and 65%. This reveals that the proposed method achieves stable coding performance and time reduction across sequences. Zhang's method [25] reduces the encoding time by 39.3% on average, with a 0.53% BD-rate increase. Fu's method [13] integrates Restrict-PRO into Zhang's [25], further reducing the complexity of CU partitioning, and achieves a 50.1% encoding time saving with a 0.64% BD-rate loss. Compared with [13, 22, 25], the proposed overall algorithm reduces the encoding time substantially while maintaining virtually the same visual quality as the original encoder. Therefore, the proposed algorithm achieves better performance than the other three algorithms in terms of both coding efficiency and encoding complexity.

CONCLUSION
To cut down the coding complexity of depth maps, a fast depth intra coding algorithm based on texture feature and spatio-temporal correlation was proposed. According to the D_max and RDcost information of the previously encoded I frame in the same view, the D_max of the current CTU is determined effectively. Then, a new texture complexity model was proposed by utilising the GLCM and Sobel edge detection. Based on this model, the CU block was divided into two categories: smooth blocks, and texture or edge blocks. On the CU level, the ETCS algorithm for smooth CUs was proposed to filter out unnecessary coding blocks. On the PU level, taking the PU's type and the prediction modes of neighbouring PUs into consideration, the FCMD algorithm was developed to prune the redundant candidate modes. Experimental results show that the proposed overall algorithm achieves a 53.8% encoding time saving on average, with only a 0.43% BD-rate loss on synthesised views.