The state of the art of two‐dimensional digital image correlation computational method

After remarkable work over the last three decades, the digital image correlation (DIC) computational technique has made great progress through the unremitting efforts of numerous researchers, and it has grown into one of the most powerful nondestructive surface deformation measurement methods used in various areas of experimental mechanics. DIC works by tracking the regional variation of gray-scale features between two successive images recorded before and after deformation. The objective of this paper is to report on the development of 2D-DIC. We first describe the basic principle of the DIC algorithm. The second part focuses on important progress and open problems of the algorithm in terms of computational efficiency and measurement accuracy. The major error sources that may exist in practical DIC measurement are then discussed in detail. Finally, some typical application domains of the DIC method are listed. This paper is intended to help researchers who are interested in 2D-DIC algorithms improve their understanding of this measurement technique.


INTRODUCTION
In general, a traditional measurement system with contact displacement/strain gages 1 is more suitable for small and uniform deformation of the loaded sample because of its high sensitivity and ease of manufacture. However, in practical engineering, few loaded samples undergo strictly uniform deformation; heterogeneous deformation is far more common. In this case, strain gages located at discrete points may fail to characterize the deformation over the whole region of concern. To address this issue and avoid damaging the sample itself, full-field noncontact approaches are urgently needed. After decades of development of noncontact measurement technology, many full-field detection methods based on optical pattern recognition have come into being, such as holographic interferometry, 2 electronic speckle pattern interferometry, 3 speckle photography, 4 and digital image correlation (DIC). Among these optical techniques, DIC, invented in the early 1980s, 5-7 has many outstanding advantages: (i) relatively simple preparation of the measuring system and low requirements on the experimental environment; DIC usually uses a white-light source, and laser light is not required; (ii) a high degree of automation in measurement processing.

BASIC COMPUTATIONAL ALGORITHM OF TWO-DIMENSIONAL DIC
Before the measurement is performed using the standard 2D-DIC method, it is crucially important to prepare the experimental setup adequately. The basic structure of a typical 2D-DIC system is shown in Figure 1. It requires a speckle pattern on the surface of the test sample, a stabilized light source, and image acquisition hardware. In DIC measurements, the speckle pattern serves as the carrier of deformation information. More specifically, it is a random intensity distribution formed by a finite number of pixels, which can be naturally present or artificially prepared on the surface of the test sample. All deformation information is carried on the test sample's surface. A good speckle pattern on the sample surface should be highly contrasted, stochastic, and isotropic. In actual measurements, the natural texture of the sample surface often cannot serve as an effective speckle pattern. Therefore, depending on the surface characteristics of the sample material and the size of the sample being tested, several artificial methods have been put forward. Spraying black and white paint with an airbrush is the most common technique for generating large-sized speckles, 16,17 owing to its usability and practicality. Lionello and Cristofolini 18 introduced a mathematical model of the airbrush gun and the main factors influencing speckle size, demonstrating how to use an airbrush to produce a desired speckle size. During the deformation of the sample, it is important that the speckle pattern covering the surface neither changes the material properties of the sample nor peels off, especially for biological soft tissue 19 and for aluminum in thermal property testing. 20 If the size of the specimen is at the micrometer scale, a specific approach is needed; for instance, etching the surface of the specimen to fabricate speckle patterns 21 or spinning an epoxy resin with powder to form speckle patterns. 22 For nano-scale test samples, various approaches, including deposited fluorescent nanoparticle tracers, 23 beam lithography techniques, 24 focused ion beam, 21 and other technologies, 25,26 can be used to obtain a high-quality speckle pattern.
Additionally, there are other noteworthy matters in 2D-DIC measurement that affect measurement accuracy. For instance, the measured surface must be flat, the optical axis of the imaging lens should be adjusted to be perpendicular to the measured plane, and the illumination should be mild and stable. Moreover, for complex loading systems (eg, multiaxial loading, 27 thermal loading, 28 dynamic loading 29 ), it must be ensured that no serious loss of speckles occurs during the loading process. DIC generally consists of processing two successive images taken at different loading stages. The images are decomposed into small subset windows, also known as zones of interest. 30 The principle of the standard subset-based DIC technique is to track the corresponding positions of matching subsets between the reference/undeformed images and the target/deformed images. Standard DIC uses specific matching algorithms to obtain an initial guess with integer-pixel accuracy and then employs iterative nonlinear optimization schemes together with intensity interpolation to reach subpixel accuracy. Figure 2 shows the basic principle of the standard subset-based DIC method.
To be more specific, a subset of (2M+1) × (2M+1) pixels centered at a reference calculation point P is selected from the initial reference image. This subset is used as an input, and its location is then sought in the target image. The first-order shape function is commonly used to represent the local linear transformation, 31 expressed as

x* = x + u + u_x Δx + u_y Δy
y* = y + v + v_x Δx + v_y Δy    (1)

where x* and y* are the final positions of the reference subset point along the x and y axes; u and v are the displacement components of the subset center P in the x and y directions; Δx and Δy are the initial distances between an arbitrary subset point Q and the subset center P; u_x, u_y, v_x, v_y are the displacement gradient components of the reference subset; and the deformation parameter vector of P is p = (u, v, u_x, u_y, v_x, v_y)^T. Figure 3 shows six different forms of linear transformation for the reference subset. To assess the similarity of intensity between the reference subset and the target subset, a suitable correlation criterion is required. The zero-mean normalized sum-of-squared-difference (ZNSSD) criterion and the zero-mean normalized cross-correlation (ZNCC) criterion have been widely used in DIC analysis and have proved to be the most robust correlation criteria, since both are insensitive to changes of brightness and image contrast. The relationship between the ZNCC and the ZNSSD is given by

C_ZNSSD = 2 (1 − C_ZNCC).    (2)
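The first-order subset mapping can be sketched in a few lines of Python. This is an illustrative sketch with made-up coordinates and deformation parameters, not code from any DIC package:

```python
# First-order shape function of subset-based DIC: maps a subset point Q at
# offset (dx, dy) from the subset center P = (x, y) to its position in the
# deformed image, given p = (u, v, u_x, u_y, v_x, v_y).

def warp_first_order(x, y, dx, dy, p):
    u, v, ux, uy, vx, vy = p
    x_star = x + dx + u + ux * dx + uy * dy
    y_star = y + dy + v + vx * dx + vy * dy
    return x_star, y_star

# Rigid-body translation: all gradients zero, every point moves by (u, v).
x1, y1 = warp_first_order(100.0, 100.0, 5.0, -3.0, (2.0, 1.0, 0, 0, 0, 0))
# Uniaxial stretch along x (u_x = 0.01): points farther from P move more.
x2, _ = warp_first_order(100.0, 100.0, 5.0, -3.0, (0.0, 0.0, 0.01, 0, 0, 0))
```

With zero gradients the warp reduces to a pure shift, while a nonzero u_x displaces points in proportion to their distance from the subset center, which is exactly the affine behavior the first-order shape function is meant to capture.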

FIGURE 3 Six different forms of linear transformation for the reference subset

The ZNSSD and ZNCC are defined as follows:

C_ZNSSD = Σ_i Σ_j [ (f(x_i, y_j) − f_m) / Δf − (g(x_i*, y_j*) − g_m) / Δg ]^2    (3)

C_ZNCC = Σ_i Σ_j [ (f(x_i, y_j) − f_m) (g(x_i*, y_j*) − g_m) ] / (Δf Δg)    (4)

where f(x, y) is the gray-scale value at coordinates (x, y) in the reference subset of the reference image; g(x*, y*) is the gray-scale value at coordinates (x*, y*) in the deformed subset of the current image; Δf = sqrt( Σ_i Σ_j [f(x_i, y_j) − f_m]^2 ) and Δg = sqrt( Σ_i Σ_j [g(x_i*, y_j*) − g_m]^2 ); and f_m and g_m are the mean gray-scale values of the reference and deformed subsets, respectively:

f_m = (1 / (2M+1)^2) Σ_i Σ_j f(x_i, y_j),  g_m = (1 / (2M+1)^2) Σ_i Σ_j g(x_i*, y_j*).
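The two criteria and their relation C_ZNSSD = 2(1 − C_ZNCC) can be checked numerically. The following sketch uses a small made-up 3 × 3 subset; the affine intensity change g = 2f + 15 models a simultaneous brightness and contrast shift:

```python
import math

# ZNSSD and ZNCC between two flattened subsets (plain Python lists).
def znssd_zncc(f, g):
    fm = sum(f) / len(f)
    gm = sum(g) / len(g)
    df = [a - fm for a in f]
    dg = [b - gm for b in g]
    nf = math.sqrt(sum(a * a for a in df))   # delta_f
    ng = math.sqrt(sum(b * b for b in dg))   # delta_g
    zncc = sum(a * b for a, b in zip(df, dg)) / (nf * ng)
    znssd = sum((a / nf - b / ng) ** 2 for a, b in zip(df, dg))
    return znssd, zncc

f = [10, 52, 37, 90, 23, 61, 47, 75, 30]          # 3x3 subset, flattened
g = [2 * a + 15 for a in f]                        # brightness/contrast change
znssd_ab, zncc_ab = znssd_zncc(f, g)               # ~0 and ~1: both invariant
g2 = [a + (i % 3) * 4 for i, a in enumerate(f)]    # mildly decorrelated subset
znssd_2, zncc_2 = znssd_zncc(f, g2)                # ZNSSD = 2 * (1 - ZNCC)
```

For the affine intensity change the ZNCC stays at 1 and the ZNSSD at 0, illustrating the insensitivity to brightness and contrast; for any pair of subsets the two criteria remain tied by the identity above, which is why optimizing one is equivalent to optimizing the other.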
The next major step in implementing DIC is to use a nonlinear optimization scheme, together with a specific interpolation, to achieve a better match with subpixel accuracy. It should be stressed that finite element-based global DIC has been developed into a global measuring method that is distinct from traditional subset-based local DIC; Pan et al 32,33 analyzed the experimental and theoretical errors of these two methods. Although the basic principle of DIC is only briefly described above, numerous papers have introduced other methods to increase measurement accuracy and efficiency to meet different measurement demands. Typical details of the 2D-DIC development process are as follows.

DEVELOPMENT OF 2D-DIC ALGORITHMS
In the last few years, the computational efficiency and measurement accuracy of the 2D-DIC computational algorithm have been significantly improved. The following part provides a brief overview of the latest research on these two aspects.

In terms of computational efficiency
With the increasing requirement for real-time measurement and the growing application of high-resolution/high-frequency cameras in experiments that involve DIC, the resulting computational complexity poses a great challenge to DIC's computational efficiency if accuracy is not to be sacrificed. In the early stage of the algorithm, many schemes were tried to speed up the correlation calculation for integer-pixel location, such as the coarse-fine searching scheme, 7 the nested fine search method, 34 and the Newton-Raphson (NR) algorithm. 35 The approximation of the Hessian matrix 36 subsequently improved the efficiency of the correlation calculation considerably.
Generally, the total computational time of DIC displacement measurement can be separated into two major parts: the integer-pixel displacement searching time and the subpixel registration time. Of the two, the integer-pixel calculation of the initial guess, founded on the robust and widely employed ZNCC coefficient, is more time consuming. 37,38 Much work has been devoted to reducing the computational complexity of traditional ZNCC computation. Huang et al 39 applied a newly derived fast recursive algorithm to simplify computation of the cross-correlation (CC) term arising in ZNCC and used a global sum-table approach based on the fast normalized CC technique to calculate the remaining terms. Pan and Li introduced a fast DIC method, 40 which includes two specific approaches to eliminate the redundant integer-displacement searching calculations of the NR-DIC algorithm. Simulated results showed that the computing speed of the fast DIC method is 100 to 200 times that of the traditional NR-DIC algorithm. In this method, a ZNCC coefficient was used to guide the calculation path through the computed points for reliable deformation evaluation in the ROI of the images. Moreover, a global interpolation coefficient look-up table constructed for bicubic interpolation was used to avoid repetitive subpixel interpolation calculations. Peng et al 41 proposed a new modified correlation criterion, namely, the variables-based sum of squared differences (VSSD). Based on the SSD criterion, variables (a, b) were introduced into the VSSD criterion to account for the influence of light variation on deformation measurement. Compared with the ZNSSD, it can reduce the calculation time with no loss of accuracy.
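The sum-table idea mentioned above can be sketched in a few lines. After one pass over the image, the sum of any subset window is obtained with four table lookups instead of a loop over all subset pixels; the image below is random made-up data:

```python
import random

# Global sum-table (integral image): s[i][j] holds the sum of all pixels
# above and to the left of (i, j). Any rectangular subset sum then costs
# four lookups, which is the core trick behind fast ZNCC-term computation.

def build_sum_table(img):                 # img: list of rows of pixel values
    h, w = len(img), len(img[0])
    s = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            s[i + 1][j + 1] = img[i][j] + s[i][j + 1] + s[i + 1][j] - s[i][j]
    return s

def window_sum(s, top, left, size):       # sum of a size x size subset
    b, r = top + size, left + size
    return s[b][r] - s[top][r] - s[b][left] + s[top][left]

random.seed(0)
img = [[random.randint(0, 255) for _ in range(32)] for _ in range(32)]
s = build_sum_table(img)
direct = sum(img[i][j] for i in range(4, 4 + 11) for j in range(7, 7 + 11))
fast = window_sum(s, 4, 7, 11)            # identical result, O(1) per window
```

The same table built on the squared image gives the sum of squares, so the subset means and the Δf, Δg normalization terms of ZNCC can all be read off in constant time per candidate location.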
In the case of light changes, the VSSD is defined as

C_VSSD = Σ_i Σ_j [ a f(x_i, y_j) + b − g(x_i*, y_j*) ]^2

where a and b account for the scale and offset of the intensity change. In 2013, Pan et al developed a fast and robust DIC algorithm, called the inverse compositional Gauss-Newton (IC-GN) algorithm, to eliminate redundant computations. 42 Specifically, the innovations mainly include the following elements: (i) the IC-GN algorithm was combined with a robust ZNSSD criterion and a linear displacement mapping function; (ii) an improved reliability-guided displacement tracking strategy was used to provide the initial guess, and an interpolation coefficient look-up table was employed to avoid repeated gray-scale intensity interpolation at subpixel locations. Experiments showed that the proposed method is about 3 to 5 times faster than the forward additive Newton-Raphson algorithm. Based on IC-GN, a seed point-based parallel DIC method was suggested by Shao et al to achieve higher speed. 43 The deformation of the seed points is used to obtain an improved initial guess for their neighbors. After the first seed point is computed, the remaining seed points are computed simultaneously in different threads. Each computed point whose correlation coefficient exceeds a given threshold can be treated as a new seed point for its neighbors. In addition, the parallel DIC technique has been implemented with multiple child threads 44 to make full use of the computing capability of multicore computers. Although several real-time 2D-DIC measurement methods have been reported 45,46 recently, these methods were based only on a multipoint measurement mode, not genuine full-field measurement. In short, they need further development.
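The computational saving of the inverse compositional formulation can be illustrated with a 1D toy reduction: estimating a pure translation rather than the full six-parameter warp. This is a sketch with made-up signals, not the authors' implementation; what it shows is that the gradient and Hessian are built once from the reference and reused at every iteration:

```python
import math

# 1D toy version of the inverse compositional Gauss-Newton (IC-GN) idea:
# estimate a pure translation p minimizing sum over x of [g(x + p) - f(x)]^2.
# The steepest-descent term dfdx and the (here scalar) Hessian H come from
# the REFERENCE signal only, so they are computed once; the forward additive
# NR scheme would recompute them from the warped target at every iteration.

def s(x):
    return math.sin(0.4 * x) + 0.5 * math.cos(1.1 * x)

N = 200
true_shift = 0.35
f = [s(i) for i in range(N)]                    # reference "image"
g = [s(i - true_shift) for i in range(N)]       # deformed "image"

def interp(arr, x):                             # linear subpixel interpolation
    i = int(math.floor(x))
    t = x - i
    return arr[i] * (1 - t) + arr[i + 1] * t

dfdx = [0.0] * N                                # reference gradient (fixed)
for i in range(1, N - 1):
    dfdx[i] = (f[i + 1] - f[i - 1]) / 2.0
lo, hi = 2, N - 2                               # interior "subset"
H = sum(dfdx[i] ** 2 for i in range(lo, hi))    # precomputed scalar Hessian

p = 0.0
for _ in range(30):
    dp = sum(dfdx[i] * (interp(g, i + p) - f[i]) for i in range(lo, hi)) / H
    p -= dp       # inverse compositional update: W(x;p) o W(x;dp)^(-1)
# p now approximates true_shift, up to interpolation bias
```

In the real 2D algorithm the same structure holds with a 6 × 6 Hessian and the first-order warp of Equation (1); the per-iteration cost is dominated by interpolating the target subset, which is exactly what the interpolation coefficient look-up table accelerates.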

In terms of measurement accuracy
The large interframe deformation problem between the reference image and the target image, which arises when the frame rate of the imaging system is inadequate, usually occurs in situations such as explosion, collision, high-speed motion, or fast rotation, where the traditional standard DIC algorithm leads to large calculation errors.
A reliable initial guess calculation method plays a significant role in guaranteeing the accuracy of DIC analysis. Zhao et al 47 used an intelligent initial guess calculation method based on three population-based intelligent algorithms (PIAs): the genetic algorithm, differential evolution, and particle swarm optimization. It determines the initial condition of the first point defined in the ROI for the quasi-Newton (QN) method, 48 whose iteration formula can be written as

p_1 = p_0 − α H^(−1) ∇C(p_0)

where p_1 is the approximation after an iteration, p_0 represents the initial guess, α > 0 is the step size, H is the Hessian matrix, and ∇C(p_0) denotes the gradient of the correlation function. In this method, three improving strategies (ie, the Q-stage evolutionary strategy (T), the parameter control strategy (C), and the space expanding strategy (E)) were also proposed to augment the PIAs, and DE-TCE was proven to be the most effective.
Given discontinuous displacements in the ROI of the speckle image, that is, strain-field or displacement jumps (eg, crack profiles, frictional interfaces, or other fracture areas), it typically takes a significant amount of time to exclude the discontinuous areas from the entire set of images, and points in discontinuous regions often lead to poor correlation quality. Various related algorithms have been built for dealing with cracks. Other works 17,49-51 introduced the methodology of the extended finite element method into the DIC algorithm, leading to a method called extended DIC (X-DIC). The X-DIC method is a global correlation algorithm that can track heterogeneous nodal displacements with arbitrary discontinuities in one pass. Helm 52 presented a modified method based on the NR algorithm to address the analysis of multiple-crack specimens. A technique called two-step X-DIC has been developed to satisfy the requirement of full-field displacement measurement in the crack domain. 53 An approach named subset splitting was proposed by Poissant and Barthelat 54 to deal with other discontinuous displacement situations. The speckle subset is divided into two parts: the "master section," consisting of the subset center pixel and most of the pixels, and the "slave section," separated by the intersecting splitting line and excluded from the correlation coefficient calculation. The search for the best-fit mapping of the master and slave sections proceeds by the NR method until the splitting tolerance is met or until the maximum number of iterations (five for most subsets) is reached. The corresponding NR iteration is expressed as

p_(k+1) = p_k − [H]^(−1) ∇C(p_k)

where [H] is the Hessian matrix, approximated as proposed by Vendroux and Knauss, 36 and ∇C is the gradient of the correlation coefficient. In addition, it is worth pointing out that a reliability-guided displacement tracking (RGDT) strategy was first proposed by Pan. 55 The core idea of the path-dependent RGDT-DIC technique is to use the point with the highest ZNCC coefficient to guide the transfer of deformation parameters to neighboring points, thus ensuring that the calculation path proceeds in a reliable direction and largely avoiding error propagation. Pan and Tian 44 further improved the performance of the RGDT strategy: in the improved RGDT-DIC, successfully calculated points are given the same priority to extend the calculation simultaneously to their neighbors.
Other techniques, such as the interpolation function, are also crucial to improving measurement accuracy. Luu et al 56 introduced a B-spline interpolation scheme whose kernel of order n is built by repeated convolution of the zeroth-order kernel. In this work, an inverse gradient weighting technique is combined with the B-spline interpolation family, which can enhance DIC analysis performance at points where the image gradient is approximated by selecting appropriate coefficients. Prefilter and postfilter theories are introduced into the B-spline interpolation, and it is notable that the B-spline interpolation prefilter is applied using a recurrence relation. Numerical simulation experiments revealed that the computation speed of the proposed B-spline interpolation is almost equal to that of the commonly used bicubic interpolation algorithm. 58 In the last few years, the measurement accuracy of 2D-DIC systems has been greatly improved thanks to these theories and studies. The resolution of the computed displacement with a typical DIC method is around ±0.1 to 0.01 pixel, 59 and the bias of the computed displacement can even reach 0.004 pixels. 60 When high-precision measurement results are required, careful attention must be paid to the design of the algorithm, the user parameter settings, and the image acquisition process.

THE MAIN ERROR SOURCES IN 2D-DIC MEASUREMENT
There are several nonnegligible error sources affecting the accuracy of DIC measurement, such as algorithmic matching errors, subpixel interpolation errors, and image distortion. The main sources of error are summarized below from two aspects: the DIC analysis algorithm and the image acquisition process.

Errors caused by DIC analysis algorithm
The errors caused by the DIC analysis algorithm have long been a topic of active research. This section introduces the analysis of errors caused by the chosen shape functions, correlation criteria, interpolation schemes, and subset sizes.

The effect of shape function's order
It is well known that the zero-order shape function provides mapping only for rigid-body planar motion, whereas the first-order shape function can depict an affine transformation, including translation, rotation, shear, and normal strains. For an irregularly deformed subset, a shape function with quadratic terms can be used. Lava et al 61 introduced higher-order terms into the subset shape function to describe severely heterogeneous strains. They also recommended the second-order subset shape function as a clear prerequisite for reducing systematic errors.
In summary, one key issue is to choose the most appropriate shape function for the unknown deformation of the current subset so as to minimize the measurement error. Schreier and Sutton 62 investigated the systematic error due to mismatched shape functions. Yu and Pan 63 revealed through numerical translation tests that overmatched subset shape functions do not add systematic errors but may introduce additional random errors, and they highly recommended second-order shape functions as a default. Xu et al 64 proved that the second-order shape function is more suitable than the first-order shape function for local deformation. In general, the actual degree of deformation of the subset cannot be known in advance, so simulating the deformation beforehand is conducive to selecting the appropriate shape function.

The effect of correlation criterion
In order to evaluate the similarity of the intensity patterns of the reference subset and the target subset, various correlation criteria, including cross-correlation (CC), sum of squared differences (SSD), parametric SSD (PSSD), and their derived forms, have been proposed and used. 36,65,66 Among them, the ZNSSD correlation criterion has become one of the most commonly used.
In 2010, Pan et al 67 investigated the interrelationship of three widely applied correlation criteria: the ZNCC criterion, the ZNSSD criterion, and the PSSDab criterion with two additional unknown parameters. Both theoretical and experimental results indicated that these correlation criteria are equivalent, and that the PSSDab criterion offers a slight improvement in efficiency. In 2012, the variables-based SSD (VSSD) turned out to be more advantageous in varying-light environments than ZNSSD and ZNCC. Recently, a new correlation criterion, the so-called weighted ZNSSD, 68 which combines the classical ZNSSD with a self-adaptive Gaussian window function, was proved to be insensitive to the subset size and to provide more reliable and accurate displacement information.

The effect of interpolation scheme
To achieve subpixel accuracy, the intensity interpolation scheme plays an important role, because it reconstructs the gray values and gradients at subpixel positions. In principle, the sinc function is the ideal interpolator, but it cannot be used directly in numerical calculations, so it is substituted by polynomial interpolation. Typically, the B-spline and bicubic interpolation families have been widely applied in the classic NR-DIC algorithm. Schreier et al 69 investigated the systematic errors due to intensity interpolation in detail as early as 2000. Studies have shown that high-order interpolation schemes should be prioritized, as they reduce systematic errors. In 2009, a numerical study based on plastic FEA was conducted by Lava et al. 61 They compared bicubic intensity interpolation with bilinear interpolation; the results showed that the bicubic routine improves accuracy by about 50%. In 2015, Baldi and Bertolino 70 studied the effects of various polynomial kernels (including Lagrange polynomials, B-spline interpolants, and cubic interpolants) on the systematic errors and standard deviations of displacement data obtained from DIC. In that work, the cubic B-spline polynomial was demonstrated to be the best-performing kernel. Gao et al 71 proposed a novel experimental method for measuring the interpolation bias due to subpixel displacement; this technique can effectively suppress the influence of mechanical errors of the translation stage and out-of-plane displacements on the analysis of the interpolation error. It is not hard to see that newly developed interpolation functions offer better performance.
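The advantage of higher-order interpolation is easy to demonstrate in 1D. The sketch below (made-up smooth signal, Keys cubic convolution kernel with a = −0.5, a common choice in bicubic schemes) compares the maximum reconstruction error of linear and cubic interpolation at subpixel positions:

```python
import math

# Subpixel intensity reconstruction: linear vs Keys cubic convolution
# interpolation on a smooth 1D signal. The cubic scheme reproduces the
# signal much more faithfully between samples, which is why high-order
# interpolation reduces DIC systematic (bias) errors.

def keys_kernel(t, a=-0.5):
    t = abs(t)
    if t <= 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def interp_linear(arr, x):
    i = int(math.floor(x)); t = x - i
    return arr[i] * (1 - t) + arr[i + 1] * t

def interp_cubic(arr, x):                       # 4-tap cubic convolution
    i = int(math.floor(x)); t = x - i
    return sum(arr[i + k] * keys_kernel(t - k) for k in (-1, 0, 1, 2))

sig = [math.sin(0.5 * n) for n in range(40)]    # sampled smooth signal
xs = [5 + 0.1 * j for j in range(200)]          # subpixel query positions
err_lin = max(abs(interp_linear(sig, x) - math.sin(0.5 * x)) for x in xs)
err_cub = max(abs(interp_cubic(sig, x) - math.sin(0.5 * x)) for x in xs)
# err_cub is substantially smaller than err_lin
```

Since the DIC residual is built from these reconstructed intensities, the interpolation error translates directly into a position-dependent displacement bias, so the order-of-magnitude gap seen here is consistent with the roughly 50% accuracy improvement reported for bicubic over bilinear.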

The effect of subset size
In the subset-based DIC method, the subset size (ie, template size) is a basic user-defined parameter that has a significant impact on the accuracy of the measurement results, as illustrated in the literature. 36 The chosen subset should be large enough to contain sufficient unique features to achieve a reliable match between nonuniformly deformed local subimages. However, an oversized subset can cause systematic errors and additional computation time: a larger subset size allows greater DIC precision but reduces the spatial resolution of the measured data field. As such, the spatial resolution and measurement precision of DIC are limited by the resolution of the camera. Yaofeng and Pang 72 held that choosing the optimal subset size requires a tradeoff between random errors and systematic errors. Remarkably, the selection of the subset is closely related to speckle quality, and studies on speckle quality assessment 73-79 provide a reference. Moreover, Xu et al 64 studied the effects of various shape functions on the selection of subset size. They pointed out that a first-order shape function with a large subset size can obtain a better result for uniform deformation, whereas a second-order shape function with a moderate subset size is more suitable for high-gradient deformation. The trend is to use a self-adaptive scheme to set the optimal subset size, 80 instead of choosing it according to the operator's experience and intuition.

Errors in image acquisition process
In the course of obtaining the reference and target images, the 2D-DIC system faces the challenge of avoiding disturbances from adverse factors that affect measurement precision in physical experiments. The potential error sources can be summed up as follows: (i) the quality of the speckle pattern; (ii) image distortion caused by the geometric distortion of the imaging lens, motion of the sensor, and the current camera operating state; (iii) the experimental environment, such as fluctuating light conditions; and (iv) out-of-plane motion of the test sample surface. Although 2D-DIC technology has been developed for over 30 years, its accuracy still depends to a large extent on these factors in the image acquisition process. In short, high-precision DIC measurements require particular attention to the aforementioned details.

The effect of speckle pattern quality
The quality of the speckle pattern is fundamental to DIC measurement accuracy. Over the years, the choice of parameters for assessing the quality of speckle patterns has received wide attention. These assessment parameters serve two purposes: guiding the preparation of the speckle pattern on the surface of the test sample and predicting the accuracy of the DIC measurement. In the broadest sense, known performance metrics can be classified into global and local parameters.
Local parameters reflect the local quality of the speckle pattern on the basis of individual subsets. Two local assessments, subset entropy 72 and the sum of the square of the subset intensity gradients (SSSIG) in the horizontal and vertical directions, 73 judge the quality of the subimage pattern in different ways. The former is the average of the absolute intensity differences between each pixel and its eight neighbors, while the latter derives its theoretical model from the SSD correlation algorithm.
A global indicator directly characterizes the whole speckle pattern image for DIC. The mean speckle size, a simple global assessment applying a morphological feature extraction method, was proposed by Lecompte. 74 The concept of the mean subset fluctuation was presented by Hua et al; 75 it is formed by the mean sum of a pixel and its eight adjacent pixels from subset to subset to reflect the gradient trend of the speckle subset. A Shannon-entropy assessment was introduced by Crammond et al. 76 In the work of Yu et al, 77 an integrated approach was used, taking two statistical variables to estimate the overall quality of speckle patterns; the two evaluation parameters are the mean intensity gradient 78 and the mean intensity of the second derivative, 77 both global parameters. More recently, a new global parameter called the root mean square error was proposed by Su et al; 79 notably, it takes into account the total error comprising both systematic and random errors. Providing a standard speckle pattern without losing efficiency remains a hot topic for researchers.
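As an illustration of such a global metric, the mean intensity gradient can be computed in a few lines. The sketch below uses a made-up binary speckle image and a featureless one; the definition (average magnitude of the central-difference gradient) follows the usual formulation of this metric:

```python
import math
import random

# Mean intensity gradient (MIG): the average magnitude of the local
# intensity gradient over the whole image. A higher MIG generally
# indicates a pattern better suited for DIC matching.

def mean_intensity_gradient(img):
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (img[i][j + 1] - img[i][j - 1]) / 2.0
            gy = (img[i + 1][j] - img[i - 1][j]) / 2.0
            total += math.sqrt(gx * gx + gy * gy)
    return total / ((h - 2) * (w - 2))

random.seed(1)
speckle = [[random.choice((20, 230)) for _ in range(64)] for _ in range(64)]
flat = [[128] * 64 for _ in range(64)]
mig_speckle = mean_intensity_gradient(speckle)   # large: rich gradients
mig_flat = mean_intensity_gradient(flat)         # zero: no features at all
```

The high-contrast random pattern yields a large MIG while the featureless surface yields exactly zero, matching the intuition that DIC cannot register subsets that carry no gradient information.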

The effect of image distortions introduced by the camera itself
In a typical experimental environment, the image distortions mainly depend on the hardware of the digital camera itself. Errors due to lens distortion and camera self-heating have been researched in depth, and many techniques have been put forward in recent years to alleviate the effect of distortion on the accuracy of DIC measurement systems.
Lens distortion is one of the main causes of image distortion. In high-precision DIC measurement systems, the degree of image distortion caused by the lens is strongly related to the quality of the imaging equipment. In general, macroscopic measurements may neglect the impact of lens distortion given an appropriate combination of CCD sensor and high-quality lens. Lava et al 81 emphasized the negative effects of lens distortion on the determination of large strain gradient fields at microstrain scales (≤1000 μm/m) and implied the necessity of camera calibration and distortion compensation for precise acquisition. To restore distortion-free images, several proven approaches for 2D-DIC systems have been presented. 82-84 In 2014, Dufour et al 85 developed a novel method that incorporates integrated DIC to extract a parametric description of the distortion fields. In addition, self-heating is a very common phenomenon for cameras in a working state. After a heating-up stage of about 1 to 2 hours, the internal temperature reaches a steady state. The temperature increase caused by self-heating not only raises dark current noise but also causes thermal expansion of the various supports connecting the camera sensor, resulting in slight movement of the sensor and subsequent image expansion and translation. In short, self-heating can cause severe and complex image distortion. Tests with six digital cameras conducted by Ma et al 86 showed that the strain error rate is approximately 8 to 20 με/°C, and the maximum total strain error of up to 230 με is large enough to be considerable. For long-duration operation of a digital camera, it is necessary to preheat the camera to the thermal balance stage before the experiment.
Recently, Pan et al 87 reported that a high-quality bilateral telecentric lens can solve the self-heating problem of digital cameras while maintaining negligible lens distortion; its use in DIC was suggested, with other techniques then compensating the measurement results. However, high-quality commercial telecentric lenses have obvious drawbacks, such as a fixed field of view, high cost, and limited depth of field. To correct the error at lower cost, Pan et al 88 subsequently proposed a generalized compensation method yielding a low-cost, high-accuracy 2D-DIC system. Up to now, the use of a high-quality camera is still the primary way to minimize errors introduced by image distortion in 2D-DIC measurements.

The effect of noise caused by the experimental environment
Speckle images are inevitably polluted by various kinds of noise, arising mainly from illumination fluctuations and the acquisition hardware, such as thermal noise, readout noise, and cutoff noise. This noise directly affects the accuracy and reliability of image registration and subpixel interpolation. In general, using high-quality hardware (a low-noise camera) is a straightforward and effective way to decrease or suppress the effect of image noise. Another simple approach is to average the image data at each measurement step. 89 Typically, an average of 100 images is sufficient to suppress the noisy signals and ensures results as good as those recorded by high-quality cameras. 71 More recently, presmoothing the speckle images with a Gaussian low-pass filter with a kernel of 5 × 5 pixels before correlation analysis was proposed by Pan. 90 In that study, both numerical simulations and experiments showed that the proposed method can reduce the bias error of DIC.
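The effect of frame averaging is straightforward to verify numerically. The sketch below simulates 100 noisy exposures of a static scene with made-up noise parameters; averaging reduces the zero-mean noise variance by roughly a factor of N:

```python
import random

# Averaging N images of a static scene suppresses zero-mean sensor noise:
# the noise standard deviation of the averaged frame is about sigma/sqrt(N),
# so with N = 100 the residual noise variance drops by about 100x.

random.seed(42)
true_pixel = 100.0       # noiseless intensity of every pixel in this toy scene
sigma = 2.0              # per-frame noise standard deviation (made up)
n_pix, n_frames = 400, 100
frames = [[true_pixel + random.gauss(0, sigma) for _ in range(n_pix)]
          for _ in range(n_frames)]

# Mean squared error of one raw frame vs the 100-frame average.
single_err = sum((v - true_pixel) ** 2 for v in frames[0]) / n_pix
avg = [sum(fr[k] for fr in frames) / n_frames for k in range(n_pix)]
avg_err = sum((v - true_pixel) ** 2 for v in avg) / n_pix
# avg_err is roughly single_err / 100
```

This only suppresses random noise of a static scene between exposures; it does nothing for systematic effects such as lens distortion or illumination drift, which is why averaging is mostly useful in static and quasi-static tests.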
Apart from the random errors from image noise, illumination fluctuations are another unfavorable factor that may occur in actual experiments. It is generally known that the ZNSSD and ZNCC criteria are insensitive to offset and linear scale changes in white-light illumination. Jerabek et al 91 proposed a concrete procedure for preventing overexposure while preserving optimal image contrast at the maximum light intensity, determined by trading off two characteristic parameters: the mean of the strain standard deviations (SSDs) and the standard deviation of the mean strain values. In general, the image noise generated during static and quasi-static experiments is easier to handle than that in dynamic processes, and reducing image noise in real-time experiments requires additional analysis.
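The illumination insensitivity of the zero-normalized criteria is easy to verify: ZNCC subtracts each subset's mean and divides by its standard deviation, so any offset and positive linear scaling of intensity (g = a·f + b, a > 0) cancels out. A minimal sketch:

```python
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two gray-level subsets
    f and g (2-D arrays of equal shape); returns a value in [-1, 1]."""
    fz = f - f.mean()  # remove intensity offset
    gz = g - g.mean()
    # normalization by the subset norms removes linear intensity scale
    return float(np.sum(fz * gz) / np.sqrt(np.sum(fz * fz) * np.sum(gz * gz)))
```

A perfectly matched subset scores 1 even if the deformed image is globally brighter and offset (e.g., g = 1.5·f + 30), which is exactly why ZNCC/ZNSSD are preferred over plain SSD under fluctuating illumination.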

The effect of out-of-plane motion of specimen surface
When researchers use 2D-DIC, out-of-plane motion of the sample surface is considered one of the most important sources of potential error. The sample surface will inevitably move to some extent relative to the imaging sensor; these movements include out-of-plane forward and backward translation as well as out-of-plane rotation during loading. That is to say, the distance between the measured sample and the camera changes during the actual measurement process. Even small out-of-plane motion violates the assumptions of the pinhole imaging model and thus introduces an additional error. It is usually considered that the larger (smaller) the displacement gradients, the higher (lower) the sensitivity to out-of-plane motion, especially in strain measurements.
The study by Haddadi and Belhabib 92 showed that an out-of-plane translation of 1 mm brought a strain measurement error of up to 2 × 10⁻³ mm⁻¹ when the distance between the surface of the test sample and the camera was about 21 cm. Another study 93 indirectly demonstrated that a device with telecentric lenses is a good solution to limit the error caused by out-of-plane motion: a 2D camera using a telecentric lens in a test with a rotation of ≈ 20° reduced the error from 1250 με/mm to 25 με/mm compared with a standard lens. Recent work by Pan et al 87 confirmed that a bilateral telecentric lens is scarcely affected by out-of-plane motion of the sample surface; this agrees with the fact that longer focal lengths indirectly lead to a smaller out-of-plane motion error. Hoult et al 89 illustrated five potential solutions for correcting the strain error caused by out-of-plane motion, showing that it is possible to keep the mean strain error within 5 με. In brief, the distance between the camera and the sample under test can be appropriately increased to limit the error caused by out-of-plane motion, and the approach using a telecentric lens can be regarded as relying on the same principle. If large or superimposed out-of-plane motion of the sample surface occurs during the experiment, a stereovision (3D-DIC) scheme is also a feasible solution.
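Under the pinhole model, the fictitious strain caused by a pure out-of-plane translation can be estimated from geometry alone. The sketch below is an illustration of that geometric argument, not the correction method of refs. 87 to 89: moving 1 mm closer at a 210 mm stand-off already produces a uniform apparent strain of about 4.8 × 10⁻³.

```python
def apparent_strain(delta_z, stand_off):
    """Uniform fictitious strain from moving the target delta_z toward a
    pinhole camera at working distance stand_off (same length units).

    The pinhole magnification scales as z / (z - delta_z), so the apparent
    strain is delta_z / (stand_off - delta_z), which is approximately
    delta_z / stand_off for small out-of-plane motion."""
    return delta_z / (stand_off - delta_z)
```

Doubling the stand-off roughly halves this error, which is why increasing the working distance (or using a telecentric lens, whose magnification is independent of object distance) suppresses the effect.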

APPLICATIONS OF 2D-DIC ALGORITHMS
Over the years, the application of 2D-DIC technology has made great strides. The DIC method can be used to analyze various mechanical constants of material properties and to obtain several descriptive parameters, such as the coefficient of thermal expansion (CTE), 94 Young's modulus (E), 95 Poisson's ratio, 96 stress intensity factor (SIF), 97 the Williams expansion, 98 and the J-integral. 99 At the same time, DIC has become one of the most important practical tools for understanding fatigue crack propagation. 100,101 Additionally, the DIC method can also be used in dynamic measurements; for example, strain localization of a dual-phase high-strength steel was characterized under dynamic loading at strain rates between 150 and 600 s⁻¹. 29 From the viewpoint of material types, typical applications can be listed as follows. The technique has been employed for various classes of metallic materials, such as in studying the damage mechanisms of duplex stainless steels (DSS), 102 and it has also been applied to polymer analysis. 103 Regarding biological materials, studies on cells 104 and on bone mechanical strain 105,106 by the use of DIC have also been reported. With the continuing integration of technologies, improved DIC can be used under harsh environmental conditions. Under high-temperature conditions, Grant et al 107 used DIC in combination with filters and blue illumination to measure the Young's modulus and thermal expansion coefficient of a nickel-base superalloy up to 1000 °C. A noncontact high-temperature deformation measuring system using RG-DIC was designed by Pan et al, 108 which can accurately reflect the thermal deformation of metals and alloys in environments at temperatures below 550 °C. According to a study by Guo et al, 109 a stable speckle pattern generated by spraying tungsten powder at 2600 °C was introduced into the measurement of the thermo-mechanical properties of carbon fibers.
The DIC computational method is also a versatile and flexible technique for multiscale measurements, depending on the magnification of the observation tool used. In addition to traditional photogrammetry for macroscopic detection, fine deformation information at the micro/nano scale can be obtained by coupling DIC with an optical microscope (OM), scanning electron microscope (SEM), atomic force microscope (AFM), or scanning tunneling microscope (STM). Especially in recent years, much effort has been put into transferring this technique to ever smaller scales. For example, a DIC system with an attached microscopic lens was used to characterize the strain information of a single aluminum crystal during a micro-tensile test. 110 Salvati et al 111 used a depth-profiling FIB-DIC ring-core method to evaluate a non-equi-biaxial residual stress (RS) state. Many efforts have been made to combine DIC and SEM. Tanaka et al 8 succeeded in using the combination to measure the heterogeneous strain field of a nickel-based superalloy at the subgrain level. Lindfeldt et al 112 analyzed in-situ SEM images of pearlitic steel by employing DIC. Kammers and Daly 26 provided a thorough survey of small-scale patterning methods for SEM-DIC. The application of the DIC technique inside the SEM to access strain fields at the spherulitic scale of polypropylene was developed by Pinto et al. 113 Remarkably, SEM images exhibit artifacts, mainly divided into spatial distortion, drift, and scan-line shifts, which can lead to considerable errors. Maraghechi et al 114 reported an extended global DIC framework to deal with the scan-line shifts. Maraghechi et al 115 then conducted an extensive study of SEM image artifacts based on DIC and proposed a unified general framework to correct for the three dominant types of SEM artifacts.
Alternatively, the deformation information of microstructures can also be extracted from STM 36 and AFM 116 images; their related applications are not enumerated one by one here. DIC has thus developed into a powerful deformation-measurement methodology with great application prospects owing to its feasibility.

CONCLUSIONS
As an ascending trend in image-processing technology, DIC has become an irreplaceable tool for surface deformation measurement and strain analysis, and with the continuous improvement of the quality of industrial cameras, the measurement accuracy of 2D-DIC systems has been further improved. At the same time, time-dependent analysis with the 2D-DIC method may be necessary, especially in the field of damage monitoring. This paper has systematically summarized the important improvements in the efficiency and accuracy of 2D-DIC algorithms over the last two decades. Looking back at the development of DIC, the method can be subdivided into subset-based local DIC and finite element-based global DIC. Today, subset-based DIC is widely implemented in commercial software and is considered the mainstream DIC matching algorithm. Moreover, the error factors affecting the measurement accuracy of DIC systems to varying degrees, including the parameters defined by users, the variation of speckle patterns, the quality of experimental devices, and the environmental conditions, have been introduced. However, the complexity of the problem has left the study of overall error estimation immature, and it still needs to be explored further. Additionally, it is worth noting that self-adaptive parameter-setting techniques for DIC are also developing.