Polynomial edge reconstruction sensitivity, subpixel Sobel gradient kernel analysis

In digital image processing, the accuracy and precision of edge detectors operating in noisy temporal environments can be critical. This article reports on the polynomial reconstruction of 22.5° Sobel kernels using additional sampling points, analyzed through imaging system noise. Compared with the accuracy of first-order interval kernel masks, the first-order 45° kernel approximation using 5 points provides consistently stable accuracy and precision across pixel and subpixel gradients. For a 22.5° sampling interval, a 3-point polynomial evaluation minimizes the error of the subpixel gradient orientation. These characteristics derive from the retention of kernel symmetry using noninteger coefficients. Critical to gradient precision is the distortion of the evaluated points, due either to system noise or to their departure from second-order symmetry. For a refined measurement estimate, sensitivity down to 10⁻⁵ is demonstrated.


INTRODUCTION
Due to the discrete nature of digital image processing, edge precision is fixed by the resolution of the imaging equipment. Furthermore, the capability of preprocessing algorithms to extract precise edge direction information must also be considered. Subpixel edge detection is the process of locating detected edges in between the discretely sampled points of an image. Using either a first-order, 1 second-order, 2 or hybrid 3 edge detector, horizontal and vertical gradient directions are accurately detected. However, diagonal gradients are approximated, and the detection of finer gradient orientations results in poorer approximations. There are three general approaches to subpixel edge detection: curve fitting, 4,5 moment-based, 6,7 and reconstructive. 8,9 In curve-fitting methods, an estimate of a discretely sampled border is required to fit a curve. The accuracy of this approach is determined by the chosen detector's pixel error, which provides the prior knowledge of the edge. Moment-based approaches calculate intensity and spatial moments in a pixel neighborhood. These are formed as a vector corresponding to the region's center pixel. An advantage of this approach is that there is a moment vector for every pixel. However, pixels defined as an edge can only be reliably associated with the edge in a small neighborhood. Reconstructive methods generate a continuous gradient function by interpolating discretely sampled edge points. As with curve fitting, the discrete samples are obtained via first/second-order and hybrid edge detectors. Critically, the error the interpolant minimizes is fixed by the pixel accuracy of the edge filters. In these methods, edges are initially coarsely defined using weighted kernels at 45° intervals. The discretely sampled edge, defined at the horizontal, vertical, and diagonal orientations, places an immediate restriction on the accuracy of the subpixel feature extraction process.
Depending on the application [10][11][12] and operating environment, the accuracy of the subpixel gradients may be critical to the imaging process.
The level of sensitivity achieved in subpixel edge detection may exceed that required by many existing applications. In computer vision we seek to replicate all aspects of biological vision, in which edges are the critical feature in visual processing tasks. Examples include object segmentation, 13 object motion detection, 14 object characterization, and object classification. 15 Traditional edge detection algorithms that operate at the pixel level are numerous in the literature. This work proposes a standard first-order edge detection algorithm capable of measuring below the pixel resolution for applications operating at the micron scale across arbitrary orientations. The model developed is based on a polar mapping of noninteger coefficient weights, whereby a coarse estimate of an edge measurement is refined by reconstructing the edge as a parabola. Key findings of this initial analysis reveal that the optimum size of the kernel mask, to reduce subpixel error, changes as an edge's gradient approaches the horizontal. Furthermore, based on the gradient interval and polynomial sampling-point configurations, minimum error estimates are established due to distortion of the parabola's symmetry. In brief, the proposed method comprises four general steps: (1) generate a set of orientated filtered images, (2) inspect the filtered images for a local maximum, (3) generate an optimum edge image map, (4) apply a polynomial at discrete points along the optimum edge function.
The key characteristics of traditional edge detectors derive from tradeoffs between edge localization, noise sensitivity, and complexity. Edge localization is determined either from the gradient maximum of the first derivative or from the zero crossing of the second derivative. The zero crossing eliminates a potential source of error in detecting edges using a global maximum or a threshold. Noise in second-derivative methods is often problematic due to large rates of change between sampled points. Smoothing is typically applied to the input function to reduce the sharp transition of edges, such that increased Gaussian spread removes edge detail. Hybrid edge detectors, such as the Canny algorithm, 3 seek to determine the probability that a pixel is part of an edge. The edge image is typically recovered using first-order methods via an image-dependent threshold interval. This algorithm has increased complexity but improved localization and lower noise sensitivity. For a comprehensive review of the plethora of traditional edge detection techniques, readers can refer to Reference 1. Since this work is proposed for applications with limited hardware, such that the number of preprocessing steps should be minimal, deep learning algorithms for edge detection 16,17 are not appropriate here. Therefore, the literature review begins with consideration of the state of the art in edge measurement precision. However, for a one-shot image analysis process, at the expense of additional layer(s), it is reasonable to assume that the algorithm may form part of the preprocessing layers in a convolutional neural network 18 to perform subpixel edge detection. Most importantly, precision in edge detection is enabled by defining a pixel as a combination of a finer sampled grid (linear filtering assumed). It is established that there is a tradeoff between resampling of the original signal, measurement accuracy, and computational cost.
Sampled points on an edge surface can be interpolated using the methods described in References [4][5][6][7][8][9]. Since images are usually mapped onto a Cartesian grid, typical distortions are rigid bodied. Whereas translation can be accurately modeled, the discrete recovery of orientation is not so simple. Orientated sets of edge filters (modeled on biological vision) are Gaussian based; examples include the Laplacian of Gaussian 1,2 and Gabor filters. 19 Two features of the Gaussian function are symmetry and peak maximum energy, whereby assigning greater weight in the convolution operation to pixels or edge kernel weights at the center easily suppresses weaker edges. The Laplacian of Gaussian, a second-order (zero-crossing) filter, is susceptible to noise variances amplified by continual differentiation. Furthermore, without surface interpolation or increases in kernel size, the kernel is robust in detecting edges but is simply a discrete approximation to numerical differentiation, confined to the pixel nature of the image coordinate map. The Gabor filter, typically applied in texture analysis, 20 can be described as a Gaussian modulated by a sinusoid. One disadvantage of the Gabor filter is the determination of tuneable parameters related to the scale (P) and orientation (Q) of the sinusoid and the spread of the Gaussian. These parameters are application dependent, and the number of filters increases as the product of the number of scales and orientations. The standard complexity of Gabor filtering is O(PQM²N²). Edge localization is controlled by the Gaussian spread, such that a narrowing spread increases noise sensitivity. One aspect of research into Gabor filtering investigates methods to increase real-time operation. In Reference 21, the authors address this issue by implementing a genetic algorithm to iteratively optimize the Gabor parameters and reduce the size of the filter bank for accurate deformation measurements in glass production.
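As a concrete illustration of the parameter count behind the O(PQM²N²) cost, a textbook real-valued Gabor kernel can be sketched as follows. This is a generic formulation, not the filter bank of Reference 21; `sigma`, `theta`, and `lam` are the usual Gaussian spread, orientation, and sinusoid wavelength.

```python
import numpy as np

def gabor_kernel(sigma, theta, lam, size=None):
    """Textbook real-valued Gabor kernel: a Gaussian of spread sigma
    modulated by a cosine of wavelength lam at orientation theta."""
    if size is None:
        size = int(6 * sigma) | 1          # odd width covering ~3 sigma
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# A bank over P scales and Q orientations contains P*Q kernels,
# hence the O(PQM^2N^2) filtering cost quoted above:
bank = [gabor_kernel(s, q * np.pi / 4, 3 * s)
        for s in (1.0, 2.0) for q in range(4)]
```

Each additional scale or orientation multiplies the number of convolutions, which is what motivates the filter-bank reduction pursued in Reference 21.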
However, orientation measurement sensitivity is limited to half the sampling interval Δθ of the input: Δθ/2. For arbitrary orientation measurement, this is problematic if the sampled point does not lie on a discrete point over which convolution is performed. This error is typically reduced by interpolating between missing values or by resampling the input image. In Reference 22, the sources of this type of error are discussed. In relation to the proposed method, a first-order differentiation approximates an edge's orientation. Moreover, the accuracy of the proposed method is developed from established limits of polynomial interpolation (Lemma 1) and sampling. Lemma 1: an even polynomial describes a symmetrical correlation peak, whereby a second-order polynomial will suitably locate the peak; higher-order terms may increase the SNR but not the accuracy of the peak location. This is equally applicable to Gabor filtering, where accurate orientation measurement may be required. Therefore, we think it remains desirable to implement reduced-complexity subpixel edge detection algorithms with high precision and accurate measurement capability in computer vision systems. Albeit subjective and dependent on an application's accuracy requirements, we propose a solution that capitalizes on the advantages of first-order edge detection and minimal kernel sizes whilst achieving micron-scale accuracy. Measurement precision is analyzed based on typical noise sources encountered in optical imaging systems. Applications for which we envisage the proposed method may be useful are accurate registration between two images, nonrigid/rigid body measurements, and defect detection. [23][24][25][26] In Reference 25, the authors demonstrated the use of digital correlation to investigate localized thermal deformation within semiconductor devices. It was shown that minimizing alignment errors between images was a crucial preprocessing step before applying correlation.
In particular, the effect of defocus on rigid body misalignment was examined. A benefit of using kernel masks of increasing differential orders is that detected edges provide magnitude and directional information based on local maxima. By including a smoothing filter, a decrease in noise sensitivity is also achieved. However, for reconstructive subpixel edge detection, only a coarse estimation of pixel gradients at 45° intervals, usually based on the Canny detector 3,27 and implemented variants, 28 has been considered previously. In this article, we report on the implementation of polynomial edge reconstruction based on noninteger configurations of a subpixel Sobel gradient kernel. The advantage of this approach is that subpixel Sobel gradient kernels are estimated to further refine the initial coarse estimation of the sampled edges. To maintain lines of symmetry in the subpixel masks, the extra kernel weights not defined by a standard kernel are parameterized. Whilst it is advantageous to increase the gradient interval of the kernel, signal-dependent factors such as the operating environment, the image acquisition system, and the choice of prefiltering require consideration. Within such limits, the sensitivity of polynomial edge reconstruction, and the number of sampled points needed to refine measurements obtained using a 22.5° gradient kernel, are investigated. First, an approach to calculate noninteger coefficient weights using the gradient vector sum is identified. Second, the algorithm to test the edge kernel is presented, before experiments are detailed and results discussed. We compare the performance of the subpixel arrangements against a standard Sobel 45°-interval subpixel reconstructive algorithm and against the measurement sensitivities of some interesting applications.
Detector noise is characterized by white and low-frequency noise distributions; a limitation of the error analysis in this report is that the noise distributions affecting low- and high-frequency components are generalized. It is acknowledged that temporally stochastic noise sources that interfere with the detector can be modeled more accurately. 29 It is important to note that the results reported here are preliminary and are intended to illustrate the salient points of the method to detect subpixel gradient directions.
This article is organized as follows: in Section 2, issues of reconstructive subpixel edge detection are discussed and a solution based on Sobel kernels is presented. Sections 3 and 4 introduce the experimental method, including imaging system noise and test patterns, as well as experimental results and discussions thereof. Section 5 summarizes the proposed kernel configurations in subpixel edge detection.

PROBLEM FORMATION AND THEORY
First-order edge filter kernels are bound by the pixel nature of the signal they operate on, whereby the gradient interval of a typical edge operator is 45° and the minimum kernel size that achieves this interval is 3 × 3. A common kernel formulation of this type is the Sobel operator. 30 This is widely used either as a standalone detection algorithm or as a prior process in hybrid edge detection algorithms, owing to its overall performance in detecting gradient orientations across four compass directions (eight if symmetry is included). Maintaining a line of symmetry in the differentiator is critical, and as such, two issues arise in this simple technique when the gradient interval is further incremented to 22.5°. First, the outer radius of this kernel is completely defined, whereas an inner radius is not. This reduces the number of points sampling a possible edge feature. Second, the interval at 22.5° over the unit circle is approximated to 26.6° on a Cartesian grid. Considering the outer radial component of a kernel, the gradient kernel interval for Δθ = 22.5° is not symmetrical between edges at 0° and 45°. Therefore, the detected gradient accuracy would be within an error of 4.1°. This error does not continue into the inner radial component, because the point location is not uniquely defined by a pixel. For increases in kernel size, the tangential approximation over a Cartesian grid remains. Whilst integers are easier to handle, kernels can be defined using noninteger coefficients based on the exact coordinate on the unit circle. A projected gradient can be approximated from the dot product of coordinate pairs located on the unit circle. This method of kernel mask generation for θ = 0°, 90° is demonstrated in Appendix A. Using the Sobel method and a reconstructive process, the investigations that follow evaluate the legitimacy, following the vector gradient method, of using noninteger coefficients to recover 22.5° gradients.
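A quick numeric check of the tangential approximation above, assuming the 22.5° direction is snapped to the nearest lattice offset (2, 1) on a Cartesian grid:

```python
import math

# On a Cartesian grid the nearest lattice direction to 22.5 deg is the
# pixel offset (2, 1), whose angle is atan(1/2):
approx = math.degrees(math.atan2(1, 2))   # ~26.57 deg
error = approx - 22.5                     # ~4.07 deg, i.e. within 4.1 deg

print(round(approx, 2), round(error, 2))
```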
First, a noninteger Sobel kernel is defined, where kx_i and ky_j are the indexed Cartesian distances from the center of the kernel and r is the radial component at that Cartesian location. For θ = 0°, 90°, the vector components G_X and G_Y become the sums of the projected weights kx_i/r² and ky_j/r². Since r² = kx_i² + ky_j², where ⟨kx_i, ky_j⟩ is the coordinate of the radial distance from the center of the kernel grid, the inner products can simply be expressed as kx_i/(kx_i² + ky_j²) and ky_j/(kx_i² + ky_j²). This results in the arbitrary kernel gradient direction algorithm of Equation (2). For gradient orientations 0°, 22.5°, 26.5°, and 45°, and kernel dimensions 3 and 5, Figure 1 demonstrates kernel generation following Equation (2). The gradient images are enlarged to 15 × 15, where x and y are the spatial frequency coordinates, for greater visual clarity of the kernels' line of symmetry.
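One possible reading of this kernel-generation rule, in which each weight is the projection of the pixel offset onto the unit vector at θ divided by the squared radial distance, can be sketched as follows. This is a hypothetical reconstruction of Equation (2), not the authors' exact implementation.

```python
import numpy as np

def gradient_kernel(theta_deg, n=3):
    """Noninteger directional kernel sketch: each weight is the
    projection of the pixel offset (kx, ky) onto the unit vector at
    theta, divided by the squared radial distance r^2 = kx^2 + ky^2."""
    theta = np.deg2rad(theta_deg)
    half = n // 2
    k = np.zeros((n, n))
    for j in range(-half, half + 1):      # ky index (rows)
        for i in range(-half, half + 1):  # kx index (cols)
            if i == 0 and j == 0:
                continue                   # centre weight stays zero
            r2 = i * i + j * j
            k[j + half, i + half] = (i * np.cos(theta) + j * np.sin(theta)) / r2
    return k

# theta = 0 reproduces the Sobel x-kernel up to a constant scale.
k0 = gradient_kernel(0)
```

Nonzero θ values (e.g. 22.5°) then yield the noninteger coefficient masks discussed in the text.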

Detecting subpixel gradient directions
To build on the discussion highlighting the consequences of the discrete assembly of an edge kernel, this section introduces the proposed model. Either from a single pixel or from a region of pixels in an image, the magnitude of an edge is extracted using gradient kernels calculated from Equation (2). The gradient direction measurement is estimated by fitting the sampled points across a set of orientated filters. This corresponds to a gradient profile that can be evaluated with a second-order polynomial to refine the gradient measurement. Polynomial fitting is a general method to refine peak or trough locations from a set of discrete data. Further context on the application of polynomial fitting to reconstruct an original function is given in Appendix B. The individual steps in the method to detect, extract, and measure gradient direction sensitivity are as follows.
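The second-order refinement described above can be illustrated with a standard 3-point parabolic peak interpolation; this is the generic closed form for three equally spaced samples, not necessarily Equations (3) and (4) verbatim.

```python
def parabolic_peak(y_minus, y0, y_plus):
    """Refine a peak location from three equally spaced samples by
    fitting a second-order polynomial; returns the subpixel offset of
    the vertex relative to the centre sample (in units of the sampling
    interval) and the interpolated peak value."""
    denom = y_minus - 2.0 * y0 + y_plus
    if denom == 0:
        return 0.0, y0                     # flat profile: no refinement
    offset = 0.5 * (y_minus - y_plus) / denom
    value = y0 - 0.25 * (y_minus - y_plus) * offset
    return offset, value

# Samples of an exact parabola y = 4 - (x - 0.25)^2 at x = -1, 0, 1:
off, val = parabolic_peak(2.4375, 3.9375, 3.4375)   # off == 0.25, val == 4.0
```

For an exactly symmetric profile the offset is zero, which is the behavior the symmetry analysis in later sections relies on.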
• Input image: u(x, y)
• Determine properties of the input image, including the image dimensions M × N, the average signal strength μ(u), and the variance of the signal strength σ²(u); if the variance is small compared with the average signal, that is, σ²(u) ≪ μ(u), then the image may not be worth pursuing
• Apply gradient filters: D_θ(i, j)
• Construct the edge location map: Lmap(x_i, y_j, θ_k)
• Extract the local maximum: Ir(n_i, n_j)
• Construct the optimum (local maximum) edge image: Eopt(x, y)
• Apply polynomial fitting to each optimum edge image point: Equations (3) and (4)

An input image u(x, y) is convolved with a set of gradient filters D_θ(i, j); 0° ≤ θ ≤ 180°. As shown in Figure 2A, each filtered response is concatenated to obtain a 3D matrix of gradient images: Lmap(x_i, y_j, θ_k). Along the direction of the orientated filters θ_k (k_max = 180°/Δθ), a local region Ir(n_i, n_j) is defined to extract some useful property with which to compare the orientated filters; in this case, the gradient magnitude is the critical feature. The maximum value of this feature is then stored to form an optimum edge image, Equation (3). The process is completed when Ir(n_i, n_j) has scanned the entire Cartesian image frame or predetermined regions of interest. A refined edge image is constructed by forming a parabolic curve through the pixels adjacent to the interest point in Eopt(x, y). The formation of the polynomial is illustrated in Figure 2B for Δθ = 22.5°. For coefficients a_j: j = {1, 2, 3}, the curve is evaluated from Equation (4) to identify the parabola's vertex, thus obtaining the edge magnitude and location. Hence, the refined gradient direction measure is simply a transformation of the parabola's peak location proportional to the predefined gradient interval Δθ.
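The steps listed above can be sketched end to end as follows. The kernel set is assumed to be precomputed, the 3 × 3 filtering is a minimal NumPy-only stand-in for the convolution stage, and the parabolic refinement is a simplified stand-in for Equations (3) and (4); this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def filt3(u, k):
    """Minimal 'valid' 3x3 correlation via shifted slices (no SciPy)."""
    h, w = u.shape[0] - 2, u.shape[1] - 2
    out = np.zeros((h, w))
    for dj in range(3):
        for di in range(3):
            out += k[dj, di] * u[dj:dj + h, di:di + w]
    return out

def detect_subpixel_gradients(u, kernels, d_theta=22.5):
    """Sketch of the four steps: filter bank -> Lmap -> optimum edge
    image -> 3-point parabolic refinement of the orientation."""
    # Step 1: set of orientated filtered images, stacked into Lmap
    lmap = np.stack([np.abs(filt3(u, k)) for k in kernels], axis=-1)
    # Steps 2-3: local maximum across orientations -> optimum edge image
    k_max = lmap.argmax(axis=-1)
    e_opt = lmap.max(axis=-1)
    # Step 4: parabolic refinement of the coarse orientation estimate
    n_k = lmap.shape[-1]
    ii, jj = np.indices(e_opt.shape)
    y_m = lmap[ii, jj, (k_max - 1) % n_k]
    y_p = lmap[ii, jj, (k_max + 1) % n_k]
    denom = y_m - 2.0 * e_opt + y_p
    safe = np.where(denom == 0, 1.0, denom)
    offset = np.where(denom != 0, 0.5 * (y_m - y_p) / safe, 0.0)
    return e_opt, (k_max + offset) * d_theta
```

A denser kernel set (e.g. Δθ = 22.5°, eight filters over 180°) makes the orientation-axis refinement meaningful; with only two filters the wrap-around neighbours coincide and the offset collapses to zero.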

EXPERIMENTS AND ANALYSIS
This section begins with a brief discussion of imaging system noise to form a model of an operating environment typically encountered in optical systems. This model is used to evaluate the precision and accuracy of a detected edge's polynomial model. Accurate test patterns are then presented in order to appropriately analyze the impact of noninteger-coefficient subpixel kernels on edge detection performance. Following the detection method outlined in Section 2.1, the first investigation establishes the influence of symmetry on the polynomial peak for one coarse estimate of an edge gradient direction. This analysis is based on 3- and 5-point sampling of the kernel interval for Δθ = 22.5° and Δθ = 45°. The second investigation repeats the detection process to refine the coarse estimate for 1° < Δθ < 45°. For each gradient image D_θ(i, j), local maxima are extracted at each pixel. To isolate the gradient location of interest, the x and y pixel coordinates of D_θ(i, j) are obtained from the maximum locations of each Lmap(x_i, y_j, θ_k); for Ir(n_i, n_j), n = 1. The proposed method is a combination of convolutions and polynomial fittings, which will impact the computational speed of the algorithm. Owing to these mathematical operations, very large-scale images are not the subject of this study. However, given continuing increases in computing speed and memory, the practical use of the algorithm is not foreseen to be an issue. The runtime of the proposed method is based on the time to compute the algorithm over one pixel, t_px, whereby the runtime increases as (t_px N)²; N = m × n are the image dimensions. Furthermore, the observed runtime increased by an average of 34% when using 5 polynomial sampling points instead of 3.

Noise model, sources of error
To reduce the effect of random noise located at higher spatial frequencies, a low-pass filter is applied to an image. This has the effect of blurring/softening the image by smoothing random and systematic variations toward the high-frequency range of the bandlimited signal. Sources of error derive from approximating a derivative measurement and from the effects of random noise. It may be necessary to enhance the features of interest, in which case a high-pass filter is applied.
Alternatively, if the characteristics of the image are known, a matched filter may be applied. These filtering methods are all signal dependent, but random error can be attributed to the sensor capturing the image. For the investigations in the following sections, let noise be additive and a time-varying stochastic process Rnd, where Rnd denotes a random signal impacting the fundamental process of capturing an image. This could originate from the environment or from the optical instrument receiving the signal information. The sensor can be either a complementary metal-oxide semiconductor (CMOS) or a charge-coupled device (CCD), each of which has its own unique formulation of Rnd. 29 First, let the captured picture be the noise-free image u(x, y) and let Rnd be statistically modeled as noise affecting its pixel values. Since additive noise is an image component, fluctuations in signal strength can be defined as u(x, y) + Rnd(x, y). Generally, sources of additive noise are considered to be thermal processes (hardware), light-particle interaction (photon shot noise), and natural image statistics, independently affecting the capture of spatial distribution and light intensity. Therefore, a bandlimited signal is dominated by white and low-frequency noise distributions. Models of these noise sources are presented in brief in Appendix C.
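The additive model u(x, y) + Rnd(x, y), with white plus low-frequency components, can be simulated as follows. The 1/f spectral shaping and the amplitudes are illustrative assumptions, not the appendix's exact noise models.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(u, sigma_white=0.01, a_low=0.05):
    """Additive noise sketch: white Gaussian noise plus a low-frequency
    component synthesised by shaping white noise with 1/f in the
    Fourier domain (illustrative spectral model)."""
    white = rng.normal(0.0, sigma_white, u.shape)
    fy = np.fft.fftfreq(u.shape[0])[:, None]
    fx = np.fft.fftfreq(u.shape[1])[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    spectrum = np.fft.fft2(rng.normal(size=u.shape)) / f
    low = np.real(np.fft.ifft2(spectrum))
    low *= a_low / (np.abs(low).max() + 1e-12)
    return u + white + low
```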
Consider the normalized signal strength of a noise-contaminated signal in the range 0:1; this signal can be scaled to the number of photons detected by the image sensor, SNR ∝ √N_ph, for example in the range of 10 to 10⁶ detectable photons. To vary the influence of low-frequency noise, A(S(f))⁻¹ with a 3 dB roll-off, where A is the signal amplitude, an amplitude-matched signal is scaled between 1% and 100%.
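The √N_ph scaling of the photon shot-noise term can be verified empirically with a Poisson model; this is an illustrative check, not the paper's noise generator.

```python
import numpy as np

rng = np.random.default_rng(1)

def shot_noise_snr(n_photons, trials=2000):
    """Empirical check that photon shot noise gives SNR ~ sqrt(N_ph):
    sample Poisson counts at a mean of n_photons and return mean/std."""
    counts = rng.poisson(n_photons, size=trials)
    return counts.mean() / counts.std()

# SNR should roughly quadruple when the photon count rises 16x:
snr_lo = shot_noise_snr(100)     # ~ sqrt(100)  = 10
snr_hi = shot_noise_snr(1600)    # ~ sqrt(1600) = 40
```

The lower operating bound quoted later, √N_ph > 31.6, corresponds to roughly 1000 detected photons per measurement.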

Test pattern generation
Two separate test patterns are used for the analysis in Sections 3.3 and 3.4. The first pattern, presented in Figure 3A, is a 16-sided polygon that isolates gradient edges at 22.5° intervals. This pattern is used to analyze the coarse detection of subpixel edge gradient directions. For a refined estimate of an edge gradient, the direction measurement has an increased sensitivity to the accuracy of the sampled gradient direction. Therefore, using θ(y, x), the test patterns presented in Figure 3B–D are constructed so that the sampled parameters for each examined gradient are equal. By increasing the image resolution as the edge nears the horizontal, potential errors introduced by these parameters are removed. In this way, the sensitivity of the proposed method to image resolution as an edge approaches the horizontal can be accurately modeled.
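A 16-sided polygon test pattern of circumradius 40, as described above, can be generated as the intersection of half-planes. This is an illustrative construction; the authors' exact pattern may differ in image size and rasterization.

```python
import numpy as np

def polygon_mask(n_sides=16, size=101, radius=40.0):
    """Binary test pattern: a regular n-sided polygon of circumradius
    `radius`, giving edge normals at 360/n degree intervals
    (22.5 degrees for n = 16). Built as an intersection of half-planes:
    a point is inside iff its projection onto every face normal is at
    most the apothem, radius * cos(pi / n)."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    mask = np.ones((size, size), dtype=bool)
    apothem = radius * np.cos(np.pi / n_sides)
    for k in range(n_sides):
        a = 2.0 * np.pi * (k + 0.5) / n_sides   # face-normal angles
        mask &= (x * np.cos(a) + y * np.sin(a)) <= apothem
    return mask.astype(float)
```

Each of the 16 faces then presents a straight edge whose normal lies at one of the 22.5° test orientations.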

Coarse iteration, subpixel detection sensitivity
In this section, the accuracy and precision of the proposed edge detection algorithm are analyzed for one coarse iteration. A coarse estimate of the edge gradient direction is analyzed with two fixed sampling intervals, Δθ = 22.5°, 45°. The test image used for this analysis is displayed in Figure 3A. The radius of the polygon at the intersect points is 40 pixels, and each intersect point over the angular range is sampled 100 times to obtain an average measurement. Typically, edge kernels are square with odd dimensions, and for increasing kernel widths radial symmetry is linear. The Sobel kernel K_n ∈ ℝ^(n×n), where n = 3, in this investigation is generated by Equation (2).

Accuracy of fine gradient measurements
Since the step edges of the test functions increase for acute gradient orientations, the width of the feature causes the number of convolutions across the edge to increase. Thus, to appropriately sample the edges in the convolution for increases in y, the kernel size increases. The accuracy of the iterative method is determined by measuring the error across the coarse and second (fine) iterations for K_n ∈ ℝ^(n×n); n = 3, 5, 7, 9, 11, 15. For the analysis that follows, using the test functions in Figure 3B–D, the gradient interval of the coarse iteration is confined to Δθ = 22.5° using 3 points to sample the polynomial. The second iteration is also evaluated with the interval Δθ = 22.5° using 3 points. The measurement is taken at a single point located at the center of the convolution and edge image. Evidently, the width of the edge increases along with K. This benefits noise reduction at a cost of feature resolution. However, to attain an accurate measurement across gradient directions, the size of K becomes important for sensitive measurements. The accuracy of measuring the edge gradient image directions θ = 75.96°, 71.96°, 63.4° is presented in Tables 1–3. These measurements are the coarse recovered gradient θ_r(c), the absolute error of the coarse recovered gradient |ε(θ_r(c))|, and the corresponding values for the second iteration: θ_r(f) and |ε(θ_r(f))|. A third measurement recorded is K_sr, which tracks the sampling ratio of K using the ratio of image width to kernel width.
The accuracy of the detected edge after the second iteration is ±0.04° when (⋅, y) is even and equal to 2 and 4, whereas for (⋅, y) equal to 3 the accuracy decreases to ±0.21°. The decrease in accuracy is a result of the optimum kernel size for (x, y) equal to (3, 1), where the kernel width would have to take an impossible noninteger value of approximately 5.3. The following analysis of precision for these edge directions is measured using the optimum K_n identified in Tables 1–3, via the symmetry, Equation (5), of the curve's points (P_pk ± P_n; n = 1, 2) for the interval 1° < Δθ < 45°.

Refined subpixel detection
σ(P(Δθ)) = ‖(P_pk(Δθ) − P_pk−n(Δθ)) − (P_pk(Δθ) − P_pk+n(Δθ))‖ (5)

So that the accuracy of the measurement remains <1°, the system's noise-level operating range is restricted to √N_ph > 31.6 and S(f) < 10%. The tuning range is quantified as the SD of the symmetry range, σ(Δθ). The results due to perturbations of the edge gradients from photon shot noise and from low-frequency noise are given in Tables 4 and 5, respectively.
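Under this reading, the symmetry measure reduces to the difference between the drops from the peak sample to its nth neighbours on either side, which can be computed directly; this is a sketch of Equation (5), with the profile indexing as an assumption.

```python
import numpy as np

def symmetry_error(profile, pk, n=1):
    """Symmetry measure: the difference between the drops from the
    peak sample to its n-th neighbours on either side. Zero for a
    perfectly symmetric (purely second-order) profile."""
    left = profile[pk] - profile[pk - n]
    right = profile[pk] - profile[pk + n]
    return abs(left - right)

# An exact parabola centred on the peak sample is perfectly symmetric:
p = np.array([1.0, 4.0, 5.0, 4.0, 1.0])
assert symmetry_error(p, pk=2, n=1) == 0.0
assert symmetry_error(p, pk=2, n=2) == 0.0
```

Any noise or cubic distortion makes the two drops unequal, and the measure grows accordingly.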

DISCUSSION
In this section, the implications of using additional points for edge reconstruction, towards general application of the detection method, are examined. For a line distorted by noise, the accuracy of the line's reconstruction is improved by including more sampling points. For a curve that is symmetric, 3 points would reconstruct the curve perfectly, revealing the location and value of the true vertex: P_pk. Increasing the number of points would theoretically seem to increase the accuracy of the location and value of P_pk; however, this was not observed in the noise analysis. The location of the gradient for the test image in Figure 3A for D_θ; θ = 0°, 22.5° across Lmap(x_i, y_j, θ_k) for integer kernel coefficients is presented in Figure 6. The neighboring points P_pk − 1: P_pk + 1 used in the 3-point polynomial are the standard gradient kernels for P_pk(θ = 22.5°), whereas for P_pk(θ = 0°) the neighboring points are the approximated subpixel gradient kernels. If two additional points P_pk − 2: P_pk + 2 are then factored into the edge reconstruction, the additional sample points are also either standard or approximated subpixel gradient kernels.
These observations suggest that the location and amount of distortion of the parabolic curve is due to the amount of noise at each subpixel gradient point. For the standard kernel definition Δθ = 45°, the average value across all points is fixed. The subpixel kernels, generated from Equation (2), have their polynomial peak value fixed by the kernel coefficient w±(i, j). For noninteger coefficients, the kernel interval Δθ = 22.5° across Lmap(x_i, y_j, θ_k) is symmetric, and the sampled point values do not change unless either w±(i, j) varies or the system noise level increases. Hence, if P_pk − 1, −2 and P_pk + 1, +2 are equal, there is no effect or apparent improvement to the accuracy of the gradient location for the extracted value of P_pk.

Coarse iteration
The analysis of a second-order polynomial to refine the coarse gradient estimate at a specified point in Section 3.3 reveals an operating range. The error trends in Figures 3 and 4, for both noise distributions, indicate that the edge detection algorithm can detect object feature boundaries for applications requiring a precision of <1° error: √N_ph > 31.6 and S(f) < 10%. For Δθ = 22.5° using 3 sampling points, the operating range increases to √N_ph > 10 and S(f) < 25% for multiples of 22.5° edge gradient orientations. However, the analysis has revealed that the number of points used to sample the parabola only enhances the measurement of the vertex's position when the points are appropriately spaced and above the noise signal. Hence, the critical components of the method are equal sampling separation distance and symmetry.

Refined iteration
The effect of Δθ on the symmetry's SE is characterized against the gradient edge directions of the test functions in Figure 3B–D, and the configuration models (1–4) of the refined edge detection process (Sections 3.4 and 3.5). It is observed that the gradient directions of the test function become more acute as coordinate (⋅, y) increases. One effect of this increase on the formation of the kernels is that the symmetry of the parabola at the coarse estimate distorts to a function that can no longer be characterized by a second-order polynomial. In the case of evaluating θ = 75.96° using 3 points, where P_pk + 1 is greater than P_pk, the edge function begins to be characterized by a cubic term. This error causes the SE of the measurement to be proportional to the symmetry of the polynomial curve. The reasoning is that for Δθ = 1°, the distance between P_pk ± P_n; n = 1, 2 is smaller than it is for larger Δθ, thus the precision of the parabolic symmetry measurement reduces as Δθ increases. Hence, it is sensible to state that SE(Δθ) increases as the symmetry σ(P(Δθ)) becomes poorer. The relationship between SE(Δθ) and σ(P(Δθ)) is therefore inversely proportional. In the case of θ = 75.96°, this relationship indicates that there must be more than 3 sampling points in the coarse iteration to appropriately evaluate such an acute angle. This response is echoed for a function distorted by low-frequency noise.
There is not a single optimum configuration of coarse-to-fine polynomial evaluation via coarse-to-fine edge refinement. One example is the case of acute gradients distorting the parabola beyond a second-order term. System and signal noise determine the amount of error that can be expected based on the degree of symmetry in the polynomial and the gradient interval spacing of the fine iteration. Based on the identified operating range for the noise sources, Tables 4 and 5 demonstrate the consistency of the error ranges for each model configuration and edge direction. Across all combinations of the edge process, edge gradient direction, and imposed noise levels, the calibration between gradient error and symmetry extends in range as the noise signal decreases. In exceptional conditions, where the test function is well illuminated, the precision measurement is of the order of 10⁻⁴. For the generated edge gradient directions, this precision figure of merit extends to higher noise levels (10⁻⁵) when the configuration of coarse-to-fine sampling of the polynomial is considered.
Depending on the sampling combination selected, the rotational detection accuracy of the proposed method reduces to 10⁻⁵. For comparison, one parameter highlighted in the introduction was angular distortion in digital correlation accuracy.25 In Reference 31 a registration accuracy of 10⁻⁵ is demonstrated, but for a shift in translation only. In Reference 32 the authors demonstrate a third-order process in differential geometry to measure orientation, curvature, and position. With no scale distortion, their achieved accuracy, based on simple binary shapes (200 × 200 pixels), is typically of order 10⁻²; image scale and edge blur were shown to reduce this accuracy. The accuracy of the proposed method outperforms higher-order differentiation in recovering subpixel gradient information. In Reference 24, the authors characterize LED probe tips using two vertical lines and a curve obtained via a 2D image capture of the device. The process involves two stages of edge detection, followed by an iterative subpixel feature measurement process based on the partial area effect. They seek to accurately measure the quality of these tips via the radius uniformity of the tip. The line segment lengths approach the micron scale, and the orientation measurement of the tip uniformity lies within an error of ∼1% of a degree (0.01°). Such precision is more than sufficient for the target applications in References 24, 25.

Application of edge detection algorithm
Outdoor scenes in captured image frames typically exhibit nonrigid structure with characteristic edge features orientated at gradients in between the pixel resolution of the Cartesian map. Examples of such images are used in Figure 7 to demonstrate the application of the proposed subpixel edge detection approach. It is evident that, in practice, the performance of the edge detector will degrade for real objects. The subjects of the example images in Figure 7A,D,G are three common background structures in outdoor scenes: fence, leaves, and grass; each grayscale image is 100 × 100. The algorithm is set up to determine a coarse gradient image using a Δθ = 22.5° interval evaluated with 3 polynomial sampling points, and a fine gradient estimate using a Δθ = 22.5° interval evaluated with 3 polynomial sampling points, for a 3 × 3 kernel. The difference between the coarse and fine estimates of the edge direction depicts the change in gradient intensity of the detected edges.

FIGURE 7: Application of the proposed edge detector. Test images fence, leaves, and grass: A, D, and G. Fine gradient estimate image: B, E, and H. Gradient intensity difference image between coarse and fine estimates: C, F, and I.

The proposed method's ability to detect edges is demonstrated by comparison with the standard Sobel, Laplacian of Gaussian, and Canny algorithms for a series of MRI image slices.31 An important application of MRI image analysis is tracking changes in biological tissue; such tissue is typically irregular, possibly low in contrast, and nonuniform, making it a suitable sample feature for subpixel edge detection. The proposed method is configured as described for the examples in Figure 7. Using the MATLAB edge function, the spatial bandwidth of the Canny method's lowpass filter is 2, and the low and high thresholds are selected automatically. The Sobel and Laplacian of Gaussian detectors are also parameterized automatically.
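The coarse gradient-direction image described above can be sketched with a bank of steered derivative kernels. The sketch below steers the standard integer Sobel pair as a stand-in for the paper's polar-mapped noninteger kernels; all function names, the steering construction, and the 22.5° step parameter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def directional_kernel(theta):
    """3x3 first-derivative kernel steered to angle theta (radians),
    built from the horizontal/vertical Sobel pair (a stand-in for
    the paper's noninteger kernels)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gy = gx.T
    return np.cos(theta) * gx + np.sin(theta) * gy

def coarse_direction_map(img, step_deg=22.5):
    """Per-pixel angle of the strongest absolute response over a bank
    of kernels spaced step_deg apart: the coarse direction estimate."""
    angles = np.deg2rad(np.arange(0.0, 180.0, step_deg))
    responses = np.stack([np.abs(convolve(img, directional_kernel(a)))
                          for a in angles])
    return angles[np.argmax(responses, axis=0)]

# A vertical step edge: the strongest response is the horizontal
# derivative, so the coarse direction at the edge is angle 0.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
dirs = coarse_direction_map(img)
```

A fine iteration would then fit a second-order polynomial over the responses of the winning kernel and its neighbors, as in the refinement described in the text; the difference between the two maps corresponds to the Figure 7 difference images.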
An important distinction between the Canny and Laplacian of Gaussian methods and the proposed method is that the gradient intensity is selected according to a statistical response (here, the maximum) within a local region, Ir(n_i, n_j). This enables the detection of low-contrast edges without requiring a filter to remove noisy high-frequency components, which may also remove true edge information. When Ir(n_i, n_j) is increased, edge accuracy degrades because the region acts as an optimum filter for a chosen parameter or feature, for example the maximum value. For the implementation of the Sobel, Laplacian of Gaussian, and Canny edge detectors in Figure 8, a threshold value deciding whether an edge is noise or not is key. The proposed method scans the image map for the regions' maxima, which retains sharper contrast and greater continuity of higher-magnitude edges. However, if the image histogram is multimodal, this preprocessing step is not a simple issue to address.
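The region-maximum selection can be sketched as follows; the region size and the function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def region_maxima(grad_mag, region=3):
    """Keep only pixels whose gradient magnitude equals the maximum of
    their local region Ir(n_i, n_j): a sketch of selecting the
    statistical (maximum) response rather than thresholding."""
    local_max = maximum_filter(grad_mag, size=region)
    return (grad_mag == local_max) & (grad_mag > 0)

# The single dominant gradient magnitude survives; weaker neighbors
# inside the region are suppressed without any explicit threshold.
g = np.array([[0.0, 0.2, 0.0],
              [0.1, 0.9, 0.1],
              [0.0, 0.2, 0.0]])
mask = region_maxima(g)
```

Enlarging `region` suppresses more of the surrounding responses, which mirrors the accuracy degradation noted above when Ir(n_i, n_j) is increased.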

SUMMARY
Due to the discrete nature of edge detection, edge precision is fixed by the resolution of the imaging equipment. Furthermore, the capability of preprocessing algorithms to extract precise edge direction feature information must also be considered. This article has reported a simple implementation of arbitrary-orientation edge detection kernels, whereby accurate subpixel orientation measurement is obtained by interpolating between 3 and 5 sampled points along a reconstructed edge using a second-order polynomial. The accuracy of the proposed method is based on a polar mapping of the kernel coefficient weights. The method is investigated for iteration between a coarse and a refined estimate of an edge's orientation. For noninteger coefficients, lines of symmetry of the differentiator have been formulated, and expected gradient errors have been parameterized against white and low-frequency noise distributions; see Appendix C.
The main advantage of noninteger coefficients is that they eliminate the fixed Cartesian error of standard integer kernel weights. This is because the neighboring points of the correlation peak remain equal, except for distortion to the symmetry of the parabolic model, Equation (5), which is established from the amount of system noise, Equation (6). For increases in kernel size, noise sensitivity is reduced at the cost of the minimum detectable feature size. While the definitions of the standard isotropic kernels such as Sobel, Prewitt, Scharr, and Kirsch are exact for 3 × 3 kernel sizes, the extension to larger kernel sizes is arbitrary so long as the kernel approximating numerical differentiation satisfies two criteria: the sum of the weights equals zero, and a line of symmetry exists along the derivative's direction. The key finding of this investigation is that, using 3 and 5 sampling points to evaluate a polynomial over an interval of 22.5°, the detection accuracy depends on the sampling interval used, which in turn depends on the direction of the edge. For the simple implementation of the proposed method (from Section 2) to improve precision at subpixel gradient directions, the following points must be considered. (1) For 3 points, precision is further enhanced; this is determined from fixed-value gradient neighboring points and a w±(i, j) dependency on P_pk. (2) For pixel-resolution gradient directions, precision is maximized using a standard 45° interval and a 3-point polynomial evaluation. (3) Due to neighboring subpixel points at pixel gradients, using 3 points to evaluate the polynomial over an interval of 22.5° decreases precision. (4) For 5 points, the improvement in precision is maintained at a level equal to that obtained for Δθ = 45°. (5) For subpixel neighboring points there is a dependency on w±(i, j) and the noise signal level; however, the additional pixel gradient points are fixed.
In this report, the initial analysis of the proposed edge detector has yielded positive results, and its demonstration on two different image subjects has highlighted interesting applications for this preliminary research. This study of subpixel edge reconstruction is not a definitive conclusion on the subject; more detailed research is needed to consider edges at different orientations, shapes (roof-top and ramp functions), and closeness of contrast. These are under investigation for presentation in a further publication. The developed method has been demonstrated under the ideal situation, using step edges, to an orientation accuracy of order 10⁻⁵. A precision analysis of the effect of kernel size and sampling interval has initially quantified the limits of operation for the method in real optical systems. The accuracies demonstrated in the investigations of References 23-26 and 32, 33 reinforce the potential application of this initial study in digital correlation and deformation measurement driven by planar features at the millimeter and micrometer scales. In comparison, for the popular subpixel edge detection method based on spatial moments,5 the authors show a consistent measurement sensitivity of order 10⁻⁶ perturbed by speckle and Gaussian noise, with a runtime of 0.004 second for an 11 × 11 region. However, pixels defined as an edge can only be reliably associated with the edge in a small neighborhood, and the measured sensitivity is not referenced to the position and orientation of an edge, since the authors state that the true edge information is unknown. The runtime of the proposed method, for one convolution between an image and a filter over one pixel, t_px, is (t_px N)²; this will increase for a wider filter bank in the coarse and refined iteration steps. Critically, the achieved sensitivity (10⁻⁵) has been related to accurate modeling of an edge's gradient.
The effect of increasing the number of points used to extract the correlation peak location has been quantified, and the consequence of symmetry distortion on the measurement sensitivity has been characterized. For incremental edge directions, measurements consistently achieve less than 1% error across low and equal noise levels. Such precision is more than sufficient for the target applications in References 24, 25.
The proposed method will benefit search-based computer vision systems where hardware capabilities may be limited, as well as enhanced preprocessing stages in machine-learned edge detection.34 For precise detection and tracking of characteristic edge features of nonrigid objects, particularly at the millimeter and micron scales, two types of implementation are identified: first, to estimate and refine edge gradient directions without prior knowledge of the sample boundary features; second, when prior knowledge is available, to determine accurate detection and gradient estimates.

APPENDIX A-KERNEL MASK GENERATION, VECTOR GRADIENT
A 3 × 3 kernel with nine locations (a-i) depicts how a 3 × 3 region of an image is overlaid with the radial positions of the kernel gradient (Figure A1).
The coordinate pairs depicted in Figure A1 are (a, i), (c, g), (b, h), and (d, f). For the radial positions r1 = 0, r2 = 1, and r3 = √2, and collecting like terms at the kernel locations, where r is the distance from the center e of the kernel grid, the projected gradient is given by Equation (A1). The center component (e) is a zero vector and the common factors are the radii; hence the orthogonal gradient components are determined from Equation (A2).
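Under this radial construction, the familiar horizontal and vertical Sobel weights can be recovered by projecting the gradient direction onto each kernel location and dividing by the squared radius. The sketch below is illustrative: the function name and the final ×2 scaling (which recovers the integer Sobel weights) are assumptions, not taken from Equation (A2) verbatim.

```python
import numpy as np

def projected_kernel(axis="x"):
    """Build 3x3 first-derivative weights by projecting the gradient
    direction onto each location and weighting by 1/r^2, with the
    centre e treated as a zero vector (cf. Figure A1)."""
    k = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            dy, dx = i - 1, j - 1      # offsets from the centre e
            r2 = dx * dx + dy * dy     # squared radial distance
            if r2 == 0:
                continue               # centre component is zero
            k[i, j] = (dx if axis == "x" else dy) / r2
    return 2.0 * k                     # scale to integer Sobel weights

kx = projected_kernel("x")
```

The result satisfies both kernel criteria from the summary: the weights sum to zero, and the mask is antisymmetric along the derivative's direction.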

APPENDIX B-POLYNOMIAL FITTING
Polynomial fitting is an approximate approach based on the well-known reconstruction of an original function from a set of discretely sampled data.35 For the images of concern, the reconstruction function is a sinc function, the Fourier transform of the rectangular truncation function. Since only a better estimate of the peak location is sought, there is no need to perform the full convolution, and only the central portion of the sinc function is used; hence a second-order polynomial is fitted. The number of points used for the fitting depends on the resolution of the function and on whether any zero padding of the input has been used. It is important to ensure that all points used for the fitting lie inside the main lobe surrounding the peak, or errors will result. This suggests using three points for the fitting; if the original data have been zero padded, five points may be used.
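A minimal sketch of this 3- or 5-point second-order fit follows; the function name is illustrative, and it assumes (as stated above) that all samples lie inside the main lobe around the peak.

```python
import numpy as np

def peak_location(samples):
    """Least-squares second-order polynomial fit over 3 or 5 samples
    centred on the coarse peak; returns the subpixel peak offset
    relative to the centre sample."""
    n = len(samples)
    x = np.arange(n) - n // 2
    c2, c1, _ = np.polyfit(x, samples, 2)   # highest degree first
    return -c1 / (2.0 * c2)                 # vertex of the parabola

# Five samples of 1 - (x - 0.3)^2 on x in {-2,...,2}: the vertex at
# 0.3 is recovered because the data really are second order.
xs = np.arange(-2, 3)
offset = peak_location(1.0 - (xs - 0.3) ** 2)
```

With three points the fit interpolates exactly; with five, the least-squares solution averages over the extra samples, which is what gives the 5-point evaluation its stability under zero-padded data.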
which increases as f decreases. The dB term defines the power spectral density of the noise over the bandwidth of the signal. The amplitude of S(f) is determined from the input signal's strength: A = 1.