Convenient formulas for quantization efficiency


Abstract

[1] Digital quantization of signals prior to processing introduces a noise component arising from the finite number of quantization levels. In radio astronomy, for example, this is important because the number of levels tends to be limited by the high sample rates required by the use of increasingly wide bandwidths. We are here concerned with signals with Gaussian amplitude distribution that are processed by cross correlation. Quantization efficiency is the factor by which the signal-to-noise ratio is reduced as a result of the quantization process. We provide a method of calculating the quantization efficiency for any number of uniformly spaced levels, as a function of the level spacing, using formulas that are easily evaluated with commonly used mathematical programs. This enables a choice of level spacing to maximize sensitivity or to provide a compromise between sensitivity and the voltage range of the input waveform.

1. Introduction

[2] The process of digital quantization of an analog waveform involves the addition of an error component resulting from the finite number of bits in the digital representation. There is thus an increase in the uncertainty of any measurement using the digitized waveform, and hence a degradation of the signal-to-noise ratio (sensitivity) of the measurement. An early general analysis of quantization effects is given by Bennett [1948]. In radio astronomy, for example, cross correlations are formed of the received waveforms from spaced antennas or from a single antenna with different time delays. The probability distribution of the analog waveforms is close to Gaussian, and the signal-to-noise ratio of the digitally correlated data, as a fraction of that for an equivalent ideal analog system, is referred to as the quantization efficiency ηQ. Expressions for ηQ for three- and four-level quantization in radio astronomy were first given by Cooper [1970]; for later derivations see, e.g., the discussion by Thompson et al. [2001] and associated references. With advances in digital electronics, use of larger numbers of levels has become practical, and some analysis of performance in such cases is given by Jenet and Anderson [1998]. In deriving general expressions for ηQ we use an approach based on the consideration that the quantization efficiency is equal to the variance of the original analog noise voltage divided by the equivalent noise variance of the digitized signal at one input of a correlator. Note that this applies to the situation in which the cross correlation of the signals at the two correlator inputs tends toward zero, which is generally the important case in radio astronomy. For this condition, we derive exact expressions for any number of levels, using formulas that can easily be evaluated with widely available mathematical programs. Approximate expressions for ηQ for eight and higher numbers of levels are given by Thompson et al. [2001, pp. 273–276]. However, in those expressions the quantization error (x − y) is used as an approximation for the quantization noise; that is, in effect α (see section 2) is taken to be 1. This is a useful approximation if the number of quantization levels is not too small, but it is now possible to provide exact expressions by using α₁ to select the random component.

2. Derivation of the Formulas

[3] Let x represent the voltage of the signal at the quantizer input. In radio astronomy such waveforms generally have a Gaussian probability distribution with variance σ². Let y represent the quantized values of x. The difference x − y represents an error introduced by the quantization. This error contains a component that is correlated with x and an uncorrelated component that behaves as random noise. To separate these, consider the correlation coefficient between x and Δ = x − αy, where α is a scaling factor. The correlation coefficient is

\rho = \frac{\langle x\Delta\rangle}{\sqrt{\langle x^{2}\rangle\,\langle \Delta^{2}\rangle}} = \frac{\langle x^{2}\rangle - \alpha\langle xy\rangle}{\sqrt{\langle x^{2}\rangle\,\langle \Delta^{2}\rangle}} .   (1)

[4] Here the angle brackets 〈 〉 indicate the mean value. If α = 〈x²〉/〈xy〉, then the correlation coefficient is zero, and Δ represents purely random noise. We refer to this random component as the quantization noise, q, equal to x − α₁y, where α₁ = 〈x²〉/〈xy〉. Note that for each sample, x and y have the same sign, so xy is always positive. Without loss of generality, we take σ² = 〈x²〉 = 1 and use α₁ = 1/〈xy〉. Thus the variance of the quantization noise is

\langle q^{2}\rangle = \langle (x-\alpha_{1}y)^{2}\rangle = 1 - 2\alpha_{1}\langle xy\rangle + \alpha_{1}^{2}\langle y^{2}\rangle = \frac{\langle y^{2}\rangle}{\langle xy\rangle^{2}} - 1 ,   (2)

and since the scaled digitized signal α₁y = x − q is the sum of two uncorrelated components, its total variance is 1 + 〈q²〉, and we obtain

\eta_{Q} = \frac{1}{1+\langle q^{2}\rangle} = \frac{\langle xy\rangle^{2}}{\langle y^{2}\rangle} .   (3)
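As an informal numerical check of these relations, the following short Monte Carlo sketch (Python; our own illustration, not part of the original derivation) applies them to a four-level uniform quantizer of the kind described in section 2 below. The function name, sample size, and random seed are arbitrary choices; the spacing ε = 0.995 is the four-level value from Table 1.

    import random

    def quantize4(x, eps=0.995):
        # Four-level uniform quantizer: thresholds at 0 and +/-eps,
        # output values +/-eps/2 and +/-3*eps/2 (anticipating Figure 1).
        m = min(int(abs(x) / eps), 1)
        y = (m + 0.5) * eps
        return y if x >= 0 else -y

    random.seed(1)
    xs = [random.gauss(0.0, 1.0) for _ in range(200000)]
    ys = [quantize4(x) for x in xs]
    n = len(xs)

    xx = sum(x * x for x in xs) / n                 # <x^2>, close to 1
    xy = sum(x * y for x, y in zip(xs, ys)) / n     # <xy>
    yy = sum(y * y for y in ys) / n                 # <y^2>
    a1 = xx / xy                                    # alpha_1 = <x^2>/<xy>

    q = [x - a1 * y for x, y in zip(xs, ys)]        # quantization noise samples
    print(sum(x * v for x, v in zip(xs, q)) / n)    # ~0: q is uncorrelated with x
    q2 = sum(v * v for v in q) / n
    print(q2, yy / xy ** 2 - 1.0)                   # equation (2), with <x^2> ~ 1
    print(1.0 / (1.0 + q2), xy ** 2 / yy)           # equation (3): eta_Q ~ 0.88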

[5] As a verification of this result, consider the case of two-level quantization, which was particularly important in early radio astronomy correlators. Here y is assigned the value 1 when x > 0 and −1 when x < 0. Thus 〈y²〉 = 1 and 〈xy〉 = 〈∣x∣〉. Then we have

\langle xy\rangle = \langle |x|\rangle = \sqrt{\frac{2}{\pi}}, \qquad \langle q^{2}\rangle = \frac{\pi}{2} - 1 ,   (4)

and from equation (3), ηQ = 2/π, which is a well-known result that follows from the study by Van Vleck and Middleton [1966]. These authors also give the linearity correction that can be applied to the autocorrelation or cross-correlation values produced after quantization with two levels, and similar corrections for larger numbers of levels have been derived elsewhere. These corrections scale the RMS noise level by the same factor as the signal within a given correlation measurement, so the resulting signal-to-noise ratio is unaffected by the linearity correction. An interesting historical detail concerning this reference is that the work was done during World War II and described in Radio Research Laboratory Report 51 of Harvard University, dated 1943, at which time it was classified.
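The 2/π factor can also be seen directly as a ratio of signal-to-noise ratios at a correlator output. The following Monte Carlo sketch (again our own illustration; the correlation coefficient, sample counts, and seed are arbitrary) correlates weakly correlated Gaussian noise streams in analog and two-level form and compares the resulting signal-to-noise ratios.

    import math, random, statistics

    random.seed(2)
    rho, nsamp, ntrials = 0.05, 4096, 500        # weak correlation, many trials
    r_analog, r_2lev = [], []
    for _ in range(ntrials):
        u = [random.gauss(0.0, 1.0) for _ in range(nsamp)]
        v = [rho * a + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0) for a in u]
        r_analog.append(sum(a * b for a, b in zip(u, v)) / nsamp)
        r_2lev.append(sum((1 if a > 0 else -1) * (1 if b > 0 else -1)
                          for a, b in zip(u, v)) / nsamp)

    snr = lambda r: statistics.mean(r) / statistics.stdev(r)
    print(snr(r_2lev) / snr(r_analog))           # ~ 2/pi = 0.637, within sampling error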

[6] To apply equation (3) to cases with larger numbers of levels we need general expressions for 〈xy〉 and 〈y²〉. Values of xy and y² are determined by the sample values of x, so the mean values over many samples can be expressed in terms of the Gaussian probability function of x. We consider only cases in which the spacing between adjacent quantization thresholds is constant, and begin with even numbers of levels, as in the 8-level case in Figure 1. We define ε (measured in units of σ) as the spacing in the x coordinate between adjacent level thresholds, and N as half the number of levels. We first determine 〈xy〉. The values of x that fall within the quantization level between mε and (m + 1)ε are assigned the value y = (m + 1/2)ε. (Since the digitized values are specified in units of ε, the choice of ε introduces a gain factor, but this does not affect the signal-to-noise ratios with which we are concerned.) The contribution to 〈xy〉 from this level is

\left(m+\tfrac{1}{2}\right)\varepsilon\,\frac{1}{\sqrt{2\pi}}\int_{m\varepsilon}^{(m+1)\varepsilon} x\,e^{-x^{2}/2}\,dx .   (5)
Figure 1.

Examples of quantization characteristics with (top) an even number of levels (eight) and (bottom) an odd number of levels (nine). In each case, N = 4. Units on both axes are equal to ε. The vertical sections of the staircase functions represent the thresholds between the levels. Note that for even numbers of levels the thresholds occur at integral values on the abscissa, whereas for odd numbers of levels the thresholds occur at values that are an integer ± 1/2.

[7] The contribution from the level between −mε and −(m + 1)ε is the same as the expression above, so to obtain 〈xy〉 we sum the integrals for the positive levels and include a factor of two:

\langle xy\rangle = \frac{2}{\sqrt{2\pi}}\,\Bigg[\sum_{m=0}^{N-2}\left(m+\tfrac{1}{2}\right)\varepsilon\int_{m\varepsilon}^{(m+1)\varepsilon} x\,e^{-x^{2}/2}\,dx
\qquad\qquad + \left(N-\tfrac{1}{2}\right)\varepsilon\int_{(N-1)\varepsilon}^{\infty} x\,e^{-x^{2}/2}\,dx\Bigg] .   (6)

[8] The summation term contains one integral for each positive quantization level except the highest one. The final integral, on the lower line of equation (6), covers the range of x above the highest threshold, for which the assigned value is y = (N − 1/2)ε. Then, since ∫ x exp(−x²/2) dx = −exp(−x²/2), equation (6) reduces to

\langle xy\rangle = \varepsilon\,\sqrt{\frac{2}{\pi}}\left[\frac{1}{2} + \sum_{m=1}^{N-1} e^{-m^{2}\varepsilon^{2}/2}\right] .   (7)
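For completeness, the intermediate step between equations (6) and (7) (our own working, not shown explicitly in the text) follows from evaluating each integral in equation (6):

\langle xy\rangle = \varepsilon\sqrt{\frac{2}{\pi}}\left[\sum_{m=0}^{N-2}\left(m+\tfrac{1}{2}\right)\left(e^{-m^{2}\varepsilon^{2}/2}-e^{-(m+1)^{2}\varepsilon^{2}/2}\right)+\left(N-\tfrac{1}{2}\right)e^{-(N-1)^{2}\varepsilon^{2}/2}\right] .

Collecting the coefficient of each exponential, the sum telescopes: the m = 0 term contributes the 1/2 (since e^{0} = 1), and each exponential with index 1 ≤ m ≤ N − 1 is left with a coefficient of unity, which gives equation (7).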

[9] To evaluate the variance of y, again consider first the contribution from values of x that fall between mε and (m + 1)ε. The contribution to 〈y²〉 from all values of x within this level is

\left(m+\tfrac{1}{2}\right)^{2}\varepsilon^{2}\,\frac{1}{\sqrt{2\pi}}\int_{m\varepsilon}^{(m+1)\varepsilon} e^{-x^{2}/2}\,dx .   (8)

[10] For negative x we again include a factor of 2, sum over all positive quantization levels below the highest threshold, and add a term for the range of x above the highest threshold. Thus the total variance of y is

\langle y^{2}\rangle = \frac{2}{\sqrt{2\pi}}\,\Bigg[\sum_{m=0}^{N-2}\left(m+\tfrac{1}{2}\right)^{2}\varepsilon^{2}\int_{m\varepsilon}^{(m+1)\varepsilon} e^{-x^{2}/2}\,dx
\qquad\qquad + \left(N-\tfrac{1}{2}\right)^{2}\varepsilon^{2}\int_{(N-1)\varepsilon}^{\infty} e^{-x^{2}/2}\,dx\Bigg] .   (9)

[11] The right-hand side of equation (9) can be simplified by expressing the integrals in terms of the error function, using erf(ξ/√2) = √(2/π) ∫₀^ξ exp(−t²/2) dt. Then, using equations (3) and (7) (the factors of ε² cancel), we obtain

\eta_{Q} = \frac{\dfrac{2}{\pi}\left[\dfrac{1}{2} + \sum_{m=1}^{N-1} e^{-m^{2}\varepsilon^{2}/2}\right]^{2}}
{\sum_{m=0}^{N-2}\left(m+\tfrac{1}{2}\right)^{2}\left[\mathrm{erf}\!\left(\dfrac{(m+1)\varepsilon}{\sqrt{2}}\right)-\mathrm{erf}\!\left(\dfrac{m\varepsilon}{\sqrt{2}}\right)\right] + \left(N-\tfrac{1}{2}\right)^{2}\left[1-\mathrm{erf}\!\left(\dfrac{(N-1)\varepsilon}{\sqrt{2}}\right)\right]} .   (10)

[12] For the case in which the number of levels is odd, the thresholds occur at values that are an integer ± 1/2 (in units of ε), as in the 9-level case in Figure 1. Values of x that fall within the quantization level between (m − 1/2)ε and (m + 1/2)ε are assigned the quantized value mε. We represent the odd number of levels by 2N + 1. Then, following the steps outlined for even numbers of levels, we obtain

\eta_{Q} = \frac{\dfrac{2}{\pi}\left[\sum_{m=0}^{N-1} e^{-(m+1/2)^{2}\varepsilon^{2}/2}\right]^{2}}
{\sum_{m=1}^{N-1} m^{2}\left[\mathrm{erf}\!\left(\dfrac{(m+1/2)\varepsilon}{\sqrt{2}}\right)-\mathrm{erf}\!\left(\dfrac{(m-1/2)\varepsilon}{\sqrt{2}}\right)\right] + N^{2}\left[1-\mathrm{erf}\!\left(\dfrac{(N-1/2)\varepsilon}{\sqrt{2}}\right)\right]} .   (11)

3. Results

[13] For even and odd numbers of levels, equations (10) and (11), respectively, provide values of ηQ from starting values of N and ε. They can be evaluated rapidly in Mathcad, Mathematica, or similar programs. Examples of the results are shown in Table 1. Values of ε are in units of σ and are chosen empirically to maximize ηQ. Curves showing ηQ as a function of ε are shown in Figure 2. As ε → 0 the output of the quantizer depends only on the sign of the input, so the curves meet the ordinate axis at the two-level value of ηQ, 2/π. As ε increases, more of the higher (positive and negative) levels contain only values in the extended tails of the Gaussian distribution, so the number of levels that make a significant contribution to the output decreases, and the curves merge together. The curves for even numbers of levels approach the two-level value asymptotically, and the curves for odd numbers of levels tend toward zero. In situations where different numbers of levels are used at the two inputs of a correlator, the signal-to-noise degradation at the output is the geometric mean of the two quantization efficiencies.
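Equations (10) and (11) can equally be coded in a few lines. The following Python sketch (our own illustration, using the standard library's math.erf; the function names are our choice) reproduces entries of Table 1.

    import math

    def eta_q_even(N, eps):
        # Quantization efficiency, equation (10): 2N evenly spaced levels,
        # threshold spacing eps in units of sigma.
        num = 0.5 + sum(math.exp(-(m * eps) ** 2 / 2) for m in range(1, N))
        den = sum((m + 0.5) ** 2 *
                  (math.erf((m + 1) * eps / math.sqrt(2)) - math.erf(m * eps / math.sqrt(2)))
                  for m in range(N - 1))
        den += (N - 0.5) ** 2 * (1 - math.erf((N - 1) * eps / math.sqrt(2)))
        return (2 / math.pi) * num ** 2 / den

    def eta_q_odd(N, eps):
        # Quantization efficiency, equation (11): 2N + 1 evenly spaced levels.
        num = sum(math.exp(-((m + 0.5) * eps) ** 2 / 2) for m in range(N))
        den = sum(m ** 2 *
                  (math.erf((m + 0.5) * eps / math.sqrt(2)) - math.erf((m - 0.5) * eps / math.sqrt(2)))
                  for m in range(1, N))
        den += N ** 2 * (1 - math.erf((N - 0.5) * eps / math.sqrt(2)))
        return (2 / math.pi) * num ** 2 / den

    # Spot checks against Table 1
    print(eta_q_even(2, 0.995))     # 4 levels   -> ~0.881154
    print(eta_q_odd(4, 0.534))      # 9 levels   -> ~0.969304
    print(eta_q_even(128, 0.0312))  # 256 levels -> ~0.999912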

Figure 2.

Quantization efficiency as a function of threshold spacing in units of σ. The curves are for 64-level (solid curve), 16-level (long-dashed curve), 9-level (short-dashed curve), and 4-level (long-and-short-dashed curve) quantization.

Table 1. Values of ε and ηQ for Several Numbers of Levels

Number of Levels     N      ε        ηQ
       2             -      -        0.636620
       3             1      1.224    0.809826
       4             2      0.995    0.881154
       8             4      0.586    0.962560
       9             4      0.534    0.969304
      16             8      0.335    0.988457
      32            16      0.188    0.996505
      64            32      0.104    0.998960
     128            64      0.0573   0.999696
     256           128      0.0312   0.999912
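The tabulated values of ε can be recovered with a simple one-dimensional search. A minimal sketch, assuming the eta_q_even and eta_q_odd helpers from the sketch above (the search range and step count are arbitrary choices), is:

    def best_eps(eta_fn, N, lo=0.01, hi=2.0, steps=4000):
        # Coarse grid search for the threshold spacing that maximizes eta_Q.
        grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
        return max(grid, key=lambda eps: eta_fn(N, eps))

    # e.g., best_eps(eta_q_even, 4) -> ~0.586 (8 levels)
    #       best_eps(eta_q_odd, 4)  -> ~0.534 (9 levels)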

[14] If the requirement of constant voltage spacing between adjacent thresholds (and between adjacent output values) is relaxed, the individual levels can be adjusted to obtain an improvement in ηQ of a few tenths of a percent, decreasing with increasing numbers of levels. Level values optimized in this way are given by Jenet and Anderson [1998] for several numbers of levels. A highly detailed analysis of quantization effects that also includes threshold optimization is in preparation (F. R. Schwab, Optimal quantization functions for multi-level digital correlators, manuscript in preparation, 2007).

[15] In the analysis by Jenet and Anderson [1998] the assigned value for a signal that falls between adjacent level thresholds is equal to the RMS value of the corresponding Gaussian distribution between these thresholds. In the present case we want to be able to maintain linearity of response over voltage ranges that include non-Gaussian interfering signals (see section 4), and have used assigned values that are the mean of the threshold values between which the input voltage falls. This is also generally applicable to commercially available digital quantizers. Jenet and Anderson adjust the quantization parameters to minimize the RMS difference between the unquantized and quantized values of the input waveforms, whereas we have adjusted the spacing between thresholds, ε, to maximize the quantization efficiency. Differences in the results, however, are small: comparison of the ηQ values in Table 1 with the corresponding values of Jenet and Anderson shows that our value for 4 levels is 1.9% higher, but in the other cases the differences occur only in the fourth or higher decimal places. Jenet and Anderson list values of a parameter l which is equal to 1 − ηQ.

4. Choice of Level-Threshold Spacing

[16] Often the requirement for calculation of the quantization efficiency is simply to find the value of ε that provides the maximum sensitivity for a particular number of levels. In recent systems, however, ε is sometimes chosen so that signal voltages much higher than the RMS system noise can be accommodated within the range of the quantizer. This preserves an essentially linear response to interfering signals, so that they can subsequently be mitigated by filtering or other processes. For example, with 256 levels (8-bit representation) and ε = 0.0312 to maximize sensitivity, ±128 levels corresponds to ±4σ, i.e., 12 dB above the RMS system level. However, with ε = 0.25, equation (10) shows that ηQ = 0.9948, and ±128 levels then corresponds to 30 dB above the RMS level. Thus, with 256 levels, a sacrifice of 0.5% in signal-to-noise ratio can permit an increase of 18 dB in the headroom above the interference-free power level. Such an arrangement is particularly useful at the lower frequencies used in radio astronomy observations, where interference is common and the bandwidths used are narrower, allowing larger numbers of levels without incurring undesirably high bit rates.
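A short numerical check of this trade-off, again assuming the eta_q_even helper from section 3 (here the headroom is taken simply as the full-scale value Nε expressed in decibels relative to the RMS level σ):

    import math

    def headroom_db(N, eps):
        # Full-scale magnitude N*eps in units of sigma, expressed in dB above the rms level.
        return 20.0 * math.log10(N * eps)

    for eps in (0.0312, 0.25):
        print(eps, round(eta_q_even(128, eps), 5), round(headroom_db(128, eps), 1))
    # eps = 0.0312: eta_Q ~ 0.99991, headroom ~ 12.0 dB
    # eps = 0.25:   eta_Q ~ 0.9948,  headroom ~ 30.1 dB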

Acknowledgments

[17] The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
