Radio Science

The use of reflected GPS signals to retrieve ocean surface wind speeds in tropical cyclones

Authors


Abstract

Since the first intentional acquisition of GPS signals reflected from water bodies, one of the objectives driving the research has been to determine whether the acquired signal can provide useful geophysical information about the reflecting surface. One obvious quantity of considerable interest is ocean surface wind speed. Theory suggested that the reflection technique, a form of bistatic radar, would be sensitive to surface roughness, which in turn is driven by wind speed. This paper reports the results of applying the GPS reflection technique to ocean surface winds, particularly ocean surface winds in tropical cyclones, using data acquired over the past decade. Examples of wind speed retrievals will be given for some illustrative cases of hurricanes and tropical storms. Results from several hurricanes and tropical storms, along with a description of how the signal was calibrated, will be presented. In addition, a quantitative comparison will be given between dropsondes deployed by NOAA during the storms and GPS reflection derived wind speeds taken at the same time. Conditions in which the GPS technique compares very well, as well as examples where the comparison is not so good, will be presented. Suggestions will be given as to when the GPS technique can be used with confidence and when it is likely to be at variance with other methods.

1 Introduction

While signals reflected from the ocean were intentionally detected in 1996 [Katzberg and Garrison, 1996], no means was available at that time to store useful data for subsequent analysis. In 1997, a GEC Plessey receiver was successfully adapted at NASA's Langley Research Center and flown over the Chesapeake Bay, collecting and storing reflected GPS signals [Garrison et al., 1998]. A year later the relationship between the range coded signals and the ocean surface slope probability density was identified, along with the relationship to surface wind speed [Lin et al., 1999]. With the basic relationship between wind speed, ocean surface slopes, and reflected GPS signals identified, a number of flight experiments were done, including under-flights of TOPEX/Poseidon in 1998 and support for the EOPACE 1999 field campaign [Garrison et al., 2002]. Also in 1998, an aircraft from NASA Langley transported one of the GPS receivers into the outer boundaries of Hurricane Bonnie off the North Carolina Outer Banks. Hurricane Bonnie was a category 2/3 storm on the Saffir-Simpson scale; however, aircraft safety restrictions prohibited flight into the center of the storm. In general, up to that time, no data from winds greater than 10–12 m/s had been acquired.

In the year 2000, the NOAA Aircraft Operations Center at MacDill AFB, Florida, installed a second flight version of the GPS reflection receiver in one of the Hurricane Hunters to assist in the evaluation of the GPS ocean surface winds technique. Flight data were acquired during circumnavigation of Hurricanes Debby, Gordon, Keith, and Leslie, and a penetration into Hurricane Michael. Wind speed retrievals from Michael were the first done using GPS signals reflected from the ocean inside a tropical cyclone [Katzberg et al., 2001]. While the receiver is fully autonomous, parameters must be set before flight and are not changed in-flight. No data were acquired in 2001 while the receiver was upgraded and parameters set to reflect lessons learned during the 2000 storm season. In 2002, data were again acquired, followed by flights in 2003, 2004, and subsequent years including 2012. Consequently, starting in the year 2000, the GPS surface reflection receiver, installed on one or another of the NOAA Hurricane Hunters, has been able to create a database of wind speeds that includes values considerably above 10–12 m/s.

The retrievals in use during the earlier years of data acquisition were based on a Gram-Charlier slope probability density [Cox and Munk, 1954] with an associated linear dependence of mean square slope on wind speed. A simple correction based on buoy over-flight calibrations was applied to the Cox and Munk mean square slope (m.s.s.)-wind relationship to account for the difference between the optical wavelengths they used and the longer L band wavelength.

Examination of results from data acquired inside tropical cyclones showed that the wind speed retrievals were not giving values consistent with other methods such as surface extrapolated flight level winds or available buoys [Katzberg et al., 2006]. Generally above 15–20 m/s, wind speeds appeared to be consistently underestimated. An effort was undertaken to determine whether the low retrieved wind speeds resulted from an inherent saturation of the surface mean square slopes with wind speed, or whether the assumption of a linear m.s.s.-wind relationship was incorrect.

In 2006 a comparison with the U.S. Navy COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) was completed [Katzberg et al., 2006], which demonstrated that the surface mean square slope does not saturate for wind speeds up to at least 40 m/s. Although the relationship between the mean square slope and wind speed becomes nonlinear, a monotonic relationship exists to at least 40 m/s. A curve fit was also developed to provide the required correction for high wind speed retrievals. The already existing data sets were reevaluated and subsequent ones processed with this correction function.

In 2009, data from a few storms were compared with NOAA dropsondes, which are deployed during penetrations into tropical cyclones [Katzberg and Dunion, 2009]. While some storms gave anomalous results, the GPS wind speed retrievals by and large agreed well using the newly developed calibration.

This paper presents a summary of ten years of GPS surface reflection wind speed retrievals acquired from penetrations and circumnavigations of several tropical cyclones. Results from the tropical cyclone retrievals will be compared to dropsondes, which are considered the standard method used by NOAA to determine vertical wind speed profiles in tropical cyclones. Anomalous behavior will be identified, and possible reasons giving rise to these deviations will be discussed, including possible effects of wind direction.

The paper is organized so as to present a basic summary of the technique, including some definitions of geometry and idealized behavior of the receiver. The matched filter wind speed retrieval technique is described, as are the corrected slope probability density, how this correction was developed, and how the model waveforms are generated. Impacts of thermal and fading noise on the retrieval process are discussed, and a model for predicting the wind speed error arising from the noise is given. A section of the paper is devoted to identifying situations in which the GPS reflection technique cannot be expected to produce reliable results, situations which led to editing such cases from the data set. Practical considerations related to implementing the matched filter technique are discussed, including the two basic algorithms that have been used. Results from over ten years of data acquisition and retrievals are presented to show the level of performance achieved so far for wind speeds from a few meters per second to well over 40 m/s. A more complete discussion of the geometry and some approximations or simplifications is given in Appendix A, while Appendix B shows that using the sampled data of the receiver is still very close to an ideal matched filter, and Appendix C contains an approach to estimate the wind speed error arising from the thermal and fading noise which contaminate the measurement.

2 Description of the GPS Surface Reflection Technique for Ocean Measurements

The configuration for the discussion which follows is illustrated in Figure 1. The GPS satellites form a constellation in space, which is designed to always have in view a sufficient number to permit a three-dimensional position determination by the user. Details of the system can be found in various sources, such as Parkinson et al. [1996]. For what is to be presented here, it is sufficient to note some of the salient characteristics of the signals, which can be readily utilized by the general public. The civilian-accessible code called “C/A” is transmitted at 1.575 GHz by each satellite.

Figure 1.

Geometry for surface reflection showing specular point and three ellipses of constant delay δ3 > δ2 > δ1 > δ0. Signal is acquired via an upward directed antenna and an oppositely polarized downward oriented antenna.

The signal coding is made up of a pseudo-noise code with a period of 1.0 ms. There are 1023 possible "chips" (0.977517 µs duration), where a plus one or minus one signal modulation value can be found in each of the 1023 possible slots, depending on the transmitted code impressed on the signal. There are several of these codes, each with the characteristic that cross-correlating one with another, or the same one with at least one chip time shift, will average to a number close to zero. Identical codes that overlap will add to 1023 times the value for one in-phase chip product. A partial overlap of identical codes gives a result proportional to the amount of overlap. The codes are referred to as "orthogonal" because of this characteristic, and it is thus possible for each GPS satellite to transmit on the same frequency but still provide separable signals. Information can be transmitted with the fixed codes by simply changing the sign of a whole group of 1.0 ms code chips and then detecting this slow, "super-modulation" as a data stream.

Even in the case of a matching code, if there is a relative delay less than one full chip, there will be a partial response between the shifted replicas. Given the constant plus or minus value of modulation for each code chip, the integrated response from the 1023 chips results in a triangular shaped resultant value as a function of relative delay. This response is referred to as a “Lambda function” from its resemblance to the Greek capital letter lambda. The width of this function is ±Tc, Tc being one chip time or 0.977517 µs. When projected onto the surface, this function represents a sensitivity profile or “aperture function,” which limits the smallest spatial detail that can be sensed by this technique.

Since the power from the surface is of interest, this signal is normally squared in the receiver signal processing, causing the sensitivity profile to take the form of a “Lambda-squared” function:

Λ²(τ − τ0) = [1 − |τ − τ0|/Tc]² for |τ − τ0| ≤ Tc, and 0 otherwise    (1)

In this equation, τ and τ0 are delays of the replica code and actual delay, respectively. Other than the effect of narrow band pseudo-noise modulation, the GPS transmitted signal may be considered monochromatic with a wavelength of 0.1905 m. As a result of the very large distance from the GPS satellite, the form of the electromagnetic field at the earth's surface can be thought of as a plane wave when the platform is an aircraft.
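For reference, the aperture function of equation (1) can be evaluated numerically. The following is a minimal sketch (not the flight software), assuming only the standard C/A chip duration; the sample delays are illustrative.

```python
import numpy as np

TC = 0.977517e-6  # C/A code chip duration in seconds

def lambda_squared(tau, tau0=0.0, tc=TC):
    """Lambda-squared aperture function of equation (1).

    Returns (1 - |tau - tau0| / tc)**2 within one chip of the true delay
    tau0 and zero elsewhere.
    """
    x = np.abs(np.asarray(tau) - tau0) / tc
    return np.where(x <= 1.0, (1.0 - x) ** 2, 0.0)

# Response sampled at half-chip steps, the spacing used by the receiver
half_chip_delays = np.arange(-2.0, 2.5, 0.5) * TC
print(lambda_squared(half_chip_delays))  # peaks at 1.0, zero beyond +/- 1 chip
```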

When the signal meets the surface, it appears to be reflected generally along the specular reflection ray. If the surface is completely flat and mirror-like, the signal appears to come directly from the satellite, but located under the surface along a line making an angle equal to the elevation angle to the satellite from the surface. The bulk delay via the specular point Δ for a receiver at height h above the plane locally tangent to the earth's surface and with a satellite of elevation γ is given as follows (from Appendix A):

Δ = 2h sin(γ)    (2)
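As a quick numeric check of equation (2), the sketch below uses an assumed aircraft altitude and satellite elevation angle (the specific values are illustrative only) to express the specular-point delay in C/A chips.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
TC = 0.977517e-6         # C/A chip duration, s
CHIP_LENGTH = C * TC     # ~293 m of path delay per chip

h = 3000.0               # assumed receiver height above the surface, m
gamma = np.radians(70.0) # assumed satellite elevation angle

bulk_delay_m = 2.0 * h * np.sin(gamma)            # equation (2)
print(bulk_delay_m, bulk_delay_m / CHIP_LENGTH)   # ~5638 m, ~19 chips of extra delay
```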

When the surface becomes rough, signal can be expected to come from areas away from the specular point. Facets on the ocean surface could have the correct orientation to redirect signal to the antenna, depending on surface conditions. The locus of points on the surface which correspond to a fixed delay greater than that at the specular point can be described as a family of (nearly) concentric ellipses [Beckmann and Spizzichino, 1963]. Setting the starting time of the internal receiver code sequence to any value greater than the delay for satellite-to-specular-point-to-receiver can be seen to define one of the ellipses on the surface as well as the maximum value of the sensitivity pattern of the "Lambda"-squared function.

The effect on a nearly plane wave of a surface of random height irregularities can be found in Beckmann and Spizzichino [1963], among others. Beckmann showed that the reflected signal from a random surface can be described by an integral over the reflecting surface of a Gaussian slope probability density dependent on the tangents (slopes) of the local surface normal vectors. Barrick [1968] extended the validity to any slope probability density function, an important generalization. Relationships describing the scattering geometry and other mathematical details used to develop the wind speed retrieval technique reported here can be found in Appendix A.

Signal power recorded at any delay includes response up to ±Tc around the delay center, corresponding to an annulus (or elliptical disk, near the specular point) projected onto the surface. The slope probability function within each annulus may be thought of as that fraction of scatterers so oriented as to be able to reflect signal to the antenna with similar delay. The total signal from an individual annulus is the integral over the surface contained within that annulus at a particular delay, with a Lambda-squared function as a sensitivity profile.

Other parameters such as Doppler shift, antenna gain, etc. can also affect the detected signal. A more extensive discussion of these effects can be found in Zavorotny and Voronovich [2000].

3 Matched Filter Wind Speed Retrievals

Two methods are typically used to make wind speed retrievals from GPS observations. The first relies on a statistical estimation approach built around model waveforms [Garrison et al., 2002], and the second is a direct matched filter approach that also uses sets of model waveforms [Katzberg and Garrison, 2000]. The second method is the one used in the retrievals reported here.

The retrieval method used to process the data and reported in this paper is a form of matched filter [see, for example, Carlson, 1968]. The incoming signal (plus noise) is cross-correlated with a set of model waveforms, one for each wind speed and corresponding to the particular experiment conditions. The wind speed corresponding to the model waveform which has the highest response is then identified as the retrieved wind speed.

Receiver considerations dictate that this matched filter is implemented with samples at one-half code chip steps, which are cross-correlated with model waveforms sampled at equivalent steps corresponding to an expected range of wind speeds. Errors in exact delay matching are corrected by creating model waveforms that are "vernier-ed" in one-hundredth-of-a-chip steps. For each wind speed in the model waveform group, the test waveforms are "swept" for an individual "best match" as well as tested at each wind speed. A discussion of the effect of sampling the waveforms is given in Appendix B, where it is shown to be equivalent to continuous-signal matched filter processing.
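The selection step itself reduces to a normalized cross correlation against a bank of model waveforms. The sketch below is a minimal illustration, not the receiver host software; the model bank and the vernier offsets are assumed to have been generated as described in section 3.1.

```python
import numpy as np

def retrieve_wind_speed(measured, models):
    """Matched-filter wind speed selection (illustrative sketch).

    measured : 1-D array of surface power samples at half-chip steps.
    models   : dict mapping wind speed (m/s) to a dict of vernier delay
               offsets (fractions of a chip) to model waveforms sampled
               at the same half-chip steps.
    Returns the wind speed whose best-aligned model waveform gives the
    largest normalized cross correlation with the measurement.
    """
    m = measured / np.linalg.norm(measured)
    best_speed, best_score = None, -np.inf
    for speed, shifted in models.items():
        for offset, w in shifted.items():
            score = float(np.dot(m, w / np.linalg.norm(w)))
            if score > best_score:
                best_speed, best_score = speed, score
    return best_speed
```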

3.1 The Model Waveforms

Creating the model waveforms involves integrating the slope probability density over the surface, while incorporating the Lambda-squared dependence as a function of position in each annulus. Other effects such as antenna gain pattern do not affect the response here, since hemispherical response antennas are used to receive signals.

The integral over the surface can be converted to a polar coordinate system, which considerably simplifies the calculations. From the relationships of the remote sensing geometry in Appendix A, the semi-major and semi-minor axes of the ellipse of constant delay are as follows:

a = (1/sin γ) √(2hδ/sin γ),  b = √(2hδ/sin γ)    (3)

where δ is the path delay in excess of that at the specular point.

If the coordinate system is changed along the x-axis by replacing x by x ⋅ sin(γ), then the ellipses become circles. The coordinates become an integral with dA = r dr dφ, and another change of coordinates (where r dr is now identified as d(r²/2)) gives

dA = (h/sin γ) dδ dφ    (4)

for a differential area on the surface, where dδ is differential time delay times the speed of light: cdτ. As shown in Appendix A, the above changes of variable convert a bivariate Gaussian slope probability density (spdf), approximately, into an exponential dependent on the delay in the receiver.
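A sketch of this construction is given below. It assumes the exponential-in-delay surface response derived in Appendix A, with an assumed decay constant of 2·h·mss/(c·sin γ); the exact constant and the calibrated mapping from m.s.s. to wind speed follow section 3.2 and are not reproduced here.

```python
import numpy as np

C = 299_792_458.0
TC = 0.977517e-6

def model_waveform(mss, h, gamma_deg, n_half_chips=14):
    """Sketch of a model power-versus-delay waveform.

    An exponential decay in excess delay (Appendix A) is convolved with the
    Lambda-squared aperture function and sampled at half-chip steps.
    The decay constant tau0 = 2*h*mss/(C*sin(gamma)) is an assumed form;
    the exact constant follows from the full geometry in Appendix A.
    """
    gamma = np.radians(gamma_deg)
    tau0 = 2.0 * h * mss / (C * np.sin(gamma))

    dt = TC / 20.0                                   # fine grid for the convolution
    tau = np.arange(0.0, (n_half_chips + 4) * TC / 2.0, dt)
    surface = np.exp(-tau / tau0)                    # exponential surface response
    lam2 = (1.0 - np.abs(np.arange(-TC, TC, dt)) / TC) ** 2
    waveform = np.convolve(surface, lam2, mode="full")[: tau.size]

    half_chip_idx = (np.arange(n_half_chips) * (TC / 2.0) / dt).astype(int)
    w = waveform[half_chip_idx]
    return w / w.max()                               # normalized model waveform

# Larger m.s.s. (higher wind) spreads power to longer delays (compare Figure 2)
for mss in (0.02, 0.05, 0.10):
    print(np.round(model_waveform(mss, h=3000.0, gamma_deg=70.0), 2))
```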

3.2 Slope Probability Density, Calibrations and Corrections

To use the matched filter technique, both an spdf and the relationship between that spdf and wind speed must be defined. The starting point for the spdf selection was that of Cox and Munk [Lin et al., 1999]. The Cox and Munk spdf is essentially a bivariate Gaussian function with polynomial correction terms (i.e., a Gram-Charlier probability density).

The linear relationship between mean square slope (m.s.s.) and wind speed of Cox and Munk was used as an argument to the spdf. The mean square slope dependence on wind speed was modified in view of the fact that the Cox and Munk data were based upon an optical technique, while the GPS data are taken at a microwave frequency of 1.575 GHz (19.05 cm wavelength). As suggested by Wilheit [1979], the mean square slopes at microwave frequencies should be reduced to account for the longer wavelengths involved. Originally, the L band scale factor of 0.33 from the Wilheit model was used. After some initial flights and comparisons with buoys, it was found that the value 0.33 was too much of a reduction. A scale factor which gave a better fit was determined to be approximately 0.45 applied to the mean square slope at any particular wind speed. The validity of the adjusted relationship was tested against buoys or other wind speed reporting stations covering a range of wind speeds up to 10–12 m/s.
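For orientation, the scaled linear relationship can be written as below. The Cox and Munk clean-surface coefficients are the commonly quoted optical values, and the 0.45 factor is interpreted here as a multiplicative scaling of the optical m.s.s.; both are assumptions of this sketch rather than the exact calibrated function.

```python
def mss_lband(wind_speed_ms, scale=0.45):
    """Scaled Cox and Munk total mean square slope (illustrative sketch).

    Uses the commonly quoted clean-surface optical relationship
    mss = 0.003 + 5.12e-3 * U and applies the multiplicative L band
    scale factor discussed in the text (0.45 assumed here). Valid only
    over the wind speed range where the linear fit holds.
    """
    return scale * (0.003 + 5.12e-3 * wind_speed_ms)

for u in (5.0, 10.0, 15.0):
    print(u, round(mss_lband(u), 4))
```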

As data began to be acquired at wind speeds above 10–12 m/s, it was determined that the wind speed retrievals were not producing wind speeds consistent with surface extrapolated flight level winds or available buoys. Even with the L band correction, the GPS retrievals underestimated the high wind speeds found in tropical cyclones. The reason for this underestimation was believed to lie in the (modified) Cox and Munk linear m.s.s. relationship. Two possibilities were considered: first, that the linear Cox and Munk relationship was not valid much above 10 m/s; second, that the mean square slopes experience saturation effects at high wind speeds or even become multivalued. Saturation effects in the mean square slope such as these have been seen in scatterometers for wind speeds above 45 m/s [Carswell et al., 2000].

3.3 COAMPS Calibration

The NOAA hurricane hunter aircraft typically fly over the open ocean as they approach tropical storms, and a wind field of increasing strength passes beneath the aircraft. Such flight paths provide excellent opportunities to develop a GPS-based wind-speed-versus-surface-truth calibration. In essence, the flight path (time or space) provides a parametric variable over which the various values of wind speed can be “mapped” against GPS-derived values. While the NOAA AOC aircraft deploy dropsondes, these are typically used only in and near the tropical storm leaving the path to and from the storm without wind speed measurements. The most obvious alternative is model-based wind fields from weather prediction systems. The one used for this study is the U.S. Navy's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). COAMPS “represents state-of-the-art analysis (including the Nowcast capability) and short-term (up to 72 h) forecast tools applicable for any given region of the Earth in both the atmosphere and ocean” [Chen et al., 2003]. With its 0.2° grid and root-mean-square accuracies of 2 m/s [Hsu et al., 2002], COAMPS provided sufficient density to compare with the high spatial resolution GPS data.

GPS-retrieved wind speeds from two storms, Rita 2005 and Ophelia 2005, were used for the calibration data sets. The closest 0.2° grid point was selected, and the reflected data near that point were averaged. The GPS wind speeds were generated using the modified Cox and Munk linear m.s.s. versus wind speed function. The calibration entailed generating a correction function for the linear relationship. The results showed a "bending" of the m.s.s. relationship but no saturation for wind speeds above 35 m/s. Retrievals in this paper use the following version of the matching function [Katzberg et al., 2006]:

display math(5)

The function f (U) is defined as

display math(6)

where U is the wind speed to be determined and mss is the mean square slope used to define the model waveforms, with || and ⊥ referring to the directions parallel and perpendicular to the wind, respectively.

The integration of the slope probability density over the surface appropriate to the GPS satellite elevation angle and convolution of the Lambda-squared function scaled for the aircraft altitude gives the model waveforms. These are generated for each expected wind speed in steps and stored for cross correlation with the acquired surface data.

An example of the result of performing such integrations is shown in Figure 2 in terms of code delay for three wind speeds.

Figure 2.

Example of model waveforms generated for increasing wind speed, as a function of code chip delay.

After implementing the COAMPS-based correction, the retrieved wind speeds were much higher than before and consistent with typically reported tropical cyclone winds. An example of the improved results is shown in Figure 3 for Hurricane Rina in October of 2011. It can be seen that the wind speeds reach as much as 40 m/s (80 knots) and that the typical vortex-like pattern is reproduced. The right side of the figure shows the track flown by the aircraft from the position solution generated by the same GPS receiver itself, greatly aiding geo-referencing of the data.

Figure 3.

Typical data obtained from storm penetrations: (a) wind speed as a function of GPS seconds of the week and (b) the associated flight track.

4 Noise Effects on Wind Speed Retrieval

Two major noise effects impact the accuracy of the matched filter retrieval process: fundamental thermal noise and reflected-signal fading noise. The thermal noise arises from antenna sources, noise added by the antenna preamplifier, and lesser contributions from subsequent amplification, frequency mixers, etc. Fading noise results from the summation of randomly phased signal components occurring at the antenna.

4.1 Basic Signal and Noise Levels

While GPS details are classified, open literature specifications [see Parkinson et al., 1996] state that the C/A code power level must be such that the output from a hemispherical antenna (3 dB gain) must be greater than −160 dBW. Ideal thermal or “kT” noise would be −204 dBW/Hz over the entire GPS bandwidth.

The incoming signal in the receiver is down-converted and cross-correlated with the perfectly matched C/A code to produce a constant signal power. For the receiver operation employed here, the signal and noise are coherently summed over the 1.0 ms period of the C/A code. This summing represents a "running mean" integration of the noise and can be shown to yield a noise-equivalent bandwidth of 1 kHz. The down conversion of the kT noise produces the same noise power density as found at the antenna-amplifier (scaled by RF channel gain, as is the signal). The 1 ms cross correlation and summing in the receiver thus yields a constant signal power (for perfectly matched codes) and thermal noise in a 1 kHz (+30 dB-Hz) bandwidth with a density of −204 dBW/Hz. The front-end amplifier typically adds noise of +3 dB, giving a total noise power of −171 dBW, compared to −160 dBW for the signal, ignoring channel gain, which would add to both equally. The basic signal power to noise power ratio from the unmodified incoming GPS signal is then approximately 11 dB.
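The decibel bookkeeping above can be summarized in a few lines; the numbers are those quoted in the text.

```python
signal_dbw = -160.0          # minimum received C/A power from a 3 dB antenna
kt_dbw_per_hz = -204.0       # ideal thermal noise density
bandwidth_db_hz = 30.0       # 1 kHz noise-equivalent bandwidth from 1 ms summing
noise_figure_db = 3.0        # typical front-end amplifier contribution

noise_dbw = kt_dbw_per_hz + bandwidth_db_hz + noise_figure_db   # -171 dBW
snr_db = signal_dbw - noise_dbw                                 # ~11 dB
print(noise_dbw, snr_db)
```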

The process of squaring the signal to get a power-versus-delay measurement also yields a squared noise, which has an average value and fluctuating component. Thermal noise can be modeled as a Gaussian random process, which after squaring, gives a constant term and a fluctuating term [Papoulis, 1965]: The constant term is the integral over the bandwidth of the noise power spectral density in that channel. The root mean square of the fluctuating term is equal to √2 times the mean value. Thus, the thermal noise on a per-1-ms-sample basis has a “signal-to-noise” of 1/√2. Since nonoverlapping 1 ms integrations generate independent samples from the front-end noise, the signal-to-noise can be improved by incoherent averaging.

For the receiver implementation reported here, the signal is delayed to compensate for the platform altitude, plus one to two half-code chips more, so that the first data bin can be used as a noise monitor. The data in this bin are smoothed and subtracted from the other "range bins," leaving only signal and the fluctuating component that contaminates each signal sample and disturbs the matching process.

Multiplication and adding in the receiver of the reflected signal with the matching PRN code generates a value that depends on the instantaneous signal phasor summation at the antenna. This noisy signal is described as "fading" noise [Ulaby et al., 1986], whose components arise from in-phase and quadrature components of the received signal. Squaring and summing the in-phase and quadrature phasor components produces an exponentially distributed power. This form of noise gives rise to a per-sample signal-to-noise of unity. Improvement can come from averaging of uncorrelated samples.

During the cross correlation between the reference code and the signal, the ocean surface must remain relatively constant to permit a coherent summation to occur in the receiver. Times over which the sea surface may be considered "frozen" are typically taken to be 6–8 ms. This places an upper limit on the coherent integration time and on the accumulation of coherent signal. The processing done to obtain wind speed assumes that the surface does not change appreciably from one C/A code integration to the next. On the other hand, this coherence of the surface sets the time interval between samples that could be considered uncorrelated. This means that in a 1 s period, 120–160 or fewer independent samples could be expected, noticeably fewer than the number of independent samples for the thermal noise.
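The benefit of incoherent averaging on the fading noise can be illustrated with a small Monte Carlo sketch, using exponentially distributed power samples and an assumed count of independent samples per second in the range quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_power = 1.0                       # per-sample SNR of unity for fading noise

for n_independent in (1, 140, 1000):   # 140 ~ independent surface samples per second
    trials = rng.exponential(mean_power, size=(5_000, n_independent)).mean(axis=1)
    snr = trials.mean() / trials.std() # improves roughly as sqrt(n_independent)
    print(n_independent, round(snr, 1))
```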

4.2 Effect of the Matched Filtering Process on Wind Speed Retrievals

The noise associated with each measurement from the surface causes an uncertainty in the selection of maximum cross correlation and, hence, apparent wind speed. The probability of selecting a waveform with greater response than the ideal one is the probability that some waveform with a lower peak response plus noise exceeds the correct waveform. By expanding the Schwarz inequality used for the waveform testing to first order in the detected noise, the error probability can be bounded by applying the Tchebycheff or Vysochanskii-Petunin inequality to this approximation (see Appendix C):

display math(7)

The probability that an erroneous wind speed will have the "top score" among rival choices is the probability that the sampled actual data plus noise will exceed the response of the perfectly matching waveform plus noise. In effect, the noise samples can create a bogus waveform which correlates better with a "wrong" model waveform, giving it the top score.

Applying this relationship to an example of 500:1 signal power to total thermal (kT) noise with five delay samples per waveform and 5 s averaging (averaging 5000 incoherent, 1 ms-integrated samples) gives approximately a 5.0% chance that the wind speed error is more than 10% low and a 6.3% chance that it is more than 10% high. These may be conservative numbers, since some probability densities are bounded below the Tchebycheff limit. Nevertheless, the relationship gives a useful guide in view of the otherwise intractable problem of determining the actual probability density and appears reasonably representative for the measurements reported here.

5 Practical Implementation of GPS Receiver and Algorithms

5.1 Receiver Considerations

Only one receiver implementation has been used to acquire the data sets reported here. Based on a development kit, the receiver was first demonstrated and flown in 1997 at NASA-Langley Research Center. Details of this receiver can be found in Garrison et al. [2002]. There are two modes for this receiver: (1) a single satellite reflection mode and (2) a six satellite, lower signal-to-noise mode.

In the single satellite mode, five of the channels in the receiver are assigned to operate as a normal GPS receiver, while the remaining seven correlator pairs of the receiver are assigned to detect the reflected signal from a selected satellite. Each of the correlators has a fixed delay from the previous, resulting in a staggered sampling by delay interval of the reflected signal. The detected signals are passed to the host computer and squared to give a “power” measurement.

The PRN codes of the receiver repeat every millisecond so the single satellite mode can average power from the same satellite up to 1000 times per second.

The multisatellite mode is set up somewhat differently. Six channels are assigned as the GPS receiver, while six "daughter channels" receive Doppler offset and bulk delay for six different satellites. Each daughter channel is then stepped through a set of staggered half-chip delays to produce a "power versus delay" sample from single 1 ms integrations. Depending on the number of delay steps required, it can be seen that the multiple satellite mode cannot have as high a signal-to-noise ratio because fewer samples of the reflected power are collected per second. In the data reported here, the multiple satellite data will be identified where it affects results.

5.2 Software Algorithms

The implementation of the matched filter has been done in two ways that are aimed at performing averaging to improve signal-to-noise ratio: (1) adding data from equivalent delays and then passing to a cross correlation, matching section to select best fit and (2) one millisecond-by-millisecond cross correlation, scaling, adding, and passing the averaged waveforms through the cross correlator a second time.

The first method, called Single Model Waveform (SMW), tracks rapid changes in wind speed and tends to produce sharper detail and, hence, higher wind speeds. At the same time, this method is a bit noisier, since it is susceptible to a number of small disturbances in the addition process. For example, adding data entries from the same delay to improve signal-to-noise must take into account the constantly changing satellite elevation angle, which shifts the data in the range bins. This is somewhat handled by constantly updating the bulk satellite-surface-to-receiver delay given in equation (2). Changes in elevation angle or aircraft altitude can also cause the bulk delay to change with time. This updating happens at times set by the receiver's internal 1 s position and satellite elevation angle update rate. During the time between receiver updates, the data can "slide" across the range bins before it is corrected. The range-bin sliding can cause a smearing of the data waveform, while the automatic bulk delay correction causes an abrupt jump in the starting location of the data in the range bins.

The accuracy of the altitude solution from the receiver also impacts the value determined for h and can vary considerably as a result of aircraft maneuvers and the limited number (five or six) of satellites being tracked in the receiver.

The second method used for the wind speed retrieval, called Double Model Waveform (DMW) and employed for most of the data reported here, does an immediate fit to the wind speed model waveforms after each 1 ms C/A code cycle and then scales each by the summed power from all range bins. The resulting model waveforms are automatically aligned, allowing elimination of most of the "sliding" effect and of slow changes in altitude. This approach emphasizes stronger signal over weaker to help reduce effects of antenna sensitivity during aircraft maneuvers. The result is a summation of individual 1 ms model waveforms, added together to create a new "model waveform" to compare with the noise-free model waveforms for wind speed retrieval. This method is not only less susceptible to rapid changes but also less directly related to the actual range-bin by range-bin data than the first method. Consequently, in dynamic conditions, peak wind speeds can be attenuated using this method during eye wall transects, etc. The effect of reducing the retrieved values by a few meters per second at the highest wind speeds was considered acceptable, given the otherwise robust performance of the algorithm.
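A compact sketch of the DMW logic is given below. The data structures and names are hypothetical, and the alignment, bulk-delay handling, and scaling details of the actual processing are simplified away.

```python
import numpy as np

def dmw_retrieval(one_ms_waveforms, model_bank, wind_speeds):
    """Sketch of the Double Model Waveform (DMW) retrieval described in the text.

    one_ms_waveforms : iterable of power-versus-delay arrays, one per 1 ms C/A cycle.
    model_bank       : 2-D array, one noise-free model waveform per row, aligned to
                       a common delay origin and unit-normalized.
    wind_speeds      : wind speed (m/s) associated with each row of model_bank.

    Each 1 ms waveform is matched to the bank, the best-fitting model is scaled by
    that waveform's summed power and accumulated; the accumulated waveform is then
    matched against the bank a second time to select the retrieved wind speed.
    """
    accumulated = np.zeros(model_bank.shape[1])
    for w in one_ms_waveforms:
        scores = model_bank @ (w / np.linalg.norm(w))
        best = int(np.argmax(scores))
        accumulated += w.sum() * model_bank[best]     # weight stronger signal more
    scores = model_bank @ (accumulated / np.linalg.norm(accumulated))
    return wind_speeds[int(np.argmax(scores))]
```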

6 GPS Dropsondes and Extrapolation Methods

6.1 GPS Dropsondes

The main purpose of this paper is to present quantitative results of applying the GPS surface reflection technique to tropical cyclone wind retrievals. Typically, the only direct measurement of surface winds comes from aircraft-deployed devices called GPS dropsondes [Hock and Franklin, 1999], in situ instruments that have been deployed from NOAA P-3 Orion research aircraft since 1997. These devices drift in the wind field and transmit positions and atmospheric measurements back to the aircraft every ~5 m as they fall to the surface (2–5 min fall times from typical flight-levels of 1500–4500 m). GPS dropsondes are considered the standard against which many other measurement platforms are compared [e.g., Uhlhorn et al. 2003]. However, these measurements are spatially limited to the exact path in which they fall to the ocean surface. Additionally, although these devices typically report a surface wind (actually scaled to 10 m), these single point measurements represent a semi-Lagrangian instantaneous wind value that may not well-represent the maximum 1 min sustained 10 m wind that is utilized by the NOAA National Hurricane Center (NHC).

6.2 Dropsonde Data Processing

The wind speeds developed from the GPS dropsonde reports are quality controlled and subject to some corrections. There are two primary wind speed processing methods yielding what is called “mandatory pressure altitude” values as well as one referred to as the “1070 hPa” or nominal surface. These are the values contained in the “TEMPDROP” files available from the NOAA Atlantic Oceanographic and Meteorological Laboratory Hurricane Research Division. Editing consists of checks on GPS dropsonde data consistency, smoothing to take out the effects of gusts, continuity of transmitted data, etc.

The other dropsonde data form is the result of applying an interpolation algorithm developed by the NOAA National Hurricane Center that produces a value called WL150, defined as the average wind over the lowest available 150 m of the wind sounding. The WL150 wind speed must then have an extrapolation algorithm applied to give an equivalent surface value. There are other forms of the GPS dropsonde data (e.g., raw and post processed), some of which frequently report actual wind speeds until splash-down occurs. For this paper the "1070" (i.e., 10 m surface) value has been selected because it is considered a standard, and detail in the other values would not likely provide a better comparison given the drift of the GPS dropsondes and other variables.

7 Editing the GPS Wind Speed Retrievals

The past decade or so has involved a large number of storm penetration missions under a wide range of conditions. During this research, some of the fundamental characteristics and limitations of the GPS surface reflection technique have been identified that bear on making comparisons with either GPS dropsondes or alternative methods of obtaining ocean surface winds. Less well-defined influences will be discussed but are not used in the determination of acceptability of data sets for comparisons.

7.1 Over Land Exclusion

Certain physical restrictions have been identified regarding the use of the GPS surface reflection technique. First, the technique cannot be used over land for obvious reasons. While it has been established that surface winds can be retrieved from shallow water over some range of wind speeds, this must be considered of limited application. An example of this is shown in Figure 4. The GPS surface reflection technique retrieval process is seen to fail for sections of the plot, the consequence of a weak signal unrelated to wind speed reflected from the land surface. Moreover, closeness to the shoreline impacts the GPS wind speed retrievals in an as yet unmodeled way associated with fetch.

Figure 4.

(a) Aircraft flight track and (b) wind speed measurements for the GPS surface reflection technique made during a NOAA P-3 Orion Post-Landfall Mission in Hurricane Dennis (10 July 2005). Figure 4a shows flight segments with retrieved wind speed greater than 5 m/s as shown in Figure 4b. The aircraft flight path takes in overland segments with dropsonde deployments. Lack of open water makes GPS retrievals impossible, and such segments must be excluded.

7.2 Inside Storm Eye Exclusion

Another limitation for the GPS surface reflection technique is the characteristic of tropical cyclone eyes having a zero or nearly zero wind speed, while at the same time the ocean surface remains agitated. This disturbed sea still exists in the eye due to waves propagated from the nearby eye-wall and inner core storm circulation. Because the GPS reflection method operates on the basis of surface roughness, the technique does not produce a zero wind speed within the storm eye. A typical example of such an occurrence is shown in Figure 5 in which a NOAA P-3 Orion transect was flown through a hurricane and a GPS dropsonde was deployed in the eye. In this relatively calm region of the storm, the GPS dropsonde reported a 1–2 m s−1 wind speed, while the GPS surface reflection technique retrieval reported over 10 m s−1. Although this disparity between surface wind speeds derived from ocean surface roughness and the true surface wind speed is common in the tropical cyclone eye region, no similar effect in “straight-line” winds has been found in data sets from frontal boundaries, etc. The tropical cyclone eye region can be identified directly from the aircraft position, flight level winds, and atmospheric pressure measurements or indirectly by noting occurrences of high GPS reflection retrieved wind speeds and a very low GPS dropsonde surface wind speed.

Figure 5.

Plot of (black dots) GPS dropsonde 10 m wind speeds (m s−1) and (red curve) GPS surface reflection technique wind speeds versus time during a NOAA P-3 Orion mission in Hurricane Ophelia (11 September 2005). The near-zero surface winds shown by the GPS dropsonde at t = 68,800 indicates when the aircraft was in the eye of the storm. Note that the GPS surface reflection technique wind speeds do not drop below ~12 m s−1 during this time.

Fetch effects in the data can sometimes be identified by the opposite case of high GPS dropsonde winds and low or no wind speed from GPS surface reflection technique retrievals. In these situations, the ocean surface roughness is relatively low and does not accurately reflect the surface wind speed.

7.3 GPS Dropsonde Drift

After release, GPS dropsondes become entrained in the wind field through which they descend. Since the fall may take several minutes through a 20–70 m s−1 cyclonic wind field, the spacing between the GPS dropsonde launch and splash positions can be 20 km or more. Since the GPS reflection comes from fairly close to directly below the aircraft, considerable spatial separation between the two measurements may occur between splash-down and GPS data acquisition. The entrainment of the dropsonde in the circulating winds might be expected to help ameliorate the possible differences between winds at the dropsonde launch point (where the GPS surface reflection technique data are taken) and GPS dropsonde winds measured at the splash point. The GPS surface reflection technique has also been applied to high altitude aircraft (e.g., the NOAA G-IV jet operating at a flight-level of 12–14 km). An example from a Hurricane Charley circumnavigation is given with GPS dropsonde locations in Figure 6.

Figure 6.

Example highlighting the possibility of GPS dropsonde drift on comparisons with GPS surface reflection technique wind speed retrievals. (a) GPS dropsondes may require several minutes to splash-down, while the (b) GPS surface reflection technique recorded data from a slightly different area of the ocean surface. Close examination of Figure 6a shows, even at this scale, significant deviation of the dropsonde splash point from the aircraft flight path and likely location of the GPS data acquisition point. Figure 6b vertical scale is in meters per second.

7.4 Elevation Angle Effects

The bivariate nature of the slope probability density discussed earlier results in a variation of the model waveform as a function of angle between satellite and wind direction. At vertical incidence, this anisotropy disappears due to symmetry. As a consequence, the calibration obtained in 2006 was based on limiting wind speed retrievals to elevation angles greater than 60°. The high-signal-to-noise-ratio mode of receiver operation selects the highest elevation angle satellite that is rising for data acquisition. This ensures that the wind speed retrievals are less affected by elevation angle, but some influences may still remain.

The lower signal-to-noise-ratio multisatellite mode allows for as many as six satellites to be tracked with various elevation angles for each reflected satellite data set. The various elevation angles are a natural consequence of GPS position determination, since the satellites are distributed in azimuth and elevation to permit high accuracy location results.

The lower elevation satellite retrievals sometimes show what appears to be azimuth anisotropy particularly at higher winds as can be seen in Figure 7. While this anisotropy may be related to wind direction [see, e.g., Komjathy et al., 2004; Cardellach and Rius, 2008], a wind direction effect has not been quantified for use in wind speed retrievals. Consequently, the lower elevation satellites (elevation angle less than 60°) cannot be used for quantitative comparisons. Moreover, these satellites tend to have lower signal-to-noise-ratio (from antenna pattern) and are less desirable for comparisons with dropsondes.

Figure 7.

As elevation angle decreases for GPS satellites, elevation angle effects can appear. These effects, believed to be related to wind direction, are not included in the calibration curve. Illustrated is a transect through Hurricane Earl using retrievals from two GPS satellites at different azimuths but similar elevation angles. A calibration for this effect has not been done, so only satellites with elevation angles above 60° are used.

Therefore, the elevation angle limit below which the data is not used in the comparisons was set at 60°, consistent with the equivalent limit used for the COAMPS calibration.

7.5 Doppler Effects

There is the possibility of a Doppler effect as well. At the specular point, the Doppler shift from the satellite is identical to that in the direct signal, as can be seen from the assumed plane surface and reflection geometry. For signal emanating from surface points away from the specular point, a Doppler shift error develops. The 1 ms integration time can be thought of as a band pass filter of 1 kHz, so as long as all important components of signal from the surface produce Doppler shifts significantly lower than 1 kHz, the shift can be ignored. For a P-3 (at 200 knots or approximately 100 m/s) and tropical storm winds, the Doppler shift would be on the order of ±100 Hz at an outer range bin, while for the Gulfstream IV (aircraft speed of 400 knots, approximately 200 m/s) the effect would be considerably less, since that aircraft does not fly into the tropical storms. Consequently, for these data sets, Doppler effects from aircraft motion can be ignored.

8 Results

This study presents results from both unedited and manually edited GPS surface reflection technique data. Manually edited refers to removing cases of low GPS dropsonde wind speeds paired with abnormally high GPS surface reflection technique winds, which are not plotted or analyzed. Editing is also used to remove retrievals done with satellites whose elevation angles are too low for use of the calibration curve, as well as retrievals done when the aircraft deployed a dropsonde over land. Since the lower signal-to-noise code used on some of the flights allows lower elevation angles than the alternative code, considerable attrition occurs with these retrievals.

The dropsondes selected for comparison were those that had undergone quality control and were reported in the standard Hurricane Research Division TEMPDROP (or, equivalently, HSA) file formats. A total of 431 dropsondes were found to have been released at the same time GPS reflection data were acquired using the high signal-to-noise-ratio single satellite receiver mode.

For the lower signal-to-noise-ratio receiver mode of operation, up to six satellites could be in view at the same time. In the cases here, 147 dropsondes were deployed during operation of this multisatellite mode, and 794 total observation-retrieval pairs were generated.

Without any selection or editing for elevation angle, over-land data, or data taken in the storm eye, the total data set from the storm penetrations amounted to 1225 retrievals with an associated dropsonde for comparison.

The retrievals are then filtered to eliminate those below 60° elevation angle, leaving a total of 296 meeting this criterion for the high signal-to-noise-ratio receiver mode and 174 meeting the same criterion for the other mode. The final total of 470 dropsonde-GPS observation-retrieval pairs constitutes the basic edited data set. The large attrition of observation-retrieval pairs is mostly a result of the limits on the low signal-to-noise acquired raw data. While operating in the "low signal-to-noise mode," the receiver code frequently records data from six satellites. Only one or two satellites will be at elevation angles greater than 60°, so most fail to meet this criterion and are rejected.
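The editing criteria reduce to a simple filter over the retrieval-dropsonde pairs, sketched below with hypothetical field names.

```python
def edit_pairs(pairs, min_elevation_deg=60.0):
    """Apply the editing criteria of sections 7 and 8 to retrieval-dropsonde pairs.

    Each pair is a dict with hypothetical keys: 'elevation_deg' (GPS satellite
    elevation), 'over_land' (dropsonde fell over land), and 'in_eye' (dropsonde
    deployed inside the storm eye).
    """
    return [p for p in pairs
            if p["elevation_deg"] > min_elevation_deg
            and not p["over_land"]
            and not p["in_eye"]]

# Illustrative use with made-up entries
sample = [
    {"elevation_deg": 72.0, "over_land": False, "in_eye": False},  # kept
    {"elevation_deg": 45.0, "over_land": False, "in_eye": False},  # rejected: low elevation
    {"elevation_deg": 80.0, "over_land": False, "in_eye": True},   # rejected: storm eye
]
print(len(edit_pairs(sample)))  # 1
```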

The complete data set for the high signal-to-noise, single satellite mode is shown in Figure 8 without any editing for elevation angle, over-land data, or deployment of a dropsonde in the storm eye. With no filtering at all, the standard deviation from the linear fit is 8.198 m/s. If only the satellites with elevation angles greater than 60° are used, the standard deviation improves to 5.89 m/s. This latter number indicates that the COAMPS-derived calibration is still useable to some extent below the limits for which it was developed.

Figure 8.

High signal-to-noise ratio receiver mode. (a) No editing, comparison of GPS wind speed retrievals with dropsondes from all missions. (b) Result of manual selection to remove over-land dropsondes and those that land inside the storm eye. For both cases, the red line is the unity slope curve for reference, while the blue line is the regression line.

With removal of satellites below 60° elevation, over-land dropsondes, and dropsondes in the storm eye, the standard deviation improves to 4.905 m/s for the remaining 276 retrieval-dropsonde pairs.

For the results using the lower signal-to-noise code, shown in Figure 9, the performance is somewhat worse, commensurate with the expected signal-to-noise reduction. With no editing, the standard deviation is 10.07 m/s. The overwhelming majority of the 794 pairs have elevation angles below 60°. Removing those with elevation angles below 60° gives an improved standard deviation of 8.23 m/s for the remaining 174 pairs. Further eliminating over-land and within-the-eye dropsondes leaves 160 pairs and improves the standard deviation to 7.2 m/s.

Figure 9.

Lower signal-to-noise ratio receiver mode. (a) All retrievals without any editing. (b) Result of eliminating retrievals from satellites with elevation angles less than 60°, as well as manual selection to remove over-land and dropsondes that fall inside the storm eye. The red line is the unity slope curve for reference, while the blue line is the regression line.

9 Discussion

As expected, the multisatellite mode data did not perform as well as the single satellite mode data. For the fully edited data, the overall root-mean-square deviation from the linear fit is approximately 50% greater for the lower signal-to-noise multisatellite mode than for the higher signal-to-noise single satellite mode. Since the C/A code repeat time is fixed at 1 ms, the maximum available scans per second per bin is 1000. The lower SNR mode in these data sets came from scanning the delay bins as frequently as 70 times per second (with 14 delay steps), a result stemming from the necessity of having one channel in the receiver, with a single fixed correlator, assigned to each monitored satellite. The high SNR mode "stares" at the same delay bin and can consequently produce 1000 integrations per second. The square root of the ratio between the number of data values for each mode in 1 s is 3.74. The ratio between the standard deviations for the low SNR and high SNR modes, as shown in Table 1, is not this large and implies that the number of independent fading noise samples must be somewhat less than the 1000 possible in each second, consistent with correlation times found for surface features.

Table 1. Summary Comparison of All Data From High and Low Signal-to-Noise Code (a)

Receiver Mode             RMS (Raw, No Editing,   Number   RMS (Elevation   Number   RMS (Full Editing: No Land         Number
                          All Satellites)                  Angle >60°)               Falling, Elevation >60°, No Eye)
High SNR                  8.20 m/s                431      5.89 m/s         296      4.91 m/s                           276
Multi-sat (Lower SNR)     10.07 m/s               794      8.28 m/s         174      7.20 m/s                           160

(a) The entry labels are detailed in the text.
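The independent-sample argument can be checked with the numbers quoted above and in Table 1, treating the fully edited RMS values as noise-driven (an approximation, since dropsonde mismatch and other errors also contribute):

```python
import numpy as np

scans_per_second_low = 1000 / 14        # ~71 scans/s when stepping 14 delay bins
scans_per_second_high = 1000            # staring mode integrates every C/A period
ideal_ratio = np.sqrt(scans_per_second_high / scans_per_second_low)   # ~3.74

observed_ratio = 7.20 / 4.91            # fully edited RMS values from Table 1
implied_high_mode_samples = scans_per_second_low * observed_ratio**2
print(round(ideal_ratio, 2), round(observed_ratio, 2), round(implied_high_mode_samples))
# ~3.74 ideal versus ~1.47 observed; ~150 effective independent samples per second,
# consistent with the 120-160 surface decorrelation estimate in section 4.1
```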

In general, there is still what appears to be some underreporting of wind speed as evidenced by the lower than unity slope of the regression line. Some of this may be attributable to the different implementation of matched filter, because processing in the double model waveform (DMW) method tends not to capture the peak winds as well as the single model waveform (SMW) method. Other effects might come into play as well such as bathymetry, fetch, storm transition (growing or decaying), storm translation, etc., but the impact of these will require more research.

The choice of using only data with elevation angles greater than 60° was based on the necessity to match the same conditions used in the original calibration. The use of the high elevation angles was to eliminate errors due to anisotropy. As noted earlier, this anisotropy may be related to wind direction and is likely dependent on elevation angle and wind speed. Nevertheless, some effect of anisotropy must still exist in the retrievals.

Limiting the data sets to higher than 60° rapidly reduces the number of data points. On the other hand, limiting the satellites to high elevation angle seems justified by the generally good results using both the single satellite mode and the lower signal-to-noise receiver mode of operation as well as the different software implementations used for retrieval.

10 Conclusions

The results presented here indicate that reflected GPS signals can be used to determine ocean surface winds from near-zero wind speed to over 40 m s−1. Over a decade of data acquired from a modified GPS receiver flown into tropical cyclones by the P-3 Orion Hurricane Hunters of NOAA's Aircraft Operations Center have been analyzed to quantify the GPS surface reflection technique retrieval accuracy. It has been shown that the GPS surface reflection can be used to determine wind speeds in excess of 40 m s−1 with a root-mean-square accuracy of between 5 and 8 m s−1. This is likely to improve with the characterization of the wind speed anisotropies evident for satellites at elevation angles below zenith and with comparisons made at the actual locations where the GPS dropsondes reach the ocean surface.

Certain caveats were noted that should be observed when applying the GPS reflection technique, including the fact that the associated wind speed retrievals cannot be made over land, that residual surface roughness in the eye of tropical cyclones yields a nonzero wind speed retrieval where the surface winds are near calm, and that satellites with high elevation angles should be used for wind speed retrieval to avoid effects possibly resulting from wind direction. Moreover, effects of fetch near land masses can affect the accuracy of wind speed retrievals. Many of these limitations can be avoided by simple operational means, while some cannot.

While the GPS surface reflection technique is not meant as a replacement for alternative methods, the measurement of ocean surface roughness is an important geophysical parameter in itself. Furthermore, the continued development of GPS and other global navigation satellite systems makes the current performance likely to improve. Coupled with extremely low cost, mass, and power requirements, the small size makes the GPS surface reflection technique ideally suited to a broad spectrum of platforms, including the GPS dropsondes themselves as well as Unmanned Aircraft Systems.

Appendix A

This appendix summarizes some of the geometry and approximations on which the retrievals presented in this paper are based.

A coordinate system is shown in Figure A1 for use in defining the vectors and angles required to code into an algorithm.

Figure A1.

Surface coordinate system.

The various vectors and distances are given by

display math(A1)

where the two vectors in (A1) run from the GPS satellite to the surface and from the surface to the receiver, respectively, ρ is the distance from the surface reflecting point to the receiver, h is the height of the receiver over the surface, Xs is the surface distance from the specular point to the surface point below the receiver, and γ is the satellite elevation angle. Various slope vectors and their magnitudes can be calculated directly, but the complexity of the form is eased by realizing that only the ratios of the x and y components of the scattering vector to its z component are needed. Subtracting the incident vector from the scattered vector and separating the components gives

display math(A2)

for the tangents of the x and y slope angles, respectively, of the scattering vector with respect to the z-axis.

After some simple approximations, the tangent scattering vectors can be simplified to

display math(A3)

For the x-component and similarly for the y-component,

display math(A4)

The received signal can be expressed as an integral over the surface of the slope probability function and the lambda-squared function transferred to the surface, with a variable delay that comes from the adjustable delay of the reference code in the receiver [e.g., Katzberg and Garrison, 1996; Zavorotny and Voronovich, 2000]:

display math(A5)

where the integration is over all the surface and τd is the controlled delay in the receiver times the speed of light. While this appears to be a convolution, it is not. From the Beckmann development of the surface reflection geometry, the delay δ at any point x and y of the surface can be expressed as

display math(A6)

Note that this result reduces to 2h sin (γ), when x and y are zero and is the specular point delay referred to in the text. Solving for the locus of points for constant delay gives a family of ellipses with center

display math(A7)

The semimajor and semiminor axes are, respectively,

display math(A8)

For most applications, the effect of the squared delay can be ignored and the semimajor and semiminor axes reduce to

display math(A9)

Using a bivariate Gaussian as representative of the core of the Cox and Munk relationship gives the following integral to be evaluated for the signal as a function of delay, δ:

display math(A10)

The integration over the surface can be simplified by a change of variables for x to x′ = x ⋅ sin(γ). This converts the ellipses of constant delay to circles, allowing the differential area to be written as r dr dφ. The terms tan²βx and tan²βy then become functions of the scaled coordinates. The differential area now takes the form (with C the speed of light):

display math(A11)

It can be seen that the integral over the surface can be described as a linear relationship between area and time delay, τ.

Moreover, the argument of the slope probability density is now proportional to r² = 2hCτ/sin(γ), making the exponential linear in τ. This means that the expression for power captured from the surface is converted into a simple exponential after integration over azimuth angle.

Azimuth effects can be expressed in a form that illustrates why data from high elevation angle satellites are relatively immune to wind direction and what form the effective mean squared slope takes. Changing the exponential into a function of the polar coordinates gives the form:

display math(A12)

A factor that completes the square in the argument of the exponential can now be added and subtracted, allowing the exponential to be separated into

display math(A13)

This can be expanded in a power series and integrated or it can be recognized that the azimuth integration will yield a modified Bessel function of the first kind in the (squared) radial variable [Beckmann and Spizzichino, 1963, Appendix E].

Taking the exponential and the first term of the expansion or the first terms in the modified Bessel function, it can be seen that the integral over the surface of the probability density function and Lambda-squared function will also be in terms of an exponential, linear in delay, multiplied by higher-order terms in delay:

display math(A14)

where

display math(A15)

where now inline image. At high elevation angles, this can be reinserted into the exponential to give an effective inverse mean squared slope of inline image.

Finally, the “samples” at each range delay can be converted into samples at each delay time by using inline image and expressing the integral as follows:

display math(A16)

This shows that the slope probability density takes the form of an exponential function of time (or code) delay whose effective time decay constant is directly related to the effective mean square slope, the height of the receiver, the speed of light, and inversely to the sine of the elevation angle.

display math(A17)

This result will be used in Appendix B in modeling the functions used to retrieve wind speed and the effect of noise on the retrieval.

Appendix B

The process of recording the surface reflected power in one-half code chip steps and then cross-correlating these samples with samples from a family of model waveforms to determine wind speed is equivalent to matched filter processing. The matched filter is a signal processing technique that maximizes the signal-to-noise ratio obtainable from linear processing of a known signal contaminated by a noise process, usually but not necessarily assumed to be white [see Carlson, 1968, pp. 406–409]. The matched filter concept can be understood with reference to the Schwarz inequality for real functions f and g:

display math(B1)

The bound is met with equality only if the two functions match to within a constant factor, giving a maximum which cannot be exceeded. At the same time, the noise is not affected in the same way, thus yielding a maximum signal-to-noise ratio for the “matched” waveform.

In the case here, model waveforms expressed in terms of delay time are created and then cross-correlated with the acquired data, contaminated by noise. The acquisition of the GPS signal related to power reflected from the surface represents an integration of signal in range bins. These range bins have a sensitivity profile created by the cross correlation and then squaring of the pseudo-noise code modulation function. As described earlier in the text, this process is equivalent to passing the signal through a smoothing or “instrument” (sometimes referred to as an “aperture”) function. (The model waveforms are also the result of passing an ideal spdf through the same instrument function to ensure equivalent processing.) The instrument function is called a “Lambda-squared” function because of its relationship to a squared triangle waveform, with chip size Tc, and delay parameter t defined as

display math(B2)
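A minimal sketch of one plausible implementation of this instrument function follows, assuming the conventional triangular C/A-code correlation of full width 2Tc; the exact normalization of equation (B2) is not reproduced here.

import numpy as np

def lambda_squared(t, Tc=1.0 / 1.023e6):
    # Squared triangular code-correlation ("Lambda-squared") function with chip
    # length Tc (seconds); zero outside |t| < Tc.
    tri = np.clip(1.0 - np.abs(t) / Tc, 0.0, None)
    return tri ** 2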

The desired conditions are (1) that the smoothing caused by the instrument function does not appreciably modify the form of the slope probability density, and (2) that the reflected signal is recorded in delay steps fine enough that the surface slope probability density is sampled at the Nyquist rate [Carlson, 1968].

When considered from the point of view of the Fourier transform, the instrument function must not appreciably distort the spdf low-frequency components, while the sampling density must be sufficient to avoid higher-order harmonics that result from the sampling process. A sufficiently sampled slope probability density coupled with a narrow aperture function will accomplish this.

The Lambda-squared function has a full-width-at-half-maximum of 1.082Tc, so the first zero of its Fourier transform is at approximately 1/Tc.

An example typical of the retrievals is shown in Figure B1a, in which the “decay constant” of a slope probability function associated with a wind speed of 30 m/s and a platform altitude of 5 km is two and one-half times the sampling interval. As can be seen in Figure B1a, the effect of the “aperture function” on the low frequency components of the spdf Fourier transform is small. The major effect of the “decay constant” is in the lower frequency portion of the Fourier transform, and the effect on the higher frequencies shows up as “ripples” in a reconstructed waveform.

Figure B1. (a) Illustration of the effect of using a sinc function to reconstruct the continuous function from the received power versus delay samples. The power versus delay “time constant” is two and one-half times the half-code chip step interval. (b) Illustration of the effect of using a simple interpolation function instead of a sinc function to reconstruct the continuous function from the received power versus delay samples.

From sampling theory, the reconstruction of the original spdf can be done by convolving the individual sequential values with an interpolation function. While the ideal interpolation function is sin(x)/x (the “sinc” function), such a function cannot actually be realized because it has significant values that extend over an infinite range of x-values.

A simpler interpolation function is a constant equal to the sample value at that point, extended half the interval toward the previous and the next samples: a “block-like” interpolation function. Such a function exists over one half the delay sampling interval, Tc, with peak value 1/Tc, so the individual blocks are nonoverlapping. The signal contribution then comes from the center data sample multiplied by the area of the interpolation (block) function, which is unity.

Since such a function is half the width of the sampling interval, its Fourier transform will have its first zero near that of the sinc transform or 1/Tc. In the transform domain, the effect is to multiply the signal transform by the interpolation transform, a sin(x)/x. As can be seen from Figure B1b, this additional filtering has a small effect on the lower frequency portion of the data spectrum.
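In the spirit of Figure B1, the following sketch compares an ideal sinc reconstruction with the simple block (zero-order hold) reconstruction just described; the decay constant, sample step, and grid are arbitrary illustrative choices rather than values from the paper.

import numpy as np

dt = 0.5           # sample step (half code chips, arbitrary units)
tau = 2.5 * dt     # decay constant: 2.5 sample steps, as in the example above
t_samp = np.arange(0.0, 20.0 * dt, dt)
samples = np.exp(-t_samp / tau)

t_fine = np.linspace(0.0, t_samp[-1], 2000)

# Ideal band-limited reconstruction: sum of shifted sinc kernels.
sinc_recon = sum(s * np.sinc((t_fine - tk) / dt) for s, tk in zip(samples, t_samp))

# Block ("zero-order hold") reconstruction: nearest-sample value.
block_recon = samples[np.clip(np.round(t_fine / dt).astype(int), 0, len(samples) - 1)]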

As far as noise is concerned, the down-conversion and multiplication of the broad-band receiver noise by any particular PRN code yields a noise power spectral density that is modified by the Fourier transform of the Lambda-squared function in the same way as the signal. It should be noted that the summation of the noise process over the 1023 code chips (1.0 ms) has the effect of passing this noise through a 1 kHz filter and sets the fundamental kT noise power spectral density for the measurement.

In summary, the GPS reflection data, a representation of the surface slope probability density function, are modified by a smoothing or instrument function and then sampled. The instrument function is sufficiently narrow that the important lower frequency components of the spdf are not appreciably modified. Taken with the sampling interval, the spdf's characteristic of aircraft altitudes and of the mean-square slopes associated with wind speeds from a few meters per second and above are sampled close to the Nyquist rate for reconstruction. Therefore, the conditions required for a “matched filter” approach exist, justifying the approach to wind speed retrievals presented in this paper.

Appendix C

The retrieval of wind speed from over-water measurements of GPS power versus delay has been done using a matched filter approach. As noted in Appendix B, a matched filter produces the highest peak signal-to-r.m.s.-noise ratio possible for linear processing. In this processing method, a (likely scaled) version of a transmitted pulse shape plus additive noise is received and cross-correlated with a replica of the transmitted pulse. The data are in the form of samples of the reflected power from the surface taken at sequential steps in the commanded delay of the internally generated pseudo-noise code. The cross correlation between the data samples and each of a set of idealized replicas, one for each wind speed, is squared and scaled by the energy in each of the two “pulses” (the data and the idealized replica). At perfect overlap and perfect pulse match, the resulting peak is greater than for any other pulse shape, and when scaled by energy the noiseless ratio is less than or equal to one. In the case of GPS-based wind speed retrievals, the form of this ratio can be expressed as follows:

display math(C1)

where D is the unknown power-versus-delay waveform from the data, R′ is the reference test waveform, τ is the code delay, and inline image is the fluctuating term resulting from squaring the raw noise voltage received from the downward looking antenna.
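A minimal sketch of the ratio in equation (C1) follows. The model waveforms are taken to be bare exponentials in delay, the Lambda-squared smoothing is omitted for brevity, and the mapping decay_constant(w) from wind speed to decay constant is a hypothetical placeholder for the Appendix A relationships rather than the paper's calibration.

import numpy as np

def matched_filter_ratio(data, model):
    # Energy-normalized squared cross correlation of two sampled waveforms,
    # as in equation (C1) without the noise term.
    return np.dot(data, model) ** 2 / (np.dot(data, data) * np.dot(model, model))

def best_wind_speed(data, delays, wind_speeds, decay_constant):
    # Return the candidate wind speed whose model waveform best matches the data.
    # In practice each model would first be smoothed by the Lambda-squared
    # instrument function; the bare exponential is used here for brevity.
    ratios = [matched_filter_ratio(data, np.exp(-delays / decay_constant(w)))
              for w in wind_speeds]
    return wind_speeds[int(np.argmax(ratios))]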

Expanding the reciprocal of the denominator, multiplying by the numerator, and again retaining only terms to first order in noise gives

display math(C2)

Note that the deviation from perfect match appears in the first term, strictly from the difference between the two functions, while the effect of noise arises not only from the noise but also from the mismatch between reference (test) and unknown function. The noise terms tend to zero as the unknown and reference functions become similar to each other.

The noise term for the test function represents the result of passing the noise through the test function as the impulse response of a processing filter (actually, with a negative time argument; see Carlson [1968]). In actual implementation, the integral is replaced by a sum of samples of the test function multiplied by the noise at each recorded sample.

Using the results of Appendix A, the form of the idealized power versus delay and hence the model waveforms is that of an exponential, decaying with delay and having an effective “time constant” defined by

display math(C3)

The simple form of the idealized surface power versus delay makes it possible to evaluate the noise effects in a reasonably simple fashion.

Further simplification comes from the close approximation of the power versus delay by a simple exponential function of delay (with parameters related to altitude, elevation angle, and slope probability density) and using the following:

display math(C4)

The noise samples are the result of squaring the incoming signal plus noise to produce the power versus delay. The input noise itself is a Gaussian-distributed process, so when squared, a non-central chi-square distribution results. The average value of the squared noise is removed from the data, leaving the zero-mean chi-square fluctuation. The noise values at the samples are independent random variables.

The factors affecting the retrieval noise are (1) the “noiseless” ratio of the reference and actual ideal waveforms, (2) the “DC” term of time-averaged squared kT noise passed through the matching waveform, and (3) the difference between the noise processed through the test filter function and the actual reflected waveform.

display math(C5)
display math(C6)

K is the amplitude of the data waveform after all down-conversion and squaring. It is “how much larger” the true waveform is than other factors such as the “DC” noise level. Ri is the test waveform and Di is the idealized data waveform without fading noise. Note that in the last equation, the “DC” noise term has been removed, which is done before the matched filter cross correlation.

The summations on the left side of the inequality are themselves random variables. In particular, they are weighted sums of two noise components: (1) squared Gaussian-distributed thermal noise samples from identical processes and (2) noise from the squaring of the composite surface random-phase sinusoids (fading noise), whose power is characterized by an exponential density [Papoulis, 1965]. These two processes are independent: the kT noise is independent at each 1 ms sample, while the fading noise remains correlated over longer intervals because the surface evolves slowly relative to 1 ms. Samples from the two noise processes can be combined into one sum of weighted squared random variables (r.v.'s).
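As a toy illustration of these two noise populations, the sketch below draws squared-Gaussian (kT) samples that are independent every 1 ms and exponentially distributed fading-power samples held correlated over M consecutive samples; the scales and the values of N and M are arbitrary choices, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 10                                   # 1 ms samples; fading correlation length
kt_power = rng.standard_normal(N) ** 2            # squared Gaussian (kT) noise samples
fading_power = np.repeat(rng.exponential(1.0, N // M), M)  # block-correlated fading power
combined = kt_power + fading_power                # weighted sums per delay bin would follow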

The number of terms in the complete evaluation of the probability density of the sum grows rapidly once a few tens of data samples are added and is completely unwieldy at hundreds of samples of both the fading and kT noise. Use can therefore be made of the Tchebycheff inequality to bound the probability for the sum in terms of just the means and variances.

The probability that a random variable with arbitrary density, mean η, and variance σ² lies more than a given distance from its mean η is bounded by

display math(C7)

These means and variances can be obtained from the first and second derivatives of the known characteristic functions for the two noise densities (kT and fading):

display math(C8)
display math(C9)

In the case here, the mean and standard deviation of the summed noise processes can be found reasonably easily from the characteristic functions of the fading and kT noise:

display math(C10)

The factor N is the number of samples being averaged, M is the number of correlated samples from the fading noise (1 ≤ M ≤ N), ΔTc is the separation between waveform samples, σ² is the variance of the kT noise from the RF chain, and the function of τ inside the parentheses involves the test and actual “time constants” of the data. Note that the mean is removed before the actual processing for “best match.”

The mean and standard deviation (S.D.) of this processed noise approach zero (to first order) as the two waveforms become matched; at a match, the processing essentially subtracts two identically processed replicas from one another. The variance also decreases as more samples are averaged, making the standard deviation fall as the square root of the number of samples. Moreover, the effect of multiple “looks” at the data appears through the factor ΔTc/τ′, the reciprocal of the number of sample steps per decay constant. The signal mean value does not change with averaging. Consequently, the signal-to-noise ratio increases with both the number of repeated measurements and the number of samples per data decay constant. If the fading noise remains correlated over a number of samples before independence can be assumed, then the SNR increases only as the square root of the ratio of N to the number of correlated samples.

From the expansion of the matched filter process, equation (C6), the comparison of the error to the matched filter factor incorporates the signal-to-noise ratio by means of the factor K. This factor can be interpreted as how much greater the signal is than the noise standard deviation, since it represents the scale factor that converts the true model waveform into the detected signal with additive squared kT noise. When divided by the standard deviation of the kT component of the noise, K is the non-fading white-noise signal-to-noise ratio. For this reason K may be identified with inline image.

To estimate the wind speed error from these noise sources, the degree to which the noise exceeds the factor on the right-hand side above must be determined. Since the standard deviation of the noise is known, the probability of exceeding the right-hand side can be expressed as the probability that the summed noise exceeds some factor k times its standard deviation.

From the Tchebycheff result, the factor k can be identified with the ratio

display math(C11)

The probability that the weighted noise is greater than k ⋅ S.D. is at most 1/k². The probability densities for the summations at each sample point are convolutions of gamma and exponential densities and are unimodal. In that case, the Tchebycheff bound can be refined further by using the Vysochanskii-Petunin inequality, which gives the relevant probability to be less than or equal to 4/9 times the Tchebycheff bound.
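For reference, a small sketch of the two bounds quoted above; the Vysochanskii-Petunin form assumes a unimodal density and is usually stated for k greater than √(8/3).

def tchebycheff_bound(k):
    # P(|X - mean| >= k * standard deviation) <= 1/k**2 for any distribution.
    return 1.0 / k ** 2

def vysochanskii_petunin_bound(k):
    # Tighter bound for unimodal densities: 4/9 of the Tchebycheff bound.
    return 4.0 / (9.0 * k ** 2)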

Acknowledgments

The work reported in this paper could not have been done without the assistance and cooperation of the NOAA Aircraft Operation Center at MacDill AFB, Florida. The flight crews, engineers, technicians, and scientists there made a potentially complicated interface easy.
