Inverse filtering of radar signals using compressed sensing with application to meteors

Authors


Corresponding author: R. Volz, Department of Aeronautics and Astronautics, Stanford University, 496 Lomita Mall, Stanford, CA 94305, USA. (rvolz@stanford.edu)

Abstract

[1] Compressed sensing, a method which relies on sparsity to reconstruct signals with relatively few measurements, provides a new approach to processing radar signals that is ideally suited to detailed imaging and identification of multiple targets. In this paper, we extend previously published theoretical work by investigating the practical problems associated with this approach. In deriving a discrete linear radar model that is suitable for compressed sensing, we discuss what the discrete model can tell us about continuously defined targets and show how sparsity in the latter translates to sparsity in the former. We provide details about how this problem can be solved when using large data sets. Through comparisons with matched filter processing, we validate our compressed sensing technique and demonstrate its application to meteors, where it has the potential to answer open questions about processes like fragmentation and flares. At the cost of computational complexity and an assumption of target sparsity, the benefits over pulse compression using a matched filter include no filtering sidelobes, noise removal, and higher possible range and Doppler frequency resolution.

1. Introduction

[2] Compressed sensing [Donoho, 2006] is a new data acquisition and processing technique that leverages sparsity in the signal being measured in order to reduce the number of measurements needed to accurately reconstruct the signal. Many signals of interest are sparse or compressible and can be well-approximated by a relatively small amount of information when compared to their “raw” form. The current approach in many fields is to sample the data in its raw form and then compress and store it. Often it is only the “useful”, compressed information that was desired in the first place. Compressed sensing allows one to skip the inefficient raw sampling step and instead acquire an entire signal with an amount of information proportional to the signal's compressed representation.

[3] Because radar signals are quite recognizably sparse in range and frequency, with typically few targets of interest within range, radar is a natural fit for compressed sensing. The role of sparsity in radar signal processing, and how compressed sensing techniques relate to established processing methods, is discussed by Potter et al. [2010] with an emphasis on synthetic aperture radar. The potential for compressed sensing to reduce radar hardware complexity and cost is noted by Baraniuk and Steeghs [2007] and Ender [2010], while Herman and Strohmer [2009] explore the use of compressed sensing for increased target detection resolution. It is this latter use that we are most interested in as a way to increase the resolution of meteor measurements made with high-power large-aperture (HPLA) radars.

[4] When a meteoroid enters the Earth's atmosphere, it collides with air molecules and heats up, causing ablation. This results in the formation of a plasma, called a meteor, which we can measure with an HPLA radar due to electromagnetic scattering. The plasma surrounding the meteoroid is called a meteor head, while the plasma that is left behind is called a meteor trail. Trails are classified as specular if the scattering occurs from a trail that is perpendicular to the radar beam and as nonspecular otherwise, the latter normally occurring when the radar beam is nearly perpendicular to the Earth's magnetic field. Unfortunately, the evolution of the plasma and the nature of the scattering from both the head and trail are not well understood and depend on the density of plasma and its orientation with respect to the background magnetic field [Close et al., 2004]. Often the meteor head, assumed to be small relative to the range resolution of the radar, is treated as a point scatterer, but this will not suffice for elucidating the more complicated aspects of meteor head echoes. Since in reality a plasma is a distributed collection of charged particles, we require a measurement method that is suitable for range- and Doppler-spread targets and allows for high range resolution imaging; our compressed sensing technique provides such a method.

[5] Compared to the existing radar signal processing literature, compressed sensing with a discrete radar model is most closely related to the amplitude domain analysis of Vierinen et al. [2008] and the inversion filters (also known as mismatched filters or zero sidelobe filters) of Lehtinen et al. [2009], Damtie et al. [2008], and others. The amplitude domain method of Vierinen et al. [2008] uses a range sparsity assumption like our compressed sensing method, but it requires the user to manually specify the target model with respect to range before applying the inversion procedure whereas compressed sensing is fully automated. Inversion filters are simpler to implement and provide unbiased signal estimates for range-spread targets, but they require the use of specific transmission codes (so-called perfect codes) to achieve peak sensitivity. At the cost of some computational complexity and an assumption of range-Doppler target sparsity, compressed sensing combines many of the strengths of these existing techniques into a general approach suitable for a wide class of transmission waveforms. With respect to the standard of pulse compression using a matched filter, signal processing benefits include no filtering sidelobes, noise removal, and high range and Doppler frequency resolution not directly constrained by the sampling rate or pulse length. In terms of applications, this makes compressed sensing ideal for both detailed imaging of localized (but still possibly range- and Doppler-spread) targets and identification of multiple targets that are closely spaced in range and/or range rate. Provided that amplitude domain voltage data (as opposed to correlated and integrated data) is available for post-processing, no hardware upgrades are required to take advantage of compressed sensing, and as our examples show it is even possible in some cases to reprocess existing data and gain new insights.

[6] Applying compressed sensing requires a suitable radar model. A common approach, and one that we follow, is to discretize the target reflectivity in a joint time delay (or range) and Doppler frequency shift space. That is, we represent the received signal by a linear function of reflectivity coefficients, where each coefficient multiplies a time delayed and Doppler-shifted version of the transmitted signal. Thus, the discrete signal is expressed in terms of a Gabor frame, a model which is efficient to compute and is compatible with the framework of compressed sensing.

[7] Similar models for radar [Herman and Strohmer, 2009] and communication channels [Bajwa et al., 2008] have been used previously with compressed sensing. With the same goal of high resolution radar, Herman and Strohmer [2009] investigate the use of Alltop sequences as compressed sensing radar waveforms. For their model, they find that range and Doppler frequency resolution depend on the inverse of pulse waveform bandwidth and total sampling time, respectively. They also prove an upper bound on the target sparsity s for which a solution is guaranteed with high probability and provide simulation results that indicate that the proven bound can be relaxed to s ≤ m/(2 log m), where m is the number of measurements. The development and results of Bajwa et al. [2008] proceed in much the same manner, except for the use of spread spectrum waveforms and the application to communication channels.

[8] Both prior works provide a good foundation for using compressed sensing with radar from a theoretical perspective. What they lack are answers to more practical questions: How does the discrete model, essentially assuming point targets at very specific ranges and Doppler shifts, relate to a continuous radar model that allows distributed targets at arbitrary locations in the range-Doppler space? How well does the technique work on real data which inevitably includes effects not present in the model? How can one implement the technique efficiently and with possibly large data sets? These are the questions that we set out to address in this paper.

[9] Our development of a radar compressed sensing method begins with the derivation of a discrete linear radar model from a continuous one. From this, we find that solving using the discrete model gives an approximate lower bound on the total target reflectivity contained in a range-Doppler window. The resolution of this window is determined by the pulse waveform bandwidth and the choice of Doppler discretization, the latter being limited only by the number of measurements through a compressed sensing solution condition. We then describe how to implement our approach, solving for the target reflectivity using the large-scale optimization software TFOCS (Templates for First-Order Conic Solvers) [Becker et al., 2011a]. Finally, we apply the method to ionospheric plasma data taken with the Poker Flat Incoherent Scatter Radar and find that the solution agrees with that of a matched filter, validating the compressed sensing approach in a practical setting.

2. Compressed Sensing Overview

[10] Under the standard framework for compressed sensing, we seek to determine a vector signal math formula using m linear measurements with m ≪ n. Letting math formula denote the measurement vector, we can in general write its entries as an inner product yk = 〈f, ϕk〉 for k = 1, …, m for some math formula. In matrix notation, we wish to solve for f in y = Φf, where Φ is the m × n matrix whose rows are given by the ϕk. Without further assumptions, this problem is ill-posed since ordinarily we would require m ≥ n measurements to reconstruct f. A simple typical case would be where Φ is the identity matrix and the measurements yk are just the individual entries of the signal f. The surprising result of compressed sensing is that, under achievable conditions, the underdetermined problem with m ≪ n is solvable.

[11] The first condition is compressibility or sparsity of the signal, which is a requirement that the signal be well-represented by a relatively small number of coefficients corresponding to elements in some dictionary. Given an orthonormal basis math formula, we can write f as math formula, where the coefficients xk = 〈f, ψk〉 are given by the inner product between the signal and each of the basis vectors. We call f compressible if there is some orthonormal basis such that f ≈ fs = Ψxs, where Ψ is the matrix whose columns are the basis vectors ψk, xs is the vector of coefficients x with all but the largest s entries set to zero, and s ≪ n. As the success of lossy compression schemes demonstrates, many signals of interest satisfy this condition.
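As a concrete, if minimal, illustration of this definition, the following Python sketch forms the best s-term approximation of a coefficient vector and synthesizes the corresponding signal in the DFT basis. The specific signal, basis, and sparsity level are illustrative choices, not taken from the paper.

```python
import numpy as np

def best_s_term(x, s):
    """Set all but the s largest-magnitude entries of x to zero (the vector x_s)."""
    xs = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    xs[keep] = x[keep]
    return xs

# A signal that is sparse in the DFT basis: f = Psi x with only 3 nonzero coefficients
n = 256
x = np.zeros(n, dtype=complex)
x[[3, 17, 40]] = [5.0, -2.0 + 1.0j, 0.5]
f = np.fft.ifft(x)                       # synthesis in the (inverse) DFT basis
f_s = np.fft.ifft(best_s_term(x, 3))     # best 3-term approximation f_s
print(np.linalg.norm(f - f_s))           # ~0: f is exactly 3-sparse in this basis
```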

[12] The second condition is called incoherent sampling and pertains to the measurements of the signal. We restrict our attention to measurements given by the orthonormal basis math formula. Given an orthonormal basis math formula, define the coherence between Φ and Ψ as

display math

This definition for the coherence comes from Candès and Romberg [2007], but μ can also be defined probabilistically or for linear measurements that are not given by an orthonormal basis [see Candès and Plan, 2011]. It can be shown that μ ranges between 1 and n. The incoherent sampling requirement is met when the coherence of the measurement set, Φ, and the basis with which f is compressible, Ψ, is close to 1. Conceptually, this condition ensures that the measurements are global, in the sense that each measurement contains information about almost every coefficient of the signal in the sparsity basis.
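Because the coherence definition of equation (1) is not reproduced here, the sketch below assumes the normalization μ(Φ, Ψ) = n·max|〈ϕk, ψj〉|², which is consistent with the stated range of 1 to n; spikes versus sinusoids are the canonical maximally incoherent pair.

```python
import numpy as np

def coherence(Phi, Psi):
    """Coherence between two orthonormal bases whose columns are the basis vectors.

    Assumed normalization: mu = n * max_{k,j} |<phi_k, psi_j>|^2, which runs from
    1 (maximally incoherent) to n (maximally coherent), matching the range in the text.
    """
    n = Phi.shape[0]
    G = Phi.conj().T @ Psi                     # matrix of all pairwise inner products
    return n * np.max(np.abs(G)) ** 2

n = 64
identity = np.eye(n)                           # "spike" basis
fourier = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal DFT basis
print(coherence(identity, fourier))            # ~1: maximally incoherent pair
print(coherence(identity, identity))           # n: a basis is fully coherent with itself
```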

[13] If the signal is compressible and incoherent sampling is performed, then essentially we know that each measurement contains a contribution from each of the s coefficients. Intuitively, we might then expect to be able to reconstruct the signal from s measurements. This idea is made concrete with the incoherent sampling theorem presented below in the case of reconstruction with the Dantzig selector. No matter the specific setting, the principle of this theorem remains the same: if the signal is compressible with s coefficients and the sampling is incoherent, we can reconstruct the signal to within noise and approximation errors with a small constant times s log n measurements by solving a convex optimization problem.

[14] Of the compressed sensing methods that account for noise and only approximately sparse signals, the Dantzig selector [Candès and Tao, 2007] is of particular interest to us. Let A represent the measurement matrix whose m rows are randomly sampled from a population of measurement vectors with coherence μ (for instance, these can be randomly sampled from the rows of ΦΨ as defined above). Also, let the measurements be corrupted by noise given by math formula, a zero-mean i.i.d. Gaussian vector with variance σ2. Therefore, the measurements are given by y = Ax + z. The Dantzig selector is the solution to the optimization problem

display math

where λ is a constant that is selected so that the actual signal obeys math formula with high probability. Thus, the Dantzig selector finds a solution that has minimum l1 norm, promoting sparsity, and that is consistent with the measurements given the noise level. Finally, we have the incoherent sampling theorem, which guarantees that the Dantzig selector provides a solution that is within noise error of the exact solution.

Incoherent Sampling Theorem for the Dantzig Selector [Candès and Plan, 2011]: Suppose that a signal math formula is measured as described above, and let math formula. Let β denote a chosen constant, μ the coherence of equation (1), and math formula a chosen expected upper bound on the sparsity of x. If, for a positive constant C0 (the exact value is not important for our purposes), the number of measurements m satisfies

display math

then the Dantzig selector obeys

display math
display math

with probability at least 1 − 6/n − 6e−β, where math formula, C1 is a positive constant (value unspecified), and xs denotes the vector x with all but its largest s entries set to zero. So we see that the error of the solution is bounded by the error of the best sparse approximation (math formula, which goes to zero if x is sparse) and by the standard deviation of the measurement noise (σ).
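For readers who want to experiment with the Dantzig selector itself, the following real-valued Python sketch poses it as a convex program using the cvxpy package. The random measurement matrix, noise level, and threshold λ = √(2 log n) are illustrative choices rather than values from the paper, and complex radar data would additionally require stacking real and imaginary parts or a solver with complex support.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, s, sigma = 200, 80, 5, 0.05

# Incoherent (random Gaussian) measurements of an s-sparse signal plus noise
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + sigma * rng.standard_normal(m)

lam = np.sqrt(2 * np.log(n))                 # a typical threshold choice
x = cp.Variable(n)
residual_corr = A.T @ (y - A @ x)            # correlation of the residual with each column of A
prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                  [cp.norm_inf(residual_corr) <= lam * sigma])
prob.solve()
print(np.linalg.norm(x.value - x_true))      # small: the sparse signal is recovered
```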

3. Discrete Linear Radar Model

[15] In order to use the Dantzig selector to get a compressed sensing solution for radar signals, we first need a discrete linear model describing the radar that meets the sparsity and incoherence requirements. Appropriate models are presented by Herman and Strohmer [2009] and Bajwa et al. [2008] that discretize the radar signal in a joint time delay and frequency space. Although we will use a model that is almost identical to those, the derivation that follows is nevertheless instructive because of what it tells us about how continuously defined targets fit within the model.

[16] We begin with the narrow-band radar equation for (the complex envelope of) the baseband received signal from a fluctuating distributed target [Van Trees, 2001]:

display math

where s(t) is the transmitted baseband modulation signal and a(τ, λ) is the complex target reflectivity (magnitude and phase) as a function of time τ and delay (range) λ. We have taken the integration interval to be [0, T] to reflect the limited sampling window, and this carries with it the implicit assumption that a(t − λ/2, λ) = 0 outside that range. Whether one considers a(τ, λ) to be random or deterministic does not matter at present. What is important is that we expect this function to be sparse in both the frequency domain (Fourier transform with respect to τ) and in the delay domain. To ease further derivation, we define a new function

display math

so that the time variable represents the scattering for a signal sent at time τ (which arrives at the target at τ + λ/2) rather than one reaching the target at time τ. This results in

display math

[17] The next step is to begin discretizing the model. We assume that the received signal is sampled with a uniform period τs, so that y(t) is represented by a complex discrete sequence yq = y(qτs) with q = 0, …, m where m = T/τs. In addition, we restrict our attention to discrete phase-modulated signals with b bauds and a baud length of τb, so that s(τ) = sk for kτb ≤ τ < (k + 1)τb for a complex sequence math formula with k = 0, …, b − 1. For ease of notation, let us also infinitely extend the sequence sk by letting its value for non-existent indices be zero, sk = 0 for k ≠ 0, …, b − 1. If we assume that the sampling period is an integer multiple of the baud length, then we have τs = rτb where r is the under-sampling ratio. Note that if the reverse is true and over-sampling by an integer ratio is performed, we can simply duplicate the modulation sequence by the over-sampling ratio and let r = 1. In either case, we know that T/τb = rm. With these assumptions, we can break up the integral in equation (8) as follows:

display math

We are close to the discrete model we want, since the integral in this last equation depends on two discrete parameters (k and q) and can be represented by a matrix or vector. Unfortunately, we cannot use this matrix/vector as our unknown variable because it would not be sparse with respect to q. The sparsity is in the frequency domain, so we need to introduce a Fourier representation.
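The bookkeeping between baud length and sampling period described above is simple but easy to get wrong in practice. The sketch below (with a hypothetical helper name) builds the effective modulation sequence and under-sampling ratio r under the integer-relation assumption made in the text, using the Barker-13 parameters of the PFISR example in section 6 as a test case.

```python
import numpy as np

def sampled_code(code, baud_len, sample_period):
    """Align a discrete phase code with the sampling grid.

    Returns (s_k, r): the (possibly duplicated) modulation sequence and the
    under-sampling ratio r such that sample_period = r * effective baud length.
    Illustrative helper; assumes the two periods are integer-related, as in the text.
    """
    code = np.asarray(code, dtype=complex)
    if sample_period >= baud_len:                  # under-sampling (or critical sampling)
        r = int(round(sample_period / baud_len))
        return code, r
    over = int(round(baud_len / sample_period))    # over-sampling: duplicate each baud
    return np.repeat(code, over), 1

# Barker-13 code, 10 us bauds, sampled every 5 us (the PFISR example): r = 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
s_k, r = sampled_code(barker13, baud_len=10e-6, sample_period=5e-6)
print(len(s_k), r)   # 26 bauds after duplication, r = 1
```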

[18] Now comes a key observation: s(τ) is nonzero only over 0 ≤ τ < bτb, and h(τ, λ) is evaluated over the same values in equation (8). Thus, the result of the model will be the same if we replace h(τ, λ) with its Fourier series representation on the interval 0 ≤ τ ≤ nτs, where Δf = 1/(nτs) ≤ 1/(bτb) is the smallest frequency component. The Fourier series is given by

display math

where

display math
display math

defines the Fourier coefficients. As indicated by the notation in equation (12), one can think of these coefficients as scaled samples from the windowed Fourier transform of h. This formulation will be useful later on. Substituting the Fourier series representation into equation (9) yields

display math

This seems to have made the model more complicated for no benefit: there are now three discrete parameters and an infinite sum to boot. Notice, however, that e^{2πijq/n} as a function of j is periodic with period n. Re-parameterizing the infinite sum leads to

display math

Now the result of the integral is indexed by p and k where p parameterizes the frequency domain (Doppler shift) and k parameterizes the delay domain. Thus we define discrete reflectivity coefficients

display math

and arrive at the discrete linear radar model:

display math

One can think of hp,k as describing the entries of an n × rm reflectivity matrix H. Then the inner sum of equation (16) is almost the inverse discrete Fourier transform of the columns of the reflectivity matrix, as it is only missing a 1/n coefficient. The outer sum represents a convolution in the time delay index between the transmitted signal and reflectivity matrix.
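To make this structure concrete, here is a direct (brute-force) evaluation of the model in Python. The index convention s_{rq−k} inside the delay sum is our reading of the convolution and inverse-DFT structure described above; it should be treated as an illustrative assumption rather than a verbatim transcription of equation (16).

```python
import numpy as np

def radar_forward_naive(H, s, r):
    """Direct evaluation of the discrete model:
    y_q = sum_k s_{rq-k} * sum_p H[p, k] * exp(2j*pi*p*q/n).

    H is the n x (r*m) reflectivity matrix, s the zero-extended modulation
    sequence, r the under-sampling ratio. Indexing is an assumption for
    illustration, consistent with the convolution/IDFT description in the text.
    """
    n, rm = H.shape
    m = rm // r
    y = np.zeros(m, dtype=complex)
    for q in range(m):
        for k in range(rm):
            j = r * q - k                          # index into the modulation sequence
            if 0 <= j < len(s) and s[j] != 0:
                phases = np.exp(2j * np.pi * np.arange(n) * q / n)
                y[q] += s[j] * np.dot(H[:, k], phases)
    return y
```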

[19] We assume that equation (16) satisfies both of the requirements that permit a compressed sensing solution: sparsity and incoherence. Radar targets, even the range- and Doppler-spread ones, are typically localized enough in the range-Doppler space that the received signal is well-approximated by only a few nonzero coefficients hp,k. We will delve further into this topic later, but it suffices to say that sparsity in hp,k is a good approximation for our cases of interest. Whether these measurements qualify as incoherent depends, of course, on the modulation sequence sq. In practice, we have used this model successfully with binary random sequences (sq ∈ {1, − 1}), Barker codes, and minimum peak sidelobe codes. Alltop sequences were proven to result in incoherent measurements with a similar model [Herman and Strohmer, 2009]. Numerous papers discuss the incoherence properties of convolution or Toeplitz sensing matrices formed from random sequences [Bajwa et al., 2007; Romberg, 2009; Tropp et al., 2006] or chirp sequences [Tropp et al., 2006], a structure seen in the convolution portion of our model, while Candès et al. [2011] discuss sensing with Gabor frames, a feature which arises from the DFT portion of our model. Perfect codes [Lehtinen et al., 2009] are also likely to have good incoherence properties given that they already achieve inversion through application of the matched filter, but no formal analysis has been done. So although we have not proven incoherence for any class of measurements made by this model, we nevertheless expect that many classes of modulation sequences will admit a compressed sensing solution.

4. Analysis of Radar Model

[20] Two issues need to be resolved in order for the model to be useful in practice:

[21] 1. We have assumed that the Fourier transform of h(τ, λ) with respect to τ, math formula, is sparsely supported over a relatively small portion of the range-Doppler space; we need to show that this implies sparsity of hp,k.

[22] 2. Compressed sensing coupled with the model gives a solution for hp,k; we need to know what this allows us to infer about math formula and consequently h(τ, λ).

[23] We start exploring the relationship between h(τ, λ) and hp,k by combining equations (12) and (15) using shorthand notation:

display math

where * is the convolution operator and math formula is the Dirac comb with spacing α. We also rewrite math formula as

display math

where math formula represents the Fourier transform with respect to τ and ΠT(τ) is the rect function defined as

display math

Using this notation, the function in brackets in equation (17) can be further simplified by taking the inverse Fourier transform, combining terms, and taking the Fourier transform to arrive at an equivalent representation:

display math

with N = ⌈λ/τs⌉ = ceil(λ/τs), math formula and

display math

From the finite geometric sum formula, the complex exponential representation of sine, and the definition of the sinc function, one can equivalently write gN(f) as

display math

This allows us to finally write the discrete reflectivity coefficients hp,k in terms of the presumed sparse function math formula:

display math

We see that in the Doppler frequency variable p, the coefficients represent samples from the reflectivity function frequency spectrum math formula after it has been “smeared” by g⌊k/r⌋+1(f) through convolution. In the time delay variable k, the coefficients represent an integration of the reflectivity function over the corresponding delay window.

[24] For the doubly spread target of Van Trees [2001], we switch from a deterministic reflectivity function to a probabilistic one and invoke the assumption that math formula is a zero-mean complex Gaussian random variable with autocovariance given by

display math

where E denotes expected value and S(f, λ) is the target's scattering function, which describes the returned power as a function of frequency and range. The process is zero-mean because the phase of math formula is assumed to follow a uniform distribution that is independent of the magnitude. In practical terms, the main difference between the doubly spread model and the deterministic model is the phase of the returned signal: the former model produces random phases with respect to range, while the latter model produces phases that are a fixed function of range. Which of these is appropriate will depend upon the application, but both fit equally well with compressed sensing. In terms of the reflectivity coefficients, the doubly spread target model results in

display math

with

display math

In this case, the relationship between the coefficients and the actual quantity of interest S(f, λ) is even clearer: on average, the coefficients math formula give the total power returned by the target from ranges kτb ≤ λ < (k + 1)τb and frequencies weighted by G(pΔf − f). Both the deterministic and doubly spread target models lead to similar interpretations for the coefficients, with the difference amounting to how the phase of the returned signal adds up.

[25] To illustrate these points and analyze the coefficient sparsity for the sparsest possible target, consider the case of a point target with a reflectivity of A initially at range r and traveling toward the radar with a range rate v. The target's time delay and Doppler frequency shift are given by math formula and math formula respectively, where c denotes the speed of light and f0 is the baseband radar frequency. The appropriate target reflectivity function is

display math

or equivalently

display math

Plugging this into equation (23) shows how the discrete model represents a point target:

display math

In order to look at sparsity, it will be easier to visualize the absolute value of the reflectivity coefficients given by

display math

Example point target reflectivity coefficients given by equation (30) are shown in Figure 1. Notice that if pΔf = ft for some integer p, then all of the coefficients are zero except for the one at that frequency index, and target sparsity translates directly into coefficient sparsity. However, in general all of the frequency coefficients corresponding to the correct range will be nonzero, and the question then becomes one of degree: are there few enough significant coefficients to say that they are sparse? In practical terms, “significant” means being above the noise level chosen for the compressed sensing reconstruction since smaller values can be taken as zero and the reconstruction error will still be acceptable.

Figure 1.

Reflectivity coefficients representing a point target, given by equation (30) with A = 1 and τs = 1 μs as a function of frequency for six cases. The left column shows the values for a point target with Doppler frequency shift of 100 kHz, while the right column shows values for a Doppler shift of 150 kHz. The first row shows the coefficient values when 10 frequencies are included in the discrete model, the second row shows values for 25 frequencies, and the third row shows values for 50 frequencies. The dotted line in each plot shows the underlying curve from which the coefficients are sampled.

[26] Figure 1 suggests one way of ensuring that the coefficient sparsity emulates the reflectivity function sparsity for point targets and by extension distributed targets: increasing n, the number of frequencies included in the discrete model. As n increases, the discretization effects become more localized and the coefficient magnitudes for the point target look more like direct frequency samples from math formula. It is important to note that this is the same problem and solution encountered when relating the discrete Fourier transform of a sampled function to the underlying continuous function's Fourier transform. So we conclude that in general with n large enough, sparsity of math formula translates directly to sparsity of hp,k.

[27] Before proceeding to implementing all of this theory, we would be remiss to not point out the connection between our discrete radar model and the matched filter. If one thinks of the model as a linear operator that takes the reflectivity coefficient matrix and produces a measurement vector, then the adjoint operator is described by

display math

Taking r = 1 for the case when the signal is not undersampled, notice that the adjoint operator is exactly the discrete matched filter for the sequence s applied at the frequencies pΔf. Applying the adjoint/matched filter to the model itself yields

display math

where χp′,k′(p, k) is the shifted time-frequency autocorrelation function (or ambiguity function) of s:

display math

The output of the matched filter is a superposition of the model's reflectivity coefficients multiplied by the appropriately shifted time-frequency autocorrelation function. Solving for the reflectivity coefficients is equivalent to inverting the autocorrelation function to recover just the peak of the matched filter. In other words, the reflectivity coefficients describe the same result as the matched filter but with all filtering sidelobes removed. Of course, this “matched” filter is only matched for point targets at the discretization grid points, and interpreting its results when applied to distributed targets entails a similar analysis to the one we just presented for interpreting the reflectivity coefficients. The takeaway is that the applicability of the discrete model to real-world problems is exactly the same as the applicability of the discrete matched filter in those situations.
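A corresponding sketch of the adjoint operator, again under the same assumed index conventions as the forward-model sketch in section 3, makes the connection to a bank of frequency-shifted matched filters explicit (exactly matched when r = 1):

```python
import numpy as np

def radar_adjoint(y, s, n, rm, r):
    """Adjoint of the forward operator sketched earlier:
    Hhat[p, k] = sum_q conj(s_{rq-k}) * exp(-2j*pi*p*q/n) * y_q.

    With r = 1 this is a discrete matched filter for s evaluated at each
    frequency p*Delta_f. Same illustrative index assumptions as before; this is
    not a reproduction of equation (31).
    """
    m = len(y)
    Hhat = np.zeros((n, rm), dtype=complex)
    qs = np.arange(m)
    for k in range(rm):
        # weight each received sample by the conjugated, delayed code ...
        w = np.zeros(m, dtype=complex)
        for q in range(m):
            j = r * q - k
            if 0 <= j < len(s):
                w[q] = np.conj(s[j]) * y[q]
        # ... then take a DFT over the sample index q for each frequency p
        for p in range(n):
            Hhat[p, k] = np.sum(w * np.exp(-2j * np.pi * p * qs / n))
    return Hhat
```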

5. Implementation

[28] With the sparsity and interpretation of the reflectivity coefficients hp,k settled, we have all the theoretical tools we need to apply the discrete radar model to real data using compressed sensing. In order to actually accomplish this, we employ a variation on the Dantzig selector called the Gauss-Dantzig selector. An estimate is made with the Dantzig selector

display math

where h represents a vectorized form of the reflectivity matrix H and A gives the linear measurements according to the radar model. The locations of the non-zero components are taken from this initial estimate and used to solve the constrained least squares problem

display math

where we have constrained the solution math formula to only allow non-zero entries in the same locations as math formula. The effect of this procedure is to maintain the compressed sensing performance of the Dantzig selector, notably the sparse solution and its robustness to noise, while achieving an unbiased estimate of the non-zero components, which the Dantzig selector alone (being biased toward zero) does not achieve.
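The second, least-squares stage of the Gauss-Dantzig selector is straightforward once the support has been identified. A minimal sketch follows, with an illustrative support threshold and a dense matrix A standing in for the structured radar operator; neither choice is taken from the paper.

```python
import numpy as np

def gauss_dantzig_refit(A, y, h_ds, threshold=1e-8):
    """Re-estimate the entries on the support found by the Dantzig selector
    by ordinary least squares restricted to that support.

    A: measurement matrix (dense here for illustration), y: measured data,
    h_ds: Dantzig-selector estimate. The threshold for declaring an entry
    non-zero is an illustrative choice.
    """
    support = np.flatnonzero(np.abs(h_ds) > threshold)
    h = np.zeros_like(h_ds)
    # Unconstrained least squares over the columns of A on the detected support
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    h[support] = sol
    return h
```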

[29] Because of the potentially large dimensions of these convex optimization problems, with h having thousands of elements or more, second-order solution methods are often not feasible. Therefore, it is necessary to pursue first-order gradient-based methods of optimization. Along with these large dimensions comes a need for efficiently computing the linear measurements represented by A. Simply taking the model of equation (16) and converting it to matrix form as y = Ah will not work very well; with a matrix of that size, most computers will run out of memory very quickly. Luckily, the linear operator in this case is highly structured, and we can take advantage of this in the computations if given the opportunity.

[30] As one might imagine, these problems are not unique to our case, so general-use software packages for compressed sensing and other large-scale optimization problems are available. The one we have chosen to use is called TFOCS (Templates for First-Order Conic Solvers) [Becker et al., 2011a], and it was developed to solve problems of the smoothed conic form described by Becker et al. [2011b]. Its benefits are that it is easy to specify the optimization problem and it allows one to provide an efficient implementation of the linear operator A.

[31] The way we achieve this efficient implementation is to break the operator of equation (16) into two steps: applying the Doppler frequency shift and applying the time delay and convolution with the transmitted signal. For the first operation, we are referring to the calculation

display math

This is just an inverse discrete Fourier transform applied to the columns of H and multiplied by a factor of n, so it is readily implemented using the FFT (fast Fourier transform) algorithm. For the second operation, we must implement the function

display math

Although it might be tempting to try to take advantage of the convolutional structure of this sum and once again use the FFT algorithm, this would actually involve performing m convolutions and discarding most of the resulting values because the g term depends on q as well as k. Thus the straightforward approach of performing the sum directly is the correct approach. For both of these operations, one can take advantage of sparse data structures to minimize memory and computation even further. Though these observations are trivial, the difference in computation time between using this efficient implementation and using either a brute-force sum or a giant matrix is certainly not trivial.
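Putting the two operations together, a sketch of this more efficient forward operator applies an FFT along the Doppler dimension and then performs the delay sum directly. The indexing follows the same illustrative assumptions as the brute-force sketch in section 3 and relies on the periodicity of e^{2πipq/n} in q.

```python
import numpy as np

def radar_forward_fast(H, s, r):
    """Two-step evaluation of the forward model.

    Step 1: Z[q, k] = sum_p H[p, k] * exp(2j*pi*p*q/n)   (IDFT of each column, times n)
    Step 2: y_q = sum_k s_{rq-k} * Z[q, k]               (delay sum, done directly
            because Z depends on q as well as k)
    Same illustrative index conventions as the naive sketch given earlier.
    """
    s = np.asarray(s)
    n, rm = H.shape
    m = rm // r
    Z = np.fft.ifft(H, axis=0) * n        # n-point IDFT of each column, scaled by n
    y = np.zeros(m, dtype=complex)
    for q in range(m):
        # only delays k with a valid code index 0 <= r*q - k < len(s) contribute
        ks = np.arange(max(0, r * q - len(s) + 1), min(rm, r * q + 1))
        # e^{2*pi*i*p*q/n} is periodic in q with period n, so index Z modulo n
        y[q] = np.dot(s[r * q - ks], Z[q % n, ks])
    return y
```

In a quick comparison against the brute-force version on the same inputs, the two sketches produce the same output to floating-point precision, while the FFT-based version avoids recomputing the Doppler phases for every delay.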

[32] One difficulty in using TFOCS is that it actually solves a smoothed version of the optimization problem, which is necessary because both the Dantzig selector and constrained least squares problems are not differentiable everywhere. Thus there is a need to select a value for the smoothing parameter μ which weights the smoothing term of the minimization objective function. If μ is too large, the optimization converges to an incorrect value; if μ is too small, the optimization converges too slowly. Finding the best value requires trial and error, although this may improve in future versions of TFOCS. In our experience with the examples to follow, letting

display math

strikes the balance reasonably well in an automated fashion for each individual measurement vector y.

[33] In total, our compressed sensing radar implementation proceeds as follows. Data is collected using a discrete phase shift pulse waveform that results in sufficiently incoherent measurements, with baud length determining range resolution and sampling rate setting the limit for feasible reconstruction. Processing the data begins with choosing a Doppler shift resolution, with the goal of minimizing the frequency step (maximizing n) while ensuring that enough measurements have been collected with respect to the chosen Doppler resolution to invoke the incoherent sampling theorem and guarantee solution accuracy. Knowing the dimensions of the problem, two linear functions are implemented to efficiently calculate the measurements from the reflectivity matrix according to our model. Then the locations of the non-zero entries of the reflectivity matrix are found by using TFOCS to solve the Dantzig selector with smoothing parameter μ given by equation (38). The final solution is reached by solving the sparsity-constrained least squares problem that completes the Gauss-Dantzig selector. The resulting reflectivity matrix is interpreted according to equation (23) or equation (25) to tell us approximately how much signal was returned by each corresponding window in range-Doppler space.

6. Examples

[34] We apply our approach to meteor radar data collected by HPLA systems. Our focus is on authentic data rather than simulations because the latter have already been shown for radar by Herman and Strohmer [2009]. The first example is a particularly strong meteor head echo observed by the Poker Flat Incoherent Scatter Radar (PFISR) on 28 July 2010. The measurements were made at 449.3 MHz using a Barker-13 code with a baud length of 10 microseconds and a sampling period of 5 microseconds. The inter-pulse period was 51.75 ms. It should be stressed that these parameters are not ideal for compressed sensing, as they were chosen with matched filter processing in mind. Nevertheless, the Barker-13 waveform is a discrete phase shift code as required by our model, and it results in measurements that are sufficiently incoherent. Depicted in Figure 2 is the SNR (signal-to-noise ratio) of the meteor head as a function of range and pulse time as given by the matched filter (Figure 2a) and compressed sensing (Figure 2c). The compressed sensing solution also yields the SNR as a function of Doppler frequency; this is shown in Figure 2d. In order to arrive at the matched filter result, it is necessary to try multiple filters that have each been frequency shifted by a different amount in order to account for the Doppler shift of the returning signal. The single matched filter result is the one that results in the highest SNR out of all of the shifted filters. This maximum SNR and the corresponding frequency shift are shown in Figure 2b for comparison with the compressed sensing result.

Figure 2.

(a–d) Strong meteor head echo seen by PFISR. SNR is plotted as a function of pulse time and range or Doppler frequency for both the matched filter and compressed sensing decodings.

[35] The first thing to note about this example is that the matched filter and compressed sensing results generally agree, showing approximately the same SNR for the signal at the same locations in range and Doppler frequency shift. Although it is only one example, this gives us confidence that our approach has merit and works on real data sets. Perhaps the second most striking takeaway from these results is the lack of range sidelobes in the compressed sensing solution, which are evident in the matched filter solution in Figure 2a as a band of higher-than-noise points centered around the SNR peak of each pulse. In this respect, compressed sensing is competitive with other processing approaches [Damtie et al., 2008; Lehtinen et al., 2009] used to eliminate filtering sidelobes. The third important observation is that the compressed sensing solution associates an independent Doppler frequency spectrum with each range, whereas the matched filter is limited to one frequency spectrum for the entire signal. If the reconstruction resolution is such that a single range- and Doppler-spread target encompasses multiple range gates or Doppler samples, this can tell us whether different portions of the target are moving at different speeds, such as would be possible in a distributed plasma or a multibodied hard target like a helicopter.

[36] Although it is only partially evident from Figures 2b and 2d, the compressed sensing decoding for the head echo describes signal returning from neighboring range gates with the lower range associated with a Doppler shift that is 1 kHz higher than the Doppler shift associated with the higher range. We do not believe this to actually be true for this head echo. Rather we believe that this puzzling decoding results from assuming that the transmitted signal had uniform power when in actuality it experiences drops in power whenever the phase shifts according to the Barker-13 code. These power dips can be emulated in the return signal with destructive interference caused by two targets, which is what we see with the compressed sensing solution. Including these assumed power drops in the code sequence causes the compressed sensing solution to converge on a single target occupying one range gate and Doppler shift. We find this explanation more plausible than a fragmented meteoroid or dense head echo plasma spread over more than 1.5 km in range. Nevertheless, this example shows what the compressed sensing results would be for either of the latter cases. If transmission signal deviations could be eliminated, our technique would provide a means for identifying fragmentation and wide head echo events where a matched filter fails to do so.

[37] Figure 3 shows our second example: matched filter and compressed sensing decodings of a meteor head and nonspecular trail detected by the Jicamarca incoherent scatter radar on 1 March 2011. The measurements were collected at 49.92 MHz with a 51-baud minimum peak sidelobe code and a baud length of 1 microsecond, sampling period of 1 microsecond, and inter-pulse period of 1.02 milliseconds. Starting on the left side of both plots, we see that the head echo is captured very cleanly in the compressed sensing solution and agrees with the matched filter as before. Near the end of the head echo is a flare, whereby a large density of plasma is deposited at a particular range and remains visible for longer than normal. The compressed sensing result provides some insight by not agreeing with the matched filter at one point of high SNR in the head echo where the flare occurs. This is because the compressed sensing SNR plot is summed over all frequencies, and at this point there is signal within the same range gate coming from both the flare at zero Doppler shift and the head echo at a positive Doppler shift. As the flare subsides, a nonspecular trail forms and we have a target where the compressed sensing approach begins to break down. Broadly speaking, its SNR agrees with the matched filter SNR, but the compressed sensing solution also exhibits many artifacts that almost surely do not represent true signal. We believe that two effects could contribute to these errors: sparsity bounds on the discrete model's representation of the trail may not be achieved, and the smoothed optimization problem implemented with TFOCS may not be converging to the minimum l1-norm solution because of the decreased target sparsity coupled with our selection of optimization parameters.

Figure 3.

(a, b) Meteor head and nonspecular trail seen by the Jicamarca incoherent scatter radar. SNR is plotted as a function of pulse time and range for the matched filter and compressed sensing decodings.

7. Conclusion

[38] Compressed sensing provides an exciting new approach for processing radar signals and identifying targets. The radar model that we derived provides insight into the relationship between the compressed sensing solution and a continuously defined target's reflectivity and shows that sparsity of the true target translates to sparsity in the model. Our formulation is very similar to the previous models by Herman and Strohmer [2009] and Bajwa et al. [2008] for which compressed sensing has been explored theoretically, lending mathematical support to our procedure. The concept is also similar to the amplitude domain analysis of Vierinen et al. [2008], but our solution provides a higher level of automation that makes general application easier. The efficient implementation that we developed for this procedure allows its use on large data sets, which is an important step to analyzing real-world data. From our two examples of meteor head echoes and a nonspecular trail, we know that the compressed sensing procedure can provide new insight compared to matched filter techniques.

[39] Many of the benefits of compressed sensing are shared by methods like mismatched or inversion filters [Lehtinen et al., 2009; Damtie et al., 2008], lag profile inversion [Virtanen et al., 2008], and scattering amplitude inversion [Vierinen et al., 2008]. Compressed sensing is compelling because it casts inversion in a new framework that places a different constraint on the transmission signal (incoherence) and provides convergence guarantees based on the degree of sparsity, the coherence of the code, and the number of measurements. Compared to the ubiquitous matched filter, our method produces no filtering sidelobes, removes noise, and has a high range and Doppler frequency resolution that is not directly constrained by the sampling rate or pulse length. Provided the target meets sparsity constraints, these features make compressed sensing ideal for detailed imaging of distributed targets and identification of multiple targets that are closely spaced in range and/or range rate. For meteor studies with HPLA radars, these abilities are vital to elucidating the complex processes present in the plasma.

[40] The approach described herein is a first step toward getting interpretable radar results using compressed sensing techniques. Continued application to measuring ionospheric plasma, particularly meteors, will be the focus of our future work. We intend to explore ways to improve the discrete radar model so that it better encompasses the sparsity of complex targets like nonspecular meteor trails. Avenues of improvement include expressing the target in a different basis (wavelets, discrete prolate spheroidal sequences), removing the restriction to discrete bauded codes and allowing arbitrary transmission envelopes, and incorporating the effects of pre-sampling filters. We also intend to investigate relaxing the sparsity requirement to cover either range or frequency so that sensing a wider category of targets is possible. Even though there is much work to be done, the future of radar compressed sensing looks promising indeed.

Acknowledgments

[41] Ryan Volz was supported by the Department of Defense (DoD) through the National Defense Science and Engineering Graduate Fellowship (NDSEG) Program. This material is based upon work supported by the National Science Foundation under grant AGS-1056042.