The nontransparency and severe propagation effects of the terrestrial ionosphere make it impossible for Earth-based instruments to study the universe at low radio frequencies. An exploration of the low-frequency radio window with the resolution and sensitivity essential to meet the scientific objectives will necessarily require a dedicated satellite-based interferometer operating at these frequencies. Such missions have been proposed in the literature for about the past 15 years. Today, the steady and impressive advances in technology and computing resources have brought us to the brink of a quantum jump in the performance and capabilities of such missions, increasing their scientific desirability manyfold. This paper presents the concept design which emerged from a study, titled Preparation for Radio Interferometry in Space, to investigate the feasibility of a low-frequency satellite-based interferometer operating in the frequency range 0.1–40 MHz. Rather than trying to stretch and adapt existing solutions to the requirements of very low frequency interferometry, this study attempts to tailor a design to its specific needs. The salient features of the design are an onboard correlator to reduce the data volumes to be transmitted to the Earth by about 2 orders of magnitude, the use of three orthogonal dipoles in place of two to achieve better polarization characteristics, direct digitization of the entire radio frequency band of interest allowing broadband observations, an overlap in observing frequency range with upcoming ground-based instruments to aid in imaging and calibration, and an all-sky imaging capability. The most constraining bottleneck for the present design is the large intraconstellation telemetry requirement. It is expected that technological solutions to meet this requirement will be found in the near future as other formation-flying missions which share this requirement emerge.
 High-resolution, high-sensitivity low-frequency radio astronomy has remained a standing challenge. On the Earth, the nontransparency of the ionosphere below a few megahertz and its severe propagation effects below a few tens of megahertz have prevented detailed studies of the radio universe at these frequencies. The 1960s and 1970s saw some attempts to investigate the electromagnetic spectrum below 30 MHz using spaceborne radio astronomy experiments on board individual satellites [Alexander, 1971, and references therein]. The last explorations in this series were conducted by the two Radio Astronomy Explorer (RAE) missions in the late 1960s and early 1970s [Herman et al., 1973; Alexander et al., 1975]. These missions were dedicated radio astronomy missions, covered the frequency ranges 0.2–9.2 MHz and 0.025–13.1 MHz, respectively, and had practically identical instrumentation. RAE 1 was placed in a 6000 km, circular, 59° inclination, retrograde orbit around the Earth, and RAE 2 was placed in a circular lunar orbit at 1100 km altitude and 59° inclination. These satellites provided resolutions of ∼40° × 60° at ∼9 MHz and 220° × 160° at ∼1 MHz. Even today, it is not possible to achieve significantly better resolution using individual spacecraft.
 A space-based interferometer will be essential to conduct a detailed study of the radio universe below a few tens of megahertz. In this paper, we refer to the frequency range below 30 MHz as the very low frequency (VLF) range. The subject has been discussed in the literature for more than 15 years now. Weiler et al. proposed a four-spacecraft mission with 85 m crossed traveling wave V antennas, similar to those used on the RAE spacecraft. Four discrete bands of 50 kHz each were to be covered in the range 1–30 MHz. Basart et al. [1997a, 1997b] suggested a strategic plan for a space-based low-frequency array progressing from spectral analysis on board a single spacecraft to a two-element interferometer in Earth orbit, continuing to an interferometer array in Earth or lunar orbit, and culminating in lunar nearside and farside arrays. They considered it necessary to have high-gain antennas and suggested the use of spherical inflatable arrays with many active elements as interferometer elements. Jones et al. seem to be the first to consider short dipoles to be suitable elements for space interferometers. They proposed a 16-element array in a distant retrograde orbit, covering 0.3–30 MHz with bandwidths of up to 125 kHz. These studies focused on achieving the best possible performance using the technology available to them.
 The enormous advances in technology and the vast increase in the affordability of computing resources have enabled the design of the next generation of radio interferometers, a large number of which are being actively pursued, for instance, the Allen Telescope Array (ATA), the Atacama Large Millimeter Array (ALMA), the Low Frequency Array (LOFAR), the Expanded Very Large Array (EVLA), the Frequency Agile Solar Radiotelescope (FASR), and the Square Kilometre Array (SKA). This work presents the next generation of design for space-based VLF interferometers. An important guiding principle for the study was to come up with a design tailored to the specific needs of VLF interferometry: a design motivated by how one would like an ideal instrument to be, and not one which can necessarily be realized in the immediate future. The limitations imposed by the currently available technology were hence not regarded as hard constraints. We have, however, exercised caution in our endeavor to think beyond current technological limitations, so as not to wander from the realm of the feasible. The design presented is aggressive in the aspects of technology which are progressing rapidly and practically respects the existing constraints from technologies advancing at a slower pace. Judging from present trends, it is expected to become feasible in the near future. The study, titled Preparation for Radio Interferometry in Space (PARIS), was a collaborative effort between the Laboratoire de Physique et Chimie de l'Environnement, the Laboratoire d'Etudes Spatiales et d'Instrumentation en Astrophysique from the Paris-Meudon Observatory, and the Nançay radio astronomy station.
Section 2 briefly summarizes the scientific motivation behind a VLF space interferometer. Section 3 details the aspects of VLF sky and interferometry technique which were regarded as the major design drivers for this study. The concept design is presented in section 4, emphasizing how each of the identified design drivers has been accommodated. Sections 5 and 6 discuss formation-flying and calibration aspects, respectively. Some key aspects of the data analysis strategy are presented in section 7, and the conclusions are presented in section 8.
2. Scientific Objectives
 Far from showing a simple extension of the more energetic phenomena seen at higher frequencies, studies at very low radio frequencies promise new insights into astrophysics and solar physics. The main objective of PARIS is to produce the first ever sensitive high-resolution radio images of the entire sky in the frequency range from ∼300 kHz to ∼30 MHz. Only a spaceborne radio interferometer, free from the corruptions due to the ionosphere and away from the terrestrial radio frequency interference, can deliver the sensitivity and the dynamic range necessary for useful astronomy in this frequency range. Figure 1 clearly shows that PARIS will open up a new window of opportunity in the last unexplored part of the electromagnetic spectrum. The topics discussed in this section highlight only some of the major scientific objectives and potentials of the mission. For a more detailed review, the reader is referred to Weiler, Kassim and Weiler, and Stone et al.
2.1. Very Low Frequency Astrophysics
 A VLF spaceborne radio interferometer will have a wide-ranging impact on many fields in astrophysics, ranging from studies of discrete extragalactic, galactic, and solar system objects to studies of the diffuse galactic emission and investigations of the interstellar medium (ISM). Some of the expected prominent areas of investigation are as follows.
2.1.1. Determination of Very Low Frequency Spectra
 The measurement of the low-frequency radio spectral behavior will provide a host of information about the energetics and evolution of radio sources, as well as about the physical conditions in the source [Erickson, 2000]. Some of the physical processes involved in emission (e.g., coherent emission) and absorption (e.g., spectral turnover) of radiation are expected to be observable only at very low radio frequencies.
2.1.2. Evolution of Galaxies
 A low-frequency space interferometer will be particularly sensitive to the radio emission from low-energy electrons which have long radiative lifetimes. These old electrons form an excellent probe for studying the history of radio sources and will allow one to detect the fossil radio plasma whose signatures at higher frequencies are much fainter [Enßlin and Krishna, 2001; Reynolds and Begelman, 1997]. These electrons allow one to detect the early epochs of activity of radio galaxies and to trace their evolution in time and will help constrain the models of galactic evolution and of supermassive black holes in the galactic nuclei.
2.1.3. Study of the Exchange of Matter and Energy Between Stars and the Interstellar Medium
 Within our galaxy, the interferometer will observe the background galactic synchrotron emission and the extragalactic objects overlaid with a complex absorption pattern generated by the diffuse ISM and discrete objects like supernova remnants. Absorption from dense clumps of ionized hydrogen and the warm diffuse component of the ionized ISM will be detectable [Dwarakanath, 2000; Reynolds, 1990]. Multifrequency all-sky maps will allow one to do a tomographic study of the distribution of ionized hydrogen. This will be a considerable new source of information for improving the existing model for galactic distribution of free electrons like the one by J. M. Cordes and T. J. W. Lazio (A new model for the galactic distribution of free electrons and its fluctuations, submitted to The Astrophysical Journal, 2002). These maps will also help in determining the origin of the energy content of the ionized hydrogen. In addition to revealing the large-scale structure of the ionized ISM, it may also be possible to probe the small-scale turbulence by examining the propagation effects like an apparent increase in angular size and fluctuation in dispersion measurements toward pulsars [Cordes, 2000]. This will help in our understanding of the injection and turbulent dissipation of energy within the ISM. By imaging the synchrotron emission from the electrons in the shock regions, such a mission will allow one to test the theory that cosmic rays are accelerated in supernova remnant shocks [Duric, 2000].
2.1.4. Discovery of New Phenomena
 An interferometer in this frequency range will provide an improvement of 2 orders of magnitude in both sensitivity and resolution over the existing observations from individual spacecraft. As with any first exploration of a part of the electromagnetic spectrum, this mission is justifiably expected to discover objects and phenomena not seen at higher frequencies.
2.2. Solar and Space Physics
 PARIS will produce the first ever interferometric very low frequency radio images of the solar corona and of solar transients like coronal mass ejections (CMEs) and the shock fronts associated with the consequent interplanetary disturbances. We note that the Solar Imaging Radio Array (SIRA), a mission aimed at solar and space weather scientific objectives, is currently under study. More information about SIRA can be found at http://sira.gsfc.nasa.gov. The key issues in solar/space physics to be addressed by a spaceborne VLF radio interferometer are as follows.
2.2.1. Imaging of Solar Transients and Studying Their Evolution
 Solar transient phenomena such as solar flares, filament eruptions, and CMEs are manifested by distinct types of nonthermal radio emission. The study of the spatial and temporal evolution of this emission is essential to a better understanding of the Sun-Earth connection. The proposed mission will allow us to image and track these solar-induced disturbances, particularly the CMEs, from the vicinity of the Sun all the way to 1 AU. This requires observing frequencies from tens of kilohertz to tens of megahertz, and hence these measurements can only be made from space. The existing low-frequency observations of the Sun are all made using individual spacecraft. Their strength lies in their ability to provide wide-bandwidth and high spectral and temporal resolution dynamic spectra, but because of their poor spatial resolution, these data cannot provide radio images. A high spatial resolution wideband-imaging instrument with continuous spectral coverage will provide information on the morphology and spatial distribution of emission and will be a fitting complement to the dynamic spectrum studies [Dulk, 2000; Gopalswamy, 2000]. Images of transient radio bursts are also of prime importance for space weather forecasting applications. It is, perhaps, worth noting that the Solar Terrestrial Relations Observatory (STEREO) mission, scheduled for launch in early 2006, will track the centroids of radio emission and will shed some light on the pattern of the emission but will not have true imaging capabilities.
2.2.2. Mapping of Large-Scale Interplanetary Magnetic Field Topology and Interplanetary Density Structures
 The mission will also provide the means for remote sensing of the coronal and interplanetary density and magnetic field structures in the inner heliosphere. For instance, the electron beams producing type III bursts are believed to follow magnetic field lines outward from their source in the corona. Imaging and tracking these bursts over a range of frequencies will help unravel the topology of interplanetary magnetic field lines [White et al., 2003].
2.2.3. Improving Our Understanding of Emission Mechanisms and Particle Acceleration
 The extended dynamic range of such an instrument will make it possible to image both thermal and nonthermal sources simultaneously. Very low frequency images of thermal emission will provide information on coronal holes, streamers, and CME structures at larger heliocentric distances. This information will form a natural complement to and will extend the maps of these structures obtained by white light and X-ray imaging. Images of shock fronts and bursts associated with solar energetic particle events will help improve our understanding of particle acceleration sites and mechanisms [MacDowall et al., 2003].
3. VLF Interferometry
 Although all interferometers share the same mathematical foundations, their practical implementations change considerably with the frequency range of interest. Every few orders of magnitude in wavelength, the problem of designing an interferometer changes in character and essentially evolves into a different problem, with new and different aspects becoming the design drivers. For instance, the detailed designs of an optical interferometer and a high-frequency radio interferometer do not have much in common, even though they implement the same functional blocks. The VLF range (say, 0.1–30 MHz) differs from the usual domain of radio astronomy (say, 0.1–30 GHz) by 3 orders of magnitude. It is reasonable to expect the design considerations for a VLF interferometer to differ from those for interferometers at much higher radio frequencies. Some of these considerations stem from natural causes and others from technological aspects. In this section we identify the key aspects of VLF interferometry which were considered to be the major design drivers.
3.1. VLF Sky
 The most conspicuous feature of the VLF sky is the enormously strong galactic background radiation. Figure 2 shows both the brightness temperature and the specific intensity in the frequency range of interest, as determined by Cane. The specific intensity of the polar galactic background is ∼6 × 10⁵ Jy sr⁻¹ (1 jansky (Jy) = 10⁻²⁶ W m⁻² Hz⁻¹) at 30 MHz, peaks at a value of ∼1.0 × 10⁶ Jy sr⁻¹ close to 3 MHz, and falls to ∼7 × 10⁴ Jy sr⁻¹ by 0.3 MHz. The brightness temperature is in excess of ∼10⁴ K at 30 MHz, comes close to ∼10⁷ K at 3 MHz, and continues to rise more slowly to ∼1.3 × 10⁷ K at 0.3 MHz. To put things in perspective, the polar galactic background brightness temperature at 408 MHz is ∼20 K [Haslam et al., 1982] and reduces to ∼42 mK at 3.8 GHz [Platania et al., 1998].
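The brightness temperature and specific intensity quoted above are tied together by the Rayleigh-Jeans relation Iν = 2kTBν²/c². A minimal sketch of the conversion; the temperature and frequency in the example are illustrative round numbers, not values read off Figure 2:

```python
# Rayleigh-Jeans relation between brightness temperature and specific
# intensity: I_nu = 2 k T_B nu^2 / c^2.
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s
JY = 1e-26           # 1 jansky in W m^-2 Hz^-1

def intensity_jy_per_sr(t_b, nu_hz):
    """Specific intensity (Jy/sr) for brightness temperature t_b (K)
    at frequency nu_hz (Hz), in the Rayleigh-Jeans limit."""
    return 2.0 * K_B * t_b * nu_hz**2 / C**2 / JY

def brightness_temperature(i_jy_sr, nu_hz):
    """Inverse relation: brightness temperature (K) from intensity."""
    return i_jy_sr * JY * C**2 / (2.0 * K_B * nu_hz**2)

# e.g. an illustrative 1e7 K background at 3 MHz:
print(f"{intensity_jy_per_sr(1e7, 3e6):.2e} Jy/sr")
```

The conversion makes plain why the megahertz-range sky is so bright in temperature units: at fixed intensity, TB scales as ν⁻².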
3.2. Large Field of View
 The directivity, or field of view (FOV), of a receptor for electromagnetic waves is characterized by λ/d, where λ is the wavelength of observation and d is the characteristic dimension of the receptor. The Very Large Array (VLA), one of the most successful radio interferometers, has antennas of 25 m diameter and makes about 90% of its observations in the wavelength range from 0.224 to 0.0125 m (1.34–24 GHz) (B. Clark, personal communication, 2004). Measured in units of λ, the diameter of the antennas ranges from ∼112 at the low-frequency end to ∼2000 at the high end, leading to FOVs ranging from about a hundredth down to a thousandth of a radian across. In the VLF band, λ ranges from 1000 m at 0.3 MHz to 10 m at 30 MHz. The large wavelengths and the necessity to deploy the structure in space preclude the possibility of limiting the field of view of the receptors by using apertures many λ in size, at least in the near future. The receptor size is expected to be smaller than the wavelength of operation for practically the entire frequency range. The FOV of individual receptors is hence expected to be very large.
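The aperture-in-wavelengths arithmetic above is easy to reproduce. A short sketch; the 5 m VLF receptor size is an assumed illustrative value, not a PARIS design parameter:

```python
# Rough field-of-view estimate FOV ~ lambda/d (radians) for a receptor
# of size d observing at frequency freq_hz.
C = 2.99792458e8  # speed of light, m/s

def fov_radians(freq_hz, dish_m):
    lam = C / freq_hz
    return lam / dish_m

# VLA 25 m dish at 1.34 GHz: aperture ~112 lambda, FOV ~ 0.009 rad.
print(fov_radians(1.34e9, 25.0))
# An assumed 5 m receptor at 3 MHz (lambda = 100 m): lambda/d = 20,
# i.e. the formula gives no beam confinement at all -- the FOV is
# essentially the whole visible sky.
print(fov_radians(3e6, 5.0))
```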
 At high frequencies, Tsys, the equivalent noise temperature corresponding to the sum of all the contributions to the signal received at the output of an interferometer element, is dominated by Trec, the noise contribution of the receiver electronics. At VLF frequencies, the intense galactic background and the very large FOV imply that Tsys will necessarily be very large and will be dominated by the contribution of the galactic background (Figure 2).
3.3. Three-Dimensional Sampling
 At a given observing frequency ν, the aim of interferometric imaging is to invert the set of measured visibilities Vν(u, v, w) to arrive at the brightness distribution on the sky Iν(l, m). For a phase-tracking interferometer with a small fractional bandwidth (Δν/ν), the two are related by the following expression, which gives the response to spatially incoherent radiation from the far field,

Vν(u, v, w) = ∫∫ A(l, m) Iν(l, m) exp{−2πi[ul + vm + w(√(1 − l² − m²) − 1)]} (dl dm)/√(1 − l² − m²),   (1)
where u, v, and w are the orthogonal components of the baselines. They are measured in units of λ and form a right-handed coordinate system such that u and v are measured in a plane perpendicular to the direction of the phase center, u pointing to the local east and v to the local north. Here l and m are direction cosines measured with respect to the u-v-w coordinate system, and A(l, m) is the antenna beam pattern. If the third term in the exponential can be ignored, then equation (1) reduces to an exact two-dimensional (2-D) Fourier transform relationship [Perley, 1999]. This can be achieved by limiting the FOV to a narrow enough angular region, by either building an aperture large enough to achieve a sufficiently narrow antenna primary beam or choosing to map a sufficiently small part of the antenna primary beam. Most of the existing interferometers above ∼1 GHz operate in this regime, referred to as the small FOV approximation. Some wide-field imaging techniques developed comparatively recently are in use, especially at the lower frequencies. In their present form, these approaches tend to decompose the problem of making a single large image into that of making many small images, each satisfying the small FOV approximation. Because of the very large FOVs in the VLF regime, solutions based on this approach will fall short of the needs, and the full 3-D formalism will need to be employed for the inversion of visibility data. Recently, T. J. Cornwell et al. (W projection: A new algorithm for non-coplanar baselines, submitted to Astronomy and Astrophysics, 2004, hereinafter referred to as Cornwell et al., submitted manuscript, 2004) have presented a very interesting computationally efficient formulation of the problem of 3-D imaging.
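To make the role of the third term in the exponential concrete, the following toy sketch evaluates the measurement equation directly for a sky of a few point sources, with and without the w term. The baseline and source positions are arbitrary illustrative values, and the brute-force sum is of course not how an imaging pipeline would do it:

```python
import cmath
import math

# Direct (slow) evaluation of the 3-D measurement equation: the
# visibility on a baseline (u, v, w), in wavelengths, for a sky of
# discrete point sources.  The w term carries the sqrt(1-l^2-m^2) - 1
# phase that a purely 2-D Fourier inversion neglects.
def visibility(u, v, w, sources):
    """sources: list of (l, m, flux) with l, m direction cosines."""
    vis = 0.0 + 0.0j
    for l, m, flux in sources:
        n = math.sqrt(1.0 - l * l - m * m)
        phase = -2.0j * math.pi * (u * l + v * m + w * (n - 1.0))
        vis += flux * cmath.exp(phase) / n  # 1/n from the sky surface element
    return vis

sky = [(0.0, 0.0, 1.0), (0.3, 0.4, 0.5)]   # one on-axis, one off-axis source
v_full = visibility(100.0, 50.0, 10.0, sky)
v_flat = visibility(100.0, 50.0, 0.0, sky)  # same baseline, w term dropped
print(abs(v_full - v_flat))                 # nonzero: the w term matters off-axis
```

For the on-axis source the w term vanishes (n = 1), so the discrepancy between the two evaluations comes entirely from the off-axis source, which is exactly why wide fields force the full 3-D treatment.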
 For most synthesis-imaging instruments in operation, because of the distribution of the antennas on a near-planar surface and small FOVs, it usually suffices to decompose the baseline vectors into the u and the v components. For a spaceborne VLF interferometer with a 3-D distribution of elements and very large FOVs, it will be necessary to decompose the baselines along the u, v, and w axes. Analogous to conventional ground-based synthesis imaging, where the fidelity of the final image depends on the completeness of the sampling of the u-v plane, for a VLF space array, it will depend on the completeness with which the u-v-w volume is sampled. The constellation configuration chosen for a VLF interferometer must therefore try to achieve a good sampling of the 3-D u-v-w volume, as opposed to the 2-D u-v plane as for most other aperture synthesis interferometers.
3.4. Mapping the Entire Field of View
Equation (1) holds, strictly, only for a monochromatic interferometer. All practical instruments measure visibilities over a finite bandwidth Δν, centered at some frequency ν0, and the data within Δν are treated as if they were at ν0. This imprecision in handling the data leads to a gradual decrease in the coherence of the signal measured at two elements with increasing Δν, distance of the source from the phase center, and baseline length. For a point source, the reduction in peak response (I/I0, where I0 is the unsmeared peak response) due to fractional bandwidth and baseline length can be conveniently parameterized in terms of a dimensionless parameter β [Bridle and Schwab, 1999], which can be expressed as

β = (Δν/ν0) θ0,   (2)
where θ0 is the distance of the point source from the phase center measured in units of the half-power beam widths for a given baseline (∼λ0/D, where λ0 is the wavelength corresponding to ν0 and D is the length of the baseline). Substituting for ν0, β can be expressed as Δν D θ0/c. When β = 1, the peak response decreases to ∼0.8 and further reduces to ∼0.5 when β = 2.
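A quick numerical check of β, in the form Δν D θ0/c, for representative values: a 3 kHz channel on a 100 km baseline, with an assumed 30° offset from the phase center (illustrative numbers, not design values):

```python
import math

# Bandwidth-smearing parameter beta = dnu * D * theta / c, where theta
# is the source offset from the phase center in radians; equivalently
# (dnu/nu0) * theta0 with theta0 in units of the fringe spacing lambda0/D.
C = 2.99792458e8  # speed of light, m/s

def beta(dnu_hz, baseline_m, theta_rad):
    return dnu_hz * baseline_m * theta_rad / C

b = beta(3e3, 100e3, math.radians(30.0))
print(b)  # ~0.52: below 1, so the peak response stays above ~0.8
```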
 A necessary requirement for synthesis imaging is that all the measured visibilities used to reconstruct the sky brightness distribution receive coherent emission from the same physical patch of the sky. At higher frequencies this is usually ensured by the FOV restriction imposed by the antenna aperture, which limits the FOV to an angular patch smaller than the angular scale at which visibilities from longest baselines begin to lose significant coherence. In the VLF regime, as discussed in section 3.2, the FOV of the receptors is necessarily very large, and the interferometer baselines are expected to span a wide range, while the bandwidths over which individual visibilities are measured are expected to be a constant. This leads to visibilities from different baselines receiving correlated emission from sky patches centered at a common spot but differing in size. A reasonable solution to the problem of matching the patches from which correlated emission is received is to measure visibilities over sufficiently narrow bandwidths so that even the longest baseline receives correlated flux from the entire FOV.
 At high frequencies, for the most part, the background emission is so weak that the sky can be regarded as cold, and the number density of radio sources is such that the sky can be considered to be largely empty. Hence the common practice of not imaging the entire FOV but only the small parts of it from which emission is expected is not only acceptable but also prudent. At VLF frequencies, the sky background is extremely intense. This implies that in order to obtain the best image fidelity and dynamic range performance, it would be necessary to image the entire region from which correlated flux is received, which, in the present case, corresponds to the entire FOV.
 Mapping a large primary beam also requires caution to be exercised on another front. The geometric delay τg suffered by a signal varies widely with arrival direction, from 0 for directions perpendicular to the baseline to D/c s for directions along it. The process of cross correlation, however, allows for correction of only a single geometric delay, τg′. If the residual geometric delay τg − τg′ exceeds 1/νch for some directions, where 1/νch is the coherence time of a band-limited signal of bandwidth νch, the signal received from these directions will lose correlation. This effectively limits the field of view which can be mapped by the interferometer. Such a situation can be avoided by correlating the signal over sufficiently narrow channel widths so that the inequality mentioned above is never satisfied. As an example, a baseline length of 100 km requires that the channel width be ≤3.0 kHz.
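The channel-width limit follows directly from requiring 1/νch ≥ D/c, i.e., νch ≤ c/D. A one-line sketch reproducing the 100 km example:

```python
# Widest channel that keeps the whole sky coherent on a baseline of
# length D: the residual geometric delay can be as large as D/c, so we
# need 1/nu_ch >= D/c, i.e. nu_ch <= c/D.
C = 2.99792458e8  # speed of light, m/s

def max_channel_width_hz(baseline_m):
    return C / baseline_m

print(max_channel_width_hz(100e3))  # ~3.0 kHz for a 100 km baseline
```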
3.5. Telemetry Considerations
 The telemetry bandwidth available to the VLF interferometer to transmit the data to the Earth is an important design driver. The Astronomical Low-Frequency Array (ALFA) study [Jones et al., 2000] planned to use a sustained 8 Mbit s⁻¹ telemetry bandwidth from an array of spacecraft located 10⁶ km from the Earth, using the existing 11 m subnet of the NASA Deep Space Network. On the basis of their study, we expect bandwidths in the range 5–10 Mbit s⁻¹ to be available in the near future to a VLF interferometer.
 All the earlier proposals for space-based VLF interferometers chose to transmit the Nyquist sampled time series from every receiver on each of the spacecraft to the Earth. This classical approach offers the advantages of simpler spacecraft architectures and a homogeneous array which provides ample redundancy. The principal disadvantage, however, is that because of the very voluminous nature of the data, these designs can only provide rather limited observation bandwidths. The ALFA proposal, for instance, provided a maximum bandwidth of 125 kHz, using an 8 Mbit s⁻¹ telemetry downlink and 1-bit quantization for the data stream [Jones et al., 2000].
 In order to preserve the expected dynamic range of ∼70 dB in the input signal [Bougeret, 1996], 12-bit sampling is required. This large number of bits per sample further reduces the available bandwidth of observation by close to an order of magnitude. A larger bandwidth of observation than what the classical approach can provide in the near future would undoubtedly be very desirable. There are only two ways to achieve this: either simply wait for telemetry technology to progress sufficiently to meet the needs of VLF interferometry or perform some onboard data processing to reduce the data volumes to be transmitted to the Earth. For an interferometer, the data products which can be meaningfully averaged are the visibilities. For most ground-based instruments, the process of computing visibilities is performed by dedicated custom-made hardware of significant complexity. The hardware capabilities required for onboard computation of visibilities remained significantly beyond the state of the art until recently. Therefore, in spite of the low bandwidth disadvantage, transmitting the Nyquist sampled time series had been the only feasible option for the earlier VLF space interferometer designs.
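The arithmetic behind this bandwidth penalty can be sketched as follows. The spacecraft count and link rate below are illustrative values patterned on the ALFA numbers quoted above, ignoring framing overheads and polarization:

```python
# Back-of-the-envelope downlink budget for the classical approach of
# transmitting Nyquist-sampled raw voltages from every element.
def raw_rate_bps(n_craft, bandwidth_hz, bits_per_sample):
    # real-valued Nyquist sampling: 2 samples per hertz of bandwidth
    return n_craft * 2 * bandwidth_hz * bits_per_sample

LINK_BPS = 8e6  # assumed 8 Mbit/s downlink, as in the ALFA study

# 16 spacecraft, a 125 kHz band, 1-bit samples: fits within the link.
print(raw_rate_bps(16, 125e3, 1))              # 4.0 Mbit/s
# The same band at 12 bits per sample exceeds the link severalfold,
# which is the order-of-magnitude penalty described in the text.
print(raw_rate_bps(16, 125e3, 12) / LINK_BPS)  # 6.0
```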
 Judging from the current trends in industry, the computational capabilities of space-qualified hardware are expected to increase at a rate considerably larger than that at which the telemetry bandwidths are expected to grow. We therefore consider it judicious to assess the option of onboard visibility computation and subsequent time and frequency averaging to reduce the data volume and to increase the available bandwidth of observation.
3.6. Propagation Effects
 At VLF frequencies, the inhomogeneous, turbulent, and magnetized plasmas of the ISM and the interplanetary medium (IPM) act as media with refractive index fluctuations in both space and time. The propagation of VLF radiation from distant radio objects through these media considerably modifies the incident wave fronts. The implications of the most relevant of these propagation effects are enumerated below.
3.6.1. Angular Broadening
 The passage of VLF radiation through ISM and IPM results in significant apparent angular broadening of compact sources [Rickett and Coles, 2000]. This angular broadening effectively imposes a limit on the finest resolution with which one can expect to study the universe, irrespective of the baseline lengths involved. The angular broadening scales as the square of the wavelength of the radiation and is hence most severe at the VLF range. Figure 3 shows the angular broadening of a point source due to IPM and ISM as a function of frequency. The resolution afforded by baselines of 50–100 km is also shown to provide a point of reference.
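Since the broadening scales as λ² (i.e., as ν⁻²), a reference value at one frequency fixes it at all others. A sketch of the scaling; the normalization is deliberately left as a hypothetical input rather than a value read off Figure 3:

```python
# Scatter broadening scales as lambda^2, i.e. as nu^-2.  Given a
# (hypothetical) reference broadening theta_ref at frequency nu_ref,
# extrapolate to another frequency.
def broadened_size(theta_ref, nu_ref_hz, nu_hz):
    return theta_ref * (nu_ref_hz / nu_hz) ** 2

# Halving the observing frequency quadruples the broadening:
print(broadened_size(1.0, 30e6, 15e6))  # 4.0 (in units of theta_ref)
```

This steep scaling is why, below a few megahertz, scatter broadening rather than baseline length sets the finest usable resolution.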
3.6.2. Temporal Broadening
 The scattered rays which lead to angular broadening of compact sources travel through different paths to the observer. This leads to a spread in the travel time of the signal, which results in smearing of transient signals like pulses from a pulsar. Because of the much larger distances involved, interstellar temporal broadening is much more severe (∼5 years at 1 MHz) than interplanetary broadening (0.1 s at 1 MHz) [Woan, 2000]. In practice, this limits the studies which rely on measurements of the arrival time of the radiation, for instance, studies of pulsars and some transient studies.
3.6.3. Depolarization of Radiation
 The magnetized nature of the ISM and the IPM and their large inhomogeneities combined with the fact that the magnitude of Faraday rotation is proportional to λ2 conspire to make this an important effect for the VLF range. The multipath propagation from source to observer, with possibly a different Faraday rotation for each path, presents a fundamental limitation to the detection of a linearly polarized signal.
 Circular polarization, however, is not affected by Faraday rotation, though the intrinsic luminosity of the cyclotron processes which produce it is much lower than that of synchrotron processes. A detailed discussion of polarization effects can be found in work by Linfield.
3.6.4. Absorption Effects
 The free-free absorption is expected to render the ionized ISM optically thick at some turnover frequency. This frequency will be a function of both the emission measure of the medium and its electron temperature. The warm ionized medium is expected to turn optically thick for path lengths of ∼2 kpc at 3 MHz. As the galactic disc is ∼1 kpc in thickness, the sky will appear to be foggy in all directions at frequencies of a few megahertz. It will be possible to see out of the galactic plane at higher frequencies, and the appearance of the plane itself will be dominated by the mottling due to the presence of discrete regions of high electron density. The subject is discussed in considerable detail by Dwarakanath.
3.6.5. Reflection, Refraction, and Scattering Close to the Sun
 Because of the large gradient in the electron density in the high solar corona, a variety of unusual reflection and refraction phenomena take place in this region. An increasingly large fraction of the solar corona becomes inaccessible to the radio waves at frequencies below ∼20 MHz as ray paths get reflected back into the IPM. Bracewell and Preston give an in-depth discussion of this and a few other interesting phenomena. Close to the Sun, there is considerable evidence, from multispacecraft studies of solar bursts below 1 MHz, for the existence of anomalous beaming and large angular scattering [Lecacheux et al., 1989]. We note that of the effects mentioned here, only scatter-broadening effects impact the design of a VLF interferometer; all the others limit the science which can be done at these frequencies independent of the design details.
3.7. Radio Frequency Environment
 A look at the frequency allocations chart for the United States shows that the entire VLF band from ∼9 kHz onward is allocated to specific users, with the exception of 13.36–13.41 and 25.55–25.67 MHz, which are reserved for radio astronomy. The spectral allocation situation is expected to be similar in other parts of the world. Erickson concluded in his study that the radio frequency interference (RFI) in the near-Earth environment was too strong to allow sensitive VLF interferometric observations. Measurements from the Wind satellite, located ∼2 × 10⁵ km from the Earth, in the frequency range 1–15 MHz reveal that in spite of the geometric dilution and the attenuation offered by the ionosphere, RFI is often 5 dB stronger than the galactic background emission [Kaiser et al., 1996]. A strong correlation between the spectral bands allocated to commercial shortwave stations and the RFI-affected parts of the spectrum was found. However, the 20 kHz–wide spectral channels of the onboard instrumentation did not provide sufficient resolution to routinely identify individual broadcast services, which are a few kilohertz wide and are spaced ∼5 kHz apart. It was also evident that commercial shortwave transmission does not account for all the observed RFI. Given the grossly insufficient coverage provided by the protected bands, a VLF radio interferometer will inevitably have to operate in a rather unfriendly RFI environment. It will hence be necessary to develop an RFI mitigation strategy.
3.8. Nonstationarity of the Sky
 Often, the instantaneous sampling of the u-v plane achieved by an interferometric array falls short of the requirements for a good image of the desired part of the sky and does not provide sufficient sensitivity. Ground-based instruments rely on rotation of the Earth to improve the u-v coverage, and observing for longer durations improves the sensitivity. The spaceborne VLF interferometer will similarly rely on the motion along its orbit and changes in baselines due to relative velocities between the constellation elements to improve its sampling of the u-v-w volume and the sensitivity. Using visibilities collected over a period of time to construct a single map of the sky implicitly assumes time stationarity of the sky over the period of observation. On the timescales of operation of the VLF interferometer mission, the evolution of most astronomical sources is a nonissue, and we may choose to ignore the low-frequency variability, both intrinsic and due to propagation effects. However, the apparent positions of the solar system objects, with respect to the more distant objects, change rapidly. The position of the Sun, for instance, changes by ∼1° per day. This implies that the sky sampled by the interferometer at different epochs corresponds to different realizations of the sky, differing in the location of the solar system objects with respect to the more distant ones. The active Sun, Earth, and Jupiter are among the stronger discrete sources in the VLF range, and their emissions have an elaborate frequency-time structure. In addition, the apparent angular sizes of sources will vary in time as their angular distance from the Sun changes (see section 3.6 and Figure 3).
4. Design Concept
 In order to keep the mission economically feasible, a microsatellite-based approach has been chosen so that all the interferometer elements can be deployed using a single launch vehicle. The interferometer is envisaged to comprise a constellation of about 16 free-floating three axis–stabilized microsatellites. Each member of the constellation will serve as an interferometer element. As mentioned in section 3.5, we investigate the possibility of an onboard correlator. This has the consequence that not all the spacecraft will be identical. One of them will receive the data streams from all the others and will perform onboard digital signal processing (DSP) to reduce the data volumes to be transmitted to the Earth. We refer to this spacecraft as the mother spacecraft. In order to avoid a single point of failure in the design, it will be necessary to equip a few of the satellites, say, three, to take up the role of the mother spacecraft. Some details of a design based on this concept follow, along with brief justifications of the choices made.
4.1. Frequency Coverage
 The mission has been designed to cover the frequency range 0–40 MHz. The frequency range useful for radio astronomy is limited at the lower end by the plasma frequency of the IPM, known to be a few tens of kilohertz at 1 AU. Analogous to the ionospheric effects at low radio frequencies (a few tens of megahertz), interferometric measurements close to the IPM plasma frequency are so badly corrupted by propagation effects that they are no longer useful for studying distant radio sources. It may be possible to conduct meaningful interferometric observations down to ∼0.1 MHz. At the high-frequency end, we expect that by ∼30 MHz it will be scientifically more rewarding and much more economical to use the more powerful and versatile upcoming ground-based low-frequency instruments like LOFAR. Nonetheless, we strongly advocate an overlap in the frequency ranges covered by the space array and the ground-based instruments and suggest 40 MHz as the upper frequency limit for the space array (Figure 1). The overlap in frequency range will help in calibration and will allow the space array to benefit from the information about the sky obtained by the ground-based array. A reliable and detailed model of the low-frequency sky obtained by the ground-based instruments will provide a firm anchor point from which to bootstrap to lower frequencies. We note that the instrument itself will be designed to work below the plasma frequency of the IPM, and the measurements in this part of the spectrum can serve as local plasma measurements which can be used to serve different scientific objectives. Their discussion lies beyond the scope of this paper.
4.2. Receiving Elements
 As mentioned in section 3.2, for a VLF interferometer, the long wavelengths involved, the constraints of a spaceborne mission, and a microsatellite-based architecture limit the choice of elements to short dipole antennas. It is very difficult to provide optimal impedance matching for short dipoles over large bandwidths. Most of the power incident on them is hence rejected, rendering them unsuitable for most radio astronomy applications. However, the intense galactic background emission in the VLF range and the enormously wide primary beam of the short dipole ensure that the noise on the measured signal is dominated by that due to the galactic background emission and not the receiver noise [Manning, 2000].
 The dipoles could be based on the monopole stacer design used for the WAVES instrument on board the Wind spacecraft [Bougeret et al., 1995] or the ones designed for STEREO WAVES (SWAVES) on board STEREO. These antennas are 6 m in length, and the noise contribution of the antenna itself is less than that from the galactic background in a frequency range from ∼400 kHz to ∼40 MHz, providing a good match to the needs of a VLF interferometer. The choice of the interferometer element specifies the FOV, or the primary beam size, for the interferometer; for a short dipole it is ∼8π/3 sr, or ∼27.5 × 10³ deg². Each spacecraft will be equipped with three mutually orthogonal short dipoles in order to record all the information in the electromagnetic field incident on the spacecraft.
 The use of three mutually orthogonal dipoles offers some advantages over the conventional use of two mutually orthogonal ones: Computing all nine (3 × 3) cross correlations per baseline allows one to construct Stokes parameters to characterize the polarization of radiation received from any arbitrary direction [Carozzi et al., 2000], as opposed to being limited to directions close to the perpendicular to the plane defined by the two dipoles; being equipped with the additional ability to compute all nine autocorrelations, the use of three orthogonal dipoles permits individual spacecraft to be used for direction finding of polarized sources [Ladreiter et al., 1995], a potentially useful feature for initial deployment of the constellation and for calibration; and finally, the use of independent data from a third dipole can be considered as an increase in the effective collecting area or an effective reduction in observation time needed to achieve a given sensitivity.
4.3. Signal Path and Onboard Digital Signal Processing
 The signal from each of the three short dipoles on every constellation element is fed via a low-noise amplifier to an analog to digital converter (ADC). The ADC Nyquist samples the signal at 80 MHz to cover the entire radio frequency (RF) range of interest. The input signal must be sampled with sufficient bit depth to preserve its fidelity. Bougeret suggested that for a short dipole, the dynamic range of the input signal is expected to be 60–70 dB. We aim for a 70 dB dynamic range, which requires sampling using 12 effective bits.
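The 12 effective bits quoted above can be checked against the standard ideal-quantizer relation, SNR ≈ 6.02N + 1.76 dB; this small sketch assumes an ideal ADC (it ignores clock jitter and nonlinearity, which reduce effective bits in practice):

```python
import math

def effective_bits(dynamic_range_db):
    """Minimum bit depth N satisfying 6.02*N + 1.76 >= the target
    dynamic range in dB (ideal-quantizer approximation)."""
    return math.ceil((dynamic_range_db - 1.76) / 6.02)

print(effective_bits(70))  # -> 12, matching the 12 effective bits quoted
```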
 The primary guiding principle for the onboard digital signal processing (DSP) approach is to distribute it to the largest extent possible, in order to avoid a buildup of DSP requirements at some later stage in the signal chain. The three digitized time series on each of the spacecraft will be Fourier transformed in real time. The spectral width of the frequency channels is determined by the length of the longest baseline and the requirement of imaging the entire primary beam (section 3.4). A maximum baseline of ∼100 km (section 3.6 and Figure 3) and a requirement that the reduction in peak response due to decorrelation loss be <20% (β < 1; see section 3.4) for near-4π sr fields of view lead to a bandwidth of ∼1 kHz for the width of the spectral channels.
 The DSP required to achieve this spectral resolution can be implemented as a two stage fast Fourier transform (FFT) engine. As an illustration, the first stage takes a 512 point real transform of the 40 MHz–wide signal and yields a 256 point complex spectrum with a spectral resolution of 156 kHz. A second stage performs a 128 point complex Fourier transform on a subset of these 156 kHz spectral channels, leading to 1.22 kHz–wide spectral channels. The number of channels on which the second-stage FFT is performed will depend on the computing power available on board and the intraconstellation telemetry bandwidth limitations. For an 80 MHz sampling rate, performing the second-stage FFT on 25% of the spectral channels delivered by the first stage provides an RF bandwidth of 10 MHz and leads to a requirement of ∼1 × 10⁹ complex multiplications and additions per second (CMACS) per polarization per spacecraft. As the intraconstellation telemetry bandwidth is expected to be the leading bottleneck, every attempt is made at its most judicious utilization. A selected set of the 12 bit channels from the second-stage FFT will be resampled using 1 (or at most 2) bits before transmission to the mother spacecraft. The fraction of channels from the second FFT stage which are finally transmitted to the mother spacecraft and the number of bits used for quantization will depend on the available intraconstellation telemetry bandwidth.
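The two-stage scheme can be sketched in NumPy. This is an illustrative model only, assuming the stage lengths given in the text; a real implementation would add windowing, overlap, and fixed-point arithmetic, none of which are specified here:

```python
import numpy as np

FS = 80e6           # ADC sampling rate (Hz)
N1, N2 = 512, 128   # first- and second-stage FFT lengths

def two_stage_channelize(x, selected):
    """Illustrative two-stage FFT channelizer.
    x: real-valued time series whose length is a multiple of N1*N2.
    selected: indices of first-stage (156.25 kHz wide) channels to
              refine into 1.22 kHz wide channels."""
    blocks = x.reshape(-1, N1)
    # Stage 1: 512-point real FFT -> 256 coarse channels per block.
    stage1 = np.fft.rfft(blocks, axis=1)[:, :N1 // 2]
    # Stage 2: for each selected coarse channel, FFT along time over
    # N2 successive coarse spectra -> 128 fine channels each.
    coarse = stage1[: (stage1.shape[0] // N2) * N2, selected]
    coarse = coarse.reshape(-1, N2, len(selected))
    fine = np.fft.fft(coarse, axis=1)
    return stage1, fine

coarse_width = FS / N1          # 156.25 kHz per first-stage channel
fine_width = coarse_width / N2  # ~1.22 kHz per second-stage channel
```

The derived channel widths, 156.25 kHz and ≈1.22 kHz, reproduce the figures quoted in the text.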
 The mother spacecraft will receive the Nyquist sampled spectral data from all the constellation members and will compute the autocorrelations and cross correlations. The resulting visibilities will be averaged over suitable intervals in frequency and time. Assume that 25% of the available 1.22 kHz spectral channels are resampled using 1 or 2 bits and are transmitted to the mother spacecraft. This leads to a requirement of ∼3 × 10⁹ CMACS for the computation of a set of nine (3 × 3) cross correlations and self-correlations for each baseline. Keeping in mind that the operations on the individual constellation members are done on 12 bit data and most of those on the mother spacecraft are done on 1–2 bit data, the total onboard computing requirements for the mother spacecraft will be about twice those for the other constellation members for the 2.5 MHz RF bandwidth provided by the above design. The correlator itself will be a flexible and reconfigurable device. It will allow a range of combinations of spectral and temporal resolutions and the number of baselines for which the correlations are computed, while keeping the total throughput from the correlator constant and respecting the constraint of available telemetry bandwidth to the Earth. For instance, it will be possible to get higher temporal and/or spectral resolution at the cost of decreasing the bandwidth of observation and/or the number of baselines used. It will be desirable for the correlator to have the ability to respond to self-generated and external triggers in order to switch to an appropriate temporal and spectral resolution mode in response to an event. The averaged visibilities will finally be transmitted to the Earth, where the rest of the analysis will take place. The most demanding science requirements for the time resolution of visibility data, down to a small fraction of a second, will come from studies of intense transients like solar bursts.
In addition to the science requirement, the temporal averaging extent of visibilities will also depend on the orbit-dependent maximum duration for which the constellation baselines can be considered unchanged (section 4.7) and the available telemetry bandwidth (section 4.4) and could vary from a fraction of a second to tens of seconds. The spectral averaging extent will similarly depend on the science requirement and the available telemetry bandwidth resources.
 At first sight, the tasks of digitizing a 40 MHz–wide band and Fourier transforming it into kilohertz-wide spectral channels seem too demanding for a microsatellite. Present-day technology, however, comes very close to meeting these requirements. According to its data sheet, the best performing space-qualified 12 bit ADC from Analog Devices available in July 2003, the AD9042, can sustain a maximum sampling rate of 41 MHz, has a typical power dissipation of 595 mW, and provides a spurious-free dynamic range of 80 dB over 20 MHz. It seems likely that by the time it is required, the available technology will allow the signal to be oversampled in order to recover some of the losses in the digitization process. The QPro Virtex-II 1.5 V family of space-qualified field programmable gate arrays with up to 6 × 10⁶ gates has been available from Xilinx since January 2004. For the example configuration discussed here, just one of these devices per spacecraft will comfortably be able to handle the DSP requirements for all three dipoles. Preliminary studies indicate that it may even be possible to accommodate the additional computing requirements of the mother spacecraft on the same device, or at most another similar device may be required. Meeting the onboard DSP requirements of a VLF interferometer in the near future does not seem to pose a significant problem.
4.4. Telemetry and Bandwidth of Observation
 Telemetry issues can be subdivided into those relating to intraconstellation telemetry and telemetry from the mother spacecraft to the Earth. We first discuss the former. As mentioned in section 3.5, the limitation on the telemetry bandwidth to the Earth led us to consider the approach of reducing data volume on board. However, it is impossible to reduce the data rates to below Nyquist requirements before the computation of visibilities. As a consequence, all constellation members transmit Nyquist rate data streams to the mother spacecraft. It is instructive to compute this number per megahertz of RF bandwidth. For each constellation member transmitting data to the mother spacecraft, this amounts to 2 × 10⁶ × Ndipoles × nbits = 6 Mbits s⁻¹ MHz⁻¹, with Ndipoles, the number of dipoles per spacecraft, of 3 and nbits, the number of bits per sample, of 1. For a constellation of Ncraft spacecraft, the mother spacecraft receives data from Ncraft − 1 spacecraft simultaneously. For the 16 element constellation under consideration, this implies a rate of 90 Mbits s⁻¹ MHz⁻¹. The telemetry rate grows by an order of magnitude to accommodate the 10 MHz RF bandwidth which the onboard DSP can comfortably deliver. This design therefore has intraconstellation telemetry requirements in the range of 0.1–1 Gbits s⁻¹.
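The intraconstellation rates quoted above follow directly from the Nyquist requirement; a small sketch of the arithmetic (the function names are ours, not from the paper):

```python
def per_craft_rate(rf_bw_hz, n_dipoles=3, n_bits=1):
    """Nyquist-rate data stream (bits/s) from one constellation member:
    2 samples per Hz of real bandwidth, times dipoles, times bits."""
    return 2 * rf_bw_hz * n_dipoles * n_bits

def mother_ingest_rate(rf_bw_hz, n_craft=16, n_dipoles=3, n_bits=1):
    """Aggregate rate (bits/s) into the mother spacecraft, which
    receives from the n_craft - 1 other members simultaneously."""
    return (n_craft - 1) * per_craft_rate(rf_bw_hz, n_dipoles, n_bits)

print(per_craft_rate(1e6) / 1e6)       # 6.0 Mbit/s per MHz of RF bandwidth
print(mother_ingest_rate(1e6) / 1e6)   # 90.0 Mbit/s per MHz, 16 craft
print(mother_ingest_rate(10e6) / 1e6)  # 900.0 Mbit/s for the 10 MHz case
```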
 The telemetry requirements for transmission of data from the mother spacecraft to the Earth depend on the spectral and temporal averaging performed on board. The amount of data to be transmitted to the Earth per second is given by the following expression:

Data rate = [Ncraft(Ncraft − 1)/2] × Ncorr × Nbits × (ΔνRF/νch) × 1/(Nchavg τint),
where the first term gives the number of baselines for a constellation comprising Ncraft spacecraft, Ncorr is the number of correlations computed for each baseline, Nbits is the number of bits used to represent each complex visibility, ΔνRF is the RF bandwidth for which visibilities are computed, νch is the width of each of the spectral channels, τint is the integration time, and Nchavg is the number of spectral channels which are averaged over. For a ΔνRF of 1 MHz, using Ncraft of 16, Ncorr of 9, Nbits of 16, and νch of 1.22 kHz and averaging over 10 spectral channels for 10 s leads to a data rate of 142.8 kbits s⁻¹. For a 10 MHz RF bandwidth, it grows to a modest 1.43 Mbits s⁻¹, and the entire RF bandwidth accessible to the instrument, 40 MHz, requires only 5.71 Mbits s⁻¹. The vast reduction in data volume achieved by onboard data processing is apparent on comparing the flow rates in and out of the mother spacecraft or by recalling that the ALFA design required 8 Mbits s⁻¹ of telemetry for an RF bandwidth of 125 kHz. The huge reduction in telemetry requirements to the Earth offers another benefit as well. The fact that the data rate from earlier mission designs was essentially limited by the available telemetry implied that observations could be made only for the duration of the telemetry downlink, requiring a 24 hour downlink for continuous observations. With the large reduction in telemetry requirements, it might now become possible to reduce the downlink duty cycle without compromising the observing duration.
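The data-rate expression described in the text can be sketched as a function. The baseline count Ncraft(Ncraft − 1)/2 (cross-correlations only, excluding autocorrelations) is our assumption; with it, the sketch reproduces the quoted 142.8 kbits s⁻¹ figure to within a few percent, the residual presumably being a difference in rounding:

```python
def downlink_rate(n_craft=16, n_corr=9, n_bits=16,
                  rf_bw_hz=1e6, ch_width_hz=1220.7,
                  n_ch_avg=10, t_int_s=10.0):
    """Visibility data rate to Earth (bits/s) for the quoted example:
    baselines * correlations * bits * averaged channels / integration."""
    n_baselines = n_craft * (n_craft - 1) // 2   # assumed: cross-corr only
    n_channels = rf_bw_hz / ch_width_hz
    return n_baselines * n_corr * n_bits * n_channels / (n_ch_avg * t_int_s)

rate = downlink_rate()  # ~1.4e5 bits/s, close to the quoted 142.8 kbit/s
```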
 The RF bandwidth over which the visibilities are finally computed will be determined by the most limiting bottleneck in the data path. In view of the large intraconstellation telemetry requirements of this design, we expect it to be the most limiting resource. In order to make the most judicious use of this scarce resource, the 12 bit spectra available at individual spacecraft are resampled using 1 bit before transmission to the mother spacecraft, as mentioned in section 4.3.
 The final sensitivity achieved by the design will depend on the bandwidth available for intraconstellation telemetry. The design provides a theoretical point source sensitivity of 5.6 and 2.0 Jy at 3 and 30 MHz, respectively, for 1 MHz of bandwidth and 1 min time integration.
 It can be argued that this design simply moves the telemetry bottleneck from the constellation-Earth telemetry segment to the intraconstellation telemetry segment. This design, however, does reduce the distance over which high-bandwidth telemetry is required from the orbital distance of the constellation (∼10⁶ km; see section 4.7) to the dimensions of the constellation (≤100 km; see section 4.5), making it an inherently more manageable problem. The data volume cannot be reduced below Nyquist requirements before the computation of visibilities, except by reducing the RF bandwidth covered. We believe that the primary reason for this bottleneck is that this functionality was never needed until now, not that it is intrinsically a difficult problem to solve. The space industry is now enthusiastically considering formation-flying missions. As more missions involving multiple spacecraft needing to communicate with one another and exchange information in real time come up, this requirement will be addressed, and suitable technological solutions will emerge.
4.5. Constellation Configuration
 The spatial configuration of the constellation needs to be tuned to the needs of VLF interferometry (section 3). It must take into account the nature of the VLF sky, the scientific objectives, and the engineering constraints. For instance, the angular resolution of the constellation will be limited in most directions by the angular broadening due to IPM and ISM beyond baselines of ∼80–100 km (Figure 3); in order to be sensitive to the intense large angular-scale solar and galactic background emission, short baselines ranging from a few λ to a fraction of λ are needed, and the near-isotropic beam and the requirement to simultaneously map the entire FOV require a very good u-v-w coverage. It is not entirely clear if it is more advantageous to aim for a complete and uniform coverage or to compromise on completeness in favor of a Gaussian falloff in the u-v-w coverage density. In this paper, we have chosen to aim for a complete and uniform u-v-w coverage. A promising configuration for achieving this was presented in the ALFA mission proposal [Jones et al., 2000]. The spacecraft were distributed on a spherical surface in a pseudorandom manner while respecting a minimum separation constraint between the nearest neighbors. The u-v-w coverage provided by such a configuration, referred to as an Unwin sphere in the literature, is remarkably uniform and isotropic in nature and can provide good 4π synthesis imaging capabilities.
 Ideally, the resolution provided by a VLF interferometer should be close to the limit set by scatter broadening in all directions. However, the angular broadening due to scattering by the IPM is a strong function of the angular distance from the Sun for elongations <90°. Close to the Sun, the angular broadening due to the IPM is so large that baselines larger than ∼5–10 km are expected to completely resolve out the solar emission. In the directions far from the Sun, even a 100 km baseline might not be scatter-broadening limited. This introduces a strong anisotropy in the VLF synthesis-imaging resolution requirements.
 Given the limited number of instantaneous baselines, for a mission with significant solar science objectives, it would be desirable to have all of them be sensitive to solar emission. For an Unwin sphere–type of constellation this suggests a maximum diameter of ∼8 km for the sphere, leading to a resolution of 43′ at 3 MHz. (A baseline of 17.2 km provides a resolution of ∼1° at 1 MHz.) In view of the anisotropy in the angular resolution requirements, it is worthwhile to consider the possibility of incorporating a corresponding anisotropy in the Unwin sphere configuration. In order to be able to provide higher angular resolution in other directions while keeping all the projected baselines, as seen from the Sun, sufficiently small, we propose to distribute the spacecraft in a quasi-random manner on a prolate spheroid, respecting a similar minimum separation constraint as the Unwin sphere (Figure 4). The corresponding cigar-shaped structure could be ∼80 km in length and ∼8 km across at its center and oriented such that it always points in the direction of the Sun. Rather than regarding the spacecraft as being placed on a geometric surface, it is more appropriate to regard them as being placed within a cigar-shaped shell of finite thickness. Over a period of time, as the constellation goes around the Sun on its orbit, the long baselines which lie along the length of the cigar will sweep through a range of orientations, providing a good coverage of the u-v-w volume needed for high-resolution imaging of the celestial sphere.
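The resolutions quoted above follow from the diffraction relation θ ≈ λ/B; a quick check of both figures:

```python
import math

C = 2.998e8  # speed of light, m/s

def resolution_arcmin(baseline_m, freq_hz):
    """Diffraction-limited fringe resolution lambda/B, in arcminutes."""
    lam = C / freq_hz
    return math.degrees(lam / baseline_m) * 60

print(round(resolution_arcmin(8e3, 3e6)))     # 43 (arcmin at 3 MHz, 8 km)
print(round(resolution_arcmin(17.2e3, 1e6)))  # 60 (~1 deg at 1 MHz, 17.2 km)
```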
 An obvious limitation of this configuration is that all the long baselines lie in the ecliptic plane. The only way to remedy this is by removing some of the spacecraft from the cigar and deploying them such that they provide long baselines perpendicular to the ecliptic plane. Removing even a few spacecraft from the cigar reduces the number of useful baselines for solar observations considerably. Given the limited number of interferometer elements available, it is difficult to meet all these demands simultaneously. Another limitation of the configuration is that it excludes the possibility of making high-resolution observations in the direction least affected by the interplanetary scattering, the anti-Sun direction. With this configuration, the highest-resolution observations are always obtained toward directions at elongations of 90°.
 An alternative approach to resolving the conflicting requirements of solar and astronomical imaging is, rather than trying to find imperfect compromise solutions for both sets of objectives simultaneously, to divide the mission duration between the two and meet the requirements of only one of them at a time. An Unwin sphere whose radius slowly increases from, say, ∼5 to ∼80 km over the mission duration is a good solution for this approach. At small diameters, the constellation is capable of fulfilling the solar objectives, and as the diameter grows, its ability to do solar science diminishes. The data gathered during the small-diameter phase will, of course, be useful for astrophysical objectives as well, if a satisfactory solution to the problem of nonstationarity of the sky can be found (section 3.8). The gradual increase in the constellation radius will also provide better u-v-w coverage.
 A detailed study examining the imaging characteristics of different configurations and their compatibility with different scientific objectives is needed to arrive at a suitable constellation configuration for a VLF interferometer. As has been mentioned in section 3.4, the width of the spectral channel is related to the maximum baseline in the configuration. The current choice of ∼1 kHz–wide spectral channels allows for maximum baseline lengths of ∼100 km, beyond which decorrelation losses may be considered significant (section 4.4). If the final configurations are much smaller in size, the spectral widths of the frequency channels can be increased, leading to an increased RF bandwidth coverage while using the same telemetry bandwidth between the mother spacecraft and the Earth.
4.6. Tackling the Radio Frequency Interference
 As discussed in section 3.7, RFI will be a dominant issue for near-Earth orbits and will remain an issue to contend with even for the far-Earth orbits. There are two aspects of RFI seen by space-based instruments, which make the problem different in character from the one faced by ground-based instruments.
 1. A space-based instrument benefits from the 1/r2 geometric dilution of the RFI intensity, where r is the distance between the Earth and the spaceborne interferometer.
 2. Rather than being immersed in a sea of RFI, as on the Earth, a space-based instrument sees RFI coming from a specific direction in the sky, that of the Earth. To provide a point of reference, from a distance of ∼1.0 × 10⁶ km, the Earth will remain unresolved by ∼25 km baselines at 1 MHz.
 It is also important to keep in mind that in spite of practically the entire VLF band being allocated to specific users, the RFI spectral occupancy is much less than 100% because of the frequency separation between transmissions on adjacent allocated frequency channels. The RFI-ridden VLF band is interspersed with regions of relatively clean spectrum which can be utilized for radio astronomy. The kilohertz-wide spectral channels provide a good match to the frequency resolution needed to make use of the parts of the band between adjacent transmissions. We propose the following measures to mitigate the harmful effects of RFI.
 1. In order to maximize the geometric dilution of RFI, it would be preferable to choose from among all available orbits the ones which place the constellation farthest from the Earth.
 2. The well-defined directional nature of RFI will be exploited to identify and reject parts of the band with strong RFI. The astronomical observing will be periodically interrupted, at a low duty cycle, for RFI detection. The interferometer will sweep through the RF band of interest and will be configured to add the signal coming from the direction of Earth first constructively (in phase) and then destructively (out of phase). RFI is expected to show up as narrowband emission which appears when the signal from Earth is added in phase and drops when it is added out of phase. The differences in the spectra obtained in the two cases will be examined to identify RFI-contaminated frequency channels, which will then be excluded from further processing. Frequency and time integration can be performed to improve the sensitivity of RFI detection, though at the price of a reduction in the time available for astronomical observations. By averaging in time for a minute and over four adjacent frequency channels, we expect to obtain 3σ detection of RFI at the level of 20% of the galactic background, which is quite satisfactory. The frequency with which this exercise is performed should match the timescale at which the RFI environment is expected to change. The data from the WAVES experiment onboard Wind can be useful in this determination [Kaiser et al., 1996]. As this scheme needs to examine the visibilities to identify RFI, it needs to be implemented on the mother spacecraft. For optimal utilization of the intraconstellation telemetry bandwidth resource, a list of spectral channels identified as RFI contaminated will be transmitted to the constellation members. These identified channels will not be transmitted to the mother spacecraft, and the list will be updated every time RFI detection is performed.
 3. While it is possible to identify relatively strong RFI in short time integrations, it is much harder to identify weaker RFI and to prevent it from contaminating the data. Fortunately, the highly directional nature of a spaceborne array ensures that the residual RFI which sneaks in will map onto a specific, predictable direction in the sky, the direction of the Earth. This direction will trace out the path followed by the Earth in the sky, as seen from the array. Hence, rather than contaminating the entire map, the RFI signal will stay confined to this locus of the Earth through the sky. As the effect of RFI is localized in the image domain, an entirely new class of RFI mitigation techniques, operating in the image domain, can be usefully employed.
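The in-phase/out-of-phase flagging scheme of item 2 might be sketched as follows. This is an illustrative simplification of the scheme described in the text: the robust noise estimate and the 3σ threshold are our assumptions, not part of the proposed design:

```python
import numpy as np

def flag_rfi(in_phase_spec, out_phase_spec, threshold=3.0):
    """Return indices of channels where the Earth-directed (in-phase)
    power spectrum exceeds the out-of-phase spectrum by more than
    `threshold` sigma. RFI adds coherently in the in-phase beam and
    cancels in the out-of-phase beam, so it shows up in the difference.
    Sigma is estimated robustly via the median absolute deviation."""
    diff = in_phase_spec - out_phase_spec
    sigma = np.median(np.abs(diff - np.median(diff))) * 1.4826
    return np.flatnonzero(diff > threshold * sigma)
```

The flagged channel list would then be sent back to the constellation members and excluded from transmission to the mother spacecraft, as described in item 2.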
4.7. Choice of Orbits
 In order to make it feasible to maintain the constellation in the desired configuration for the entire duration of the mission, it is necessary to restrict the choice of orbits to those where the differential gravity over the length scales of the constellation is low. These orbits also allow the visibility data to be averaged for significant durations in time (many tens of seconds), reducing the telemetry bandwidth requirements. This becomes possible because the relative positions of the spacecraft, and hence the baselines, evolve only slowly in time. Distant Earth orbits provide comparatively better RFI environments as well (section 4.6). The above considerations argue strongly in favor of distant Earth orbits. On the other hand, distant Earth orbits pose a tougher telemetry problem. Nonetheless, we consider only distant Earth orbits to be suitable for this mission, especially in view of the fact that the proposed design considerably reduces the telemetry-intensive nature of the mission. Halo orbits about the L1 Lagrange point and distant retrograde and prograde orbits about the Earth-Moon barycenter at distances of ∼10⁶ km from the Earth seem to be suitable candidates. The choice of orbits in the vicinity of the L1 Lagrange point implies that the interferometer will always face the sunlit side of the Earth. This will provide better shielding from the extremely intense terrestrial auroral kilometric radiation [Gallagher and Gurnett, 1979] and will be advantageous for both the solar and astronomical scientific objectives.
5. Formation Flying
 The formation-flying requirements relate directly to the wavelength of operation of the interferometer. Working at the longest possible wavelengths, a VLF interferometer has the least demanding formation-flying requirements. This makes a VLF interferometer an ideal choice for testing formation-flying mission control and management concepts. While infrared space interferometers, like DARWIN [Leger et al., 1996], require the constellation members to be positioned with relative accuracies of the order of a centimeter, it is not necessary to fly a VLF interferometer in a rigid predefined configuration. Departures of individual spacecraft from their intended positions, by small fractions of the characteristic length scale of the configuration, do not degrade the interferometer performance significantly. The mission requires the baselines to be calibrated to an accuracy of ∼0.1λ at the smallest wavelength of observation (75 cm at 40 MHz). Being far from the Earth, the mission will not be able to make use of the existing Global Positioning System (GPS) satellite network; an onboard ranging and direction-finding system will be required. It is not, however, necessary to know the relative positions of the constellation members to this accuracy in real time. Technically, the real-time baseline accuracy need only be sufficient to ensure that the correlation between the signals received at different spacecraft is not reduced significantly; the coherence time of a 1 kHz wide spectral channel (∼10⁻³ s, corresponding to a light travel distance of ∼300 km) permits baseline errors larger than the constellation itself. However, as the measurements needed for baseline calibration will be made in real time, it is likely that the baselines will be known to the required accuracy practically in real time. As will be discussed in section 6, RFI will make the Earth appear as a bright point source to the VLF array, making it a promising calibrator for baseline calibration.
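The tolerances quoted above follow from simple arithmetic, sketched below. The 1 kHz channel width and the 40 MHz upper frequency are from the design; interpreting the real-time tolerance as the decorrelation length c/Δν is one simple way to see the requirement.

```python
C = 299_792_458.0  # speed of light, m/s

# Post facto baseline calibration requirement: ~0.1 wavelength at the
# highest observing frequency, 40 MHz
f_max = 40e6
lam_min = C / f_max            # ≈ 7.5 m
print(0.1 * lam_min)           # ≈ 0.75 m, i.e., the 75 cm quoted

# Real-time tolerance: path-length errors must stay well within the
# decorrelation length set by a 1 kHz spectral channel
delta_nu = 1e3
print(C / delta_nu)            # ≈ 300 km, larger than the constellation
```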
 The intraconstellation ranging and direction-finding data are insensitive to an overall rotation of the array, which will need to be determined by independent means. There is also a need to align the relative orientations of the constellation members (attitude control) to ensure that the dipoles on different spacecraft point in the same directions. A star tracker unit on board every constellation member will serve both these purposes. As the FOV of the dipoles is very large, attitude control of the order of a degree will probably be quite sufficient.
6. Calibration
 The aim of any calibration procedure is to estimate the response function of the measuring instrument. In synthesis imaging this has conventionally been done by observing astronomical sources with known properties (position, strength, structure, polarization, etc.).
 This approach relies on the availability of a field of view dominated by a single, or at most a few, strong astronomical sources with known properties. The very large fields of view, coupled with the intense galactic background, make this a rather unlikely situation. However, because of the strong RFI, the Earth will appear as a very strong radio source and is likely to be the most promising calibrator. The use of RFI from the Earth for calibration also meshes well with the scheme for dealing with RFI (section 4.6). It will also be useful to seek engineering solutions to meet the calibration requirements.
 The complex gain of the receivers will be calibrated by periodically injecting a known complex calibration signal into the signal path just after the dipoles and comparing the output from the receiver with the input signal. Comparisons with observations from ground-based instruments at the higher end of the VLF range, and simultaneous observations of bright transients like solar bursts by other space- and ground-based instruments, can be used for amplitude calibration. Dulk et al. have shown that for low-resolution instruments at low frequencies, the galactic background spectrum offers a reliable means for flux-scale calibration. It might be possible to use this technique to independently estimate the gains of individual constellation members. The complex primary beam of the dipoles will have to be measured on the ground before the launch of the mission. Mounting the dipoles on the spacecraft will considerably modify the beam shapes from those of isolated dipoles; the beam shapes must therefore be characterized after the dipoles have been mounted on the spacecraft. In-flight calibration of beam shapes, if needed, will require some additional onboard functionality. It can be achieved by radiating a signal of known characteristics from a few of the constellation members and receiving it on the others. This information, when combined with a model for the beam shape, can provide suitable calibration.
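The gain estimate from an injected tone reduces to a least squares comparison of output against input. A minimal sketch, with entirely illustrative numbers (the gain value, tone frequency, noise level, and sample count are assumptions, not mission parameters):

```python
import cmath
import random

random.seed(1)

# Unknown complex receiver gain to be recovered
g_true = 1.3 * cmath.exp(0.4j)

# Known injected calibration tone: N samples of a complex sinusoid
N = 4096
cal = [cmath.exp(2j * cmath.pi * 0.05 * n) for n in range(N)]

# Receiver output: gain times input, plus weak complex noise
out = [g_true * c + 0.01 * complex(random.gauss(0, 1), random.gauss(0, 1))
       for c in cal]

# Least squares gain estimate: g = <out * conj(cal)> / <|cal|^2>
num = sum(o * c.conjugate() for o, c in zip(out, cal))
den = sum(abs(c) ** 2 for c in cal)
g_est = num / den

print(abs(g_est), cmath.phase(g_est))  # close to 1.3 and 0.4 rad
```

Averaging over many samples suppresses the noise contribution by roughly the square root of the number of samples, which is why a weak, periodically injected tone suffices.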
 The correlation process must multiply the signals from different spacecraft which correspond to the same wave front from the direction of the phase center. This translates into a requirement for time synchronization between constellation members. In a conventional interferometer, this is achieved by distributing a phase-locked local oscillator signal to all the antennas. For our direct sampling system, which does not require a local oscillator, this can be achieved by triggering the ADCs on board different constellation members on a synchronized clock. The onboard ranging system, used for constellation configuration management and baseline calibration, can be used to distribute this clock. Errors in time synchronization between different elements appear as antenna-based (spacecraft-based) phases in the measured visibilities. A 1 rad tolerance on this phase, at the highest operating frequency of 40 MHz, imposes a time synchronization requirement of 4.0 ns or better. A real-time baseline accuracy of 75 cm should allow a time synchronization accuracy of 2.5 ns, comfortably meeting this requirement. As mentioned in section 4.6, the strong RFI from the Earth can be used as a calibrator source. The Earth will appear like a point source to most of the constellation and might be the most promising phase calibrator for the array. It will be worthwhile to also explore engineering solutions for phase calibration.
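Both numbers in this paragraph follow directly from the stated phase tolerance and the speed of light:

```python
import math

C = 299_792_458.0  # speed of light, m/s

# A 1 rad phase tolerance at the highest operating frequency, 40 MHz
f_max = 40e6
t_sync = 1.0 / (2 * math.pi * f_max)
print(t_sync * 1e9)       # ≈ 4.0 ns

# Time equivalent of a 75 cm real-time baseline accuracy
t_baseline = 0.75 / C
print(t_baseline * 1e9)   # ≈ 2.5 ns
```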
7. Data Analysis Strategy
 The primary focus of this work is to present a VLF interferometer design optimized to provide visibilities with as little loss of information as possible and to maximize the RF bandwidth of observation. The intent is to make the task of imaging more tractable and to provide data compatible with the scientific objectives. The inversion of the visibilities received on the Earth to arrive at the brightness distribution in the sky will be a nontrivial task and forms an independent area of research. The problem of inverting VLF interferometer visibilities is more involved than that faced by existing interferometers, primarily because of the near-4π sr FOV (section 3.4), the need to take the nonstationarity of the sky into account (section 3.8), and the very intense galactic background radiation (section 3.1).
 The earlier proposals for VLF interferometers sought, without much success, to reduce the FOV to be mapped by using more directional elements [Weiler et al., 1988; Basart et al., 1997a, 1997b]. A later attempt suggested using the bandwidth decorrelation effect to limit the region of sky from which correlated radiation is received [Jones et al., 2000]. This is not an entirely satisfactory solution because, as pointed out in section 3.4, it leads different baselines to pick up correlated emission from sky patches of different orientations and sizes.
 These attempts to limit the fields of view to be mapped were driven primarily by the fact that it was then inconceivable that the computing power required to image near-all-sky fields of view would become available in the foreseeable future. Hence, in spite of being mathematically well formulated, the task of simultaneous all-sky imaging was never considered feasible. We have chosen to assume that on the timescale on which a VLF interferometry mission might materialize, its computing requirements will not pose an insurmountable challenge. This assumption is based on the recent enormous increase in the computational power available at affordable costs and the continued adherence to Moore's law. The same expectation has been a key element in inspiring the many ambitious radio astronomy instruments in various stages of design and implementation today, for instance, ATA, ALMA, LOFAR, EVLA, FASR, and SKA.
 There being no established algorithms for all-sky imaging with a VLF interferometer, it is difficult to present a reliable estimate of the computational burden of the task. To justify our assumption, in the absence of anything more concrete, we use the number of independent resolution elements in the FOV as a measure of the computational burden of imaging. At the resolution corresponding to a 50 km baseline, a 4π sr FOV at 20 MHz has ∼179 × 10⁶ independent resolution elements. In the more familiar terms of a square image with three pixels across a resolution element, this corresponds to an image with ∼40,000 × 40,000 pixels. To provide some points of comparison, for the SKA, assuming a 1 deg² FOV at 1.4 GHz and the resolution due to a 3000 km baseline [Jones, 2004], the number of resolution elements per FOV will be ∼83.3 × 10⁹. For LOFAR, currently under construction, at 200 MHz, assuming the resolution due to a 200 km baseline and a FOV 25° across, the number is ∼4.3 × 10⁹. Thus, while the computational burden posed by VLF all-sky imaging is considerable, it is much less demanding than that of some other projects being actively pursued. Because of their enormous data volumes and more demanding dynamic range requirements, the task of imaging for these instruments will be computationally much more challenging than for a VLF interferometer.
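These counts can be approximately reproduced by taking each resolution element to be a circular beam of angular diameter λ/B. The beam-area convention below is our assumption; it recovers the quoted figures to within ∼10%, the residual presumably reflecting rounding and slightly different conventions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def n_resolution_elements(freq_hz, baseline_m, fov_sr):
    """Count resolution elements, taking each element as a circular
    beam of angular diameter lambda/B (solid angle pi*theta^2/4)."""
    theta = (C / freq_hz) / baseline_m      # beam diameter, radians
    return fov_sr / (math.pi * theta ** 2 / 4)

deg2 = (math.pi / 180) ** 2  # steradians in one square degree

# VLF interferometer: 20 MHz, 50 km baseline, all-sky (4 pi sr)
print(n_resolution_elements(20e6, 50e3, 4 * math.pi))    # ~1.8e8

# LOFAR: 200 MHz, 200 km baseline, FOV 25 deg across (taken as square)
print(n_resolution_elements(200e6, 200e3, 625 * deg2))   # ~4.3e9

# SKA: 1.4 GHz, 3000 km baseline, 1 deg^2 FOV
print(n_resolution_elements(1.4e9, 3e6, deg2))           # ~8e10
```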
 We believe that the inevitably very large FOVs at very low frequencies, which provide simultaneous access to practically the entire sky, should now be considered a significant advantage over other wave bands, which are restricted to much narrower FOVs. To exploit this advantage, the entire wide FOV will need to be mapped. The measured visibilities will need to be gridded over the three-dimensional u-v-w space and inverted to a three-dimensional image volume using the full three-dimensional inversion formalism [Perley, 1999].
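The three-dimensional inversion formalism referred to here relates the measured visibilities to the sky brightness I(l, m) through the standard relation, written in the usual direction-cosine notation [Perley, 1999]:

```latex
V(u,v,w) = \iint \frac{I(l,m)}{\sqrt{1-l^{2}-m^{2}}}
  \, \exp\!\left\{-2\pi i\left[\,u l + v m + w\!\left(\sqrt{1-l^{2}-m^{2}}-1\right)\right]\right\}
  \, dl \, dm
```

For fields of view approaching 4π sr, the w(√(1 − l² − m²) − 1) term cannot be neglected, which is what forces the three-dimensional treatment rather than the familiar two-dimensional Fourier inversion.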
 A promising approach for wide-field imaging is based on the use of a global sky model (GSM), a database of detailed information about all the known sources in the sky down to a given flux level. The imaging process involves subtracting the contribution of all sources described in the GSM from the calibrated visibilities; the residual visibilities and residual images then contain the information about the remaining sources. As the data from the mission build up, the GSM will be gradually refined, and the quality of the images from the VLF interferometer will improve. Similar schemes are being proposed for LOFAR. An overlap in frequency range with ground-based arrays will allow the VLF interferometer to benefit from the GSM compiled by the more sensitive and higher-resolution ground-based arrays. This will be a considerable advantage and will provide a firm anchor point from which to bootstrap to lower frequencies: the first maps will be made at the higher frequencies, and one will proceed from there to lower frequencies as the GSM is extended downward.
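For point sources, the GSM subtraction step amounts to evaluating each source's predicted visibility and removing it from every measured sample. A minimal sketch, in which the GSM entries, sample values, and data structures are all hypothetical:

```python
import cmath

def model_visibility(sources, u, v, w):
    """Predicted visibility of a set of point sources, each given as
    (flux, l, m) with (l, m) direction cosines; (u, v, w) in wavelengths."""
    vis = 0j
    for flux, l, m in sources:
        n = (1.0 - l * l - m * m) ** 0.5
        phase = -2j * cmath.pi * (u * l + v * m + w * (n - 1.0))
        vis += flux * cmath.exp(phase)
    return vis

# Hypothetical GSM: two known strong sources
gsm = [(120.0, 0.10, -0.05), (45.0, -0.30, 0.20)]

# A few (u, v, w) samples, in wavelengths
uvw_samples = [(1500.0, -800.0, 40.0), (300.0, 950.0, -12.0)]

# Simulated "measured" visibilities: the GSM sources plus a faint
# constant residual standing in for the remaining, unmodeled sky
measured = [model_visibility(gsm, *s) + 0.5 for s in uvw_samples]

# Residual visibilities after subtracting the GSM contribution
residual = [vm - model_visibility(gsm, *s)
            for vm, s in zip(measured, uvw_samples)]
print(residual)   # ≈ 0.5 for each sample: only the faint sky remains
```

Because subtraction is linear in the visibilities, it can be applied before gridding, which is what makes the scheme attractive for the very wide FOVs discussed above.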
 As we draw closer to the ambitious next-generation instruments, the realization of the inadequacies of existing methods is growing [Cornwell, 2004], and efforts are being directed to examine new approaches to the problem of wide-field imaging [Lonsdale et al., 2005; Bhatnagar and Cornwell, 2004] and to come up with more efficient implementations of existing ideas (Cornwell et al., submitted manuscript, 2004). This is a very welcome development for the imaging requirement of a VLF interferometer.
 The GSM and the data analysis techniques will need the ability to deal with the nonstationarity of the sky (section 3.8). While known strong sources, stationary or not, can easily be subtracted so long as strict linearity has been maintained, the problem for the VLF interferometer will be complicated by the fact that many of the nonstationary sources also have time-variable VLF emission. The Sun will be the strongest nonstationary, time-variable source, and the magnitude of the problem could be diminished by simply resolving out the solar emission. This is, however, incompatible with the solar science objectives and with the small baselines required to capture the intense galactic background.
8. Conclusions
 Very low frequency radio astronomy is now coming of age. Advances in technology have finally brought us to the brink of opening this last unexplored window in the electromagnetic spectrum, both from the ground and from space. Versatile and powerful low-frequency interferometers are expected to commence operation later this decade, pushing low-frequency radio astronomy to the furthest limits achievable from the ground. A space-based array will benefit considerably from the knowledge of the low-frequency sky gained from these ground-based arrays and forms the next logical step in pushing the exploration of the VLF window to its absolute limits. Exploiting the advances in space-qualified technology and the vast increase in computing capabilities, one can now design spaceborne VLF interferometers with capabilities well beyond the earlier proposals. Currently available technology comes very close to meeting the requirements of this new design in most respects, and we believe that the returns from this approach will amply justify the short wait for the technology to deliver the remaining needs of a VLF interferometer. This design offers many advantages over the conventional approach. It permits the RF bandwidth covered to be increased by more than an order of magnitude. The simultaneous digitization of the entire RF bandwidth of interest offers the flexibility of distributing the part of it to be processed further in any manner, from a few kilohertz to 40 MHz, a considerable advantage for multifrequency synthesis and for simultaneously covering a large spectral window. A flexible and reconfigurable correlator design can be used to set observation parameters to suit different science needs and to evolve the observing strategy as we learn during mission operations. The data the design provides are compatible with the scientific objectives of the mission.
Another advantage of this design is that the bulky high-gain antennas needed for telemetry to the Earth are required only on the mother spacecraft and not on all constellation members.
 A disadvantage of the design is that, as it requires a mother spacecraft, the redundancy in the constellation is considerably reduced. To counter this, it will be essential to equip a few spacecraft to take up the role of the mother spacecraft in case of need. This does not seem to pose an insurmountable problem. The design does require a more complex payload than the conventional approach, but the level of complexity is not large in an absolute sense. The intraconstellation telemetry is presently envisaged to be the most constraining bottleneck. This is an area which has received little attention from the space industry until now, simply because none of the existing space missions has needed this functionality. Formation-flying missions are now being vigorously pursued by the space industry, and the requirement of intraconstellation telemetry will be shared by many of them. As this requirement becomes better recognized, we expect suitable technological solutions to emerge.
 The steady increase in the performance of space-qualified hardware, formation flying, and computing capabilities places the demands of a space-based interferometer within reach in the near future. It is therefore timely to lay down a design for a space-based low-frequency interferometer, based on realistic expectations for technology available in the near future, and to assess its scientific desirability. The VLF interferometer concept is now mature enough to merit a detailed engineering study.
Acknowledgments
 The authors acknowledge the numerous wide-ranging, fruitful, and illuminating discussions with Alain Kerdraon, Claude Mercier, and Jean-Louis Bougeret. D.O. acknowledges discussions with Sanjay Bhatnagar, Pramesh Rao, and Brian Corey about the technique of synthesis imaging and thanks Tim Bastian and Stephen White for pointing out the magnitude of scattering and angular broadening close to the Sun, and Will Aldrich and Brian Fanous for assistance in assessing the suitability of hardware for the computational needs of the space-based low-frequency radio interferometer. The authors thank Alan Rogers for a critical reading of the manuscript and for suggestions which have improved this work. D.O. was supported by a fellowship from Le Studium for this work. This research has made use of NASA's Astrophysics Data System Abstract Service. This work was done using the GNU/Linux operating system, and it is a pleasure to thank the numerous contributors to this software.