Convective-scale numerical weather prediction (NWP), based on models with a horizontal resolution of order 1 km, is motivated to a large extent by the desire to predict precipitation and winds associated with cumulus convection. As conventional observations are sparse at the convective scale, radar is an important source of information. The first operational NWP systems with kilometre-scale resolution use simple methods to assimilate radar data, such as latent heat nudging (LHN) (Jones and Macpherson, 1997; Macpherson, 2001; Leuenberger and Rossa, 2007; Stephan et al., 2008), but research is ongoing using more sophisticated techniques.
In a perfect model context, simulated observations of convective storms have been assimilated using four-dimensional variational assimilation (4DVAR) (Sun and Crook, 1997, 1998) and Ensemble Kalman Filter (EnKF) methods (Caya et al., 2005). Real observations of an isolated storm have been assimilated using EnKF by Dowell et al. (2004, 2010) and Aksoy et al. (2009, 2010). These studies represent significant progress towards operational systems, but also indicate the difficulty of inserting observed storm cells into the models and suppressing spurious simulated cells.
In general one would expect problems, since the dynamics of convective clouds at small scales strongly violate key assumptions that the methods depend on. In 4DVAR (Talagrand and Courtier, 1983; Bouttier and Rabier, 1998; Bouttier and Kelly, 2001) and Kalman filtering (Kalman, 1960; Kalman and Bucy, 1961), it is assumed that the distribution of background error is unimodal and adequately described by second-order moments through an error covariance matrix, as for a Gaussian distribution. The size of this matrix can be reduced to a manageable number of degrees of freedom by balance assumptions and observations of correlations, or through representation with a small ensemble in EnKF (Evensen, 1994; Houtekamer and Mitchell, 1998, 2001). Furthermore it is assumed that the temporal evolution of the error distribution can be represented by tangent linear dynamics or the evolution of a small ensemble of forecasts. Alternatively, more general methods such as the particle filter (Van Leeuwen, 2009) do not require these assumptions, although they gain generality at the cost of computational efficiency, potentially requiring prohibitively large ensemble sizes (Bengtsson et al., 2008; Bickel et al., 2008; Snyder et al., 2008). These issues are reviewed by Bocquet et al. (2010).
In exploring new data assimilation methods, it is often convenient to complement tests using full atmospheric models with test problems using idealized models. Popular choices include the models of Lorenz (1963, 1995, 2005), which include coupling of fast and slow variables in a low-dimensional dynamical system, or the quasigeostrophic equations (Ehrendorfer and Errico, 2008). However, both of these systems were designed to represent key processes of synoptic-scale dynamics, rather than to capture the particular characteristics of the convective scales that make data assimilation difficult. To make progress, it is necessary to consider the nature of the non-Gaussianity and nonlinearity found in the convecting atmosphere.
The essence of this problem can be seen by considering assimilation of radar reflectivity for convective storms. Radar data have a high spatial resolution comparable to the model resolution, but the field of precipitation particles they observe is highly intermittent. Large areas contain no precipitation at all, and strong gradients over distances comparable to a couple of grid points are common. This results in a highly non-Gaussian forecast error distribution, with long tails associated with displacement errors, where a position error of a few grid points produces an order-one error in reflectivity. A consequence of this spatial intermittency of precipitation fields is a lack of the spatial correlations that would reduce the effective number of degrees of freedom. This can be contrasted with the situation on synoptic scales, where dynamical balance introduces correlations in space and between model variables. The problem is a version of the ‘curse of dimensionality’: the number of possible states of the system increases exponentially as the dimensionality of the system grows (Bellman, 1957, 1961).
A second issue is that the typical temporal resolution of radar observations of 5–15 min is coarse in comparison to the model time step, which is determined by the numerical requirement that the model state does not change too much over the interval, and is typically less than a minute. Furthermore, clouds do not appear on radar until large precipitation particles have had time to form, by which time the dynamical circulation of the cloud is well developed. The result is that there is significant error growth between observation times, and indeed well-developed cumulus clouds can appear from one observation time to the next. This lack of temporal correlation between observation times results in an essentially stochastic evolution of the field between observation times, as previously unobserved features suddenly appear as precipitating clouds.
Two potential strategies have been discussed to cope with the lack of spatial and temporal correlations. The first is localization, where the analysis at a given location is only influenced by observations that are close by in space and time (Ott et al., 2004). Patil et al. (2001) showed that the atmosphere often has a local low dimensionality and therefore localization reduces a high-dimensional problem to a set of problems of lower dimension. The second strategy is observation averaging. By averaging the observations over a region in space to create a so-called super-observation (Alpert and Kumar, 2007; Zhang et al., 2009), the intermittency is reduced. Upscaling the observations not only reduces the effective dimensionality of the system by introducing spatial correlations, it also produces more smoothly varying fields, leading to better (more Gaussian) error statistics. The cost of this improvement is that the observations lose detail and may no longer resolve individual convective cells, so that even an analysis that ‘perfectly’ matches the averaged observations is not a perfect analysis when considered at full resolution.
The purpose of this paper is to introduce a minimal model that represents these key features of spatial intermittency and stochastic time evolution. This will provide a simple test bed to examine the performance of various data assimilation methods. The model can be regarded as a minimal version of the stochastic cumulus parametrization scheme of Plant and Craig (2008), which is based on a statistical mechanics theory of convective fluctuations (Craig and Cohen, 2006). The convecting atmosphere is represented by a stochastic birth–death process in space, where cumulus clouds appear at random locations with a certain triggering frequency, and existing clouds disappear with a certain frequency. The result is a field of randomly located clouds (a spatial Poisson process) that have a random lifetime, but with the average density of clouds in space and the average cloud lifetime determined by the birth and death rates.
Two data assimilation methods will be applied to this simple model, in basic and localized forms, and with averaged observations. These are the Ensemble Transform Kalman Filter (ETKF) of Bishop and Toth (1999) and its local version as described by Hunt et al. (2007), and the Sequential Importance Resampling (SIR) particle filter (Van Leeuwen, 2009) and its local version. These two methods were chosen because they make very different approximations while targeting the same posterior distribution and are likely to show different behaviours. The behaviour of the SIR filter should be easy to anticipate since it is expected to respond directly to the effective dimensionality of the system, while the ETKF is being applied well outside of its regime of validity and may not work at all.
The aim of this analysis is not to determine what data assimilation method is best, but rather to shed some light on what new problems are likely to appear when these methods are applied on the convective scale. The simple birth–death process used in this study omits many processes that are important in nature and that may enable the data assimilation algorithms to function more effectively. There is no detailed description of the processes responsible for triggering convective clouds, nor any representation of the coupling of convection to the larger-scale flow. The model could be extended to treat such processes, and would have to be, before any conclusions could be drawn about one method being better than another for real applications. Instead we hope to identify typical patterns of error, using a model that is simple enough that their origins can be traced and strategies to correct them can be sought.
The organization of the paper is as follows. First, the simple model is introduced, along with the implementation details of the two data assimilation algorithms. The ability of the basic schemes to converge to the correct state for stationary and time-varying cloud fields is then examined in detail for a representative ensemble size. The dependence on ensemble size is then considered, followed by the impact of localization and averaging.
2.1. The stochastic convection model
A simple stochastic model is used to produce a changing number of clouds at a set of n grid points. At each grid point we define an integer number of clouds present. The model only allows for integer states, and the convective dynamics is specified as a birth–death process defined by two parameters, λ and μ: λ is the probability of a cloud being initiated at a grid point, and μ is the probability that each existing cloud disappears. These probabilities do not change during the simulation, and are determined by specifying a mean cloud half-life (hl) and average density of clouds per grid point (ρ) using the relations λ = ρ(1 − 0.5^(1/hl)) and μ = 1 − 0.5^(1/hl). In this initial study we assume that the grid points are arranged on a one-dimensional line, but since cloud positions are uncorrelated we could equally have arranged the grid points in a two-dimensional array.
The model is integrated as follows:
(a) Initialize each grid point i by assigning a number of clouds m_i by random draw from a Poisson distribution with mean ρ:

P(m_i = m) = ρ^m e^(−ρ) / m!

(b) Time step at each grid point by

(i) death: remove each of the m_i clouds with probability μ;

(ii) birth: add a cloud with probability λ.
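The initialization and time-stepping rules above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def make_model(n=100, rho=0.1, hl=30, seed=0):
    """Return (initial state, one-step function) for the birth-death model."""
    rng = np.random.default_rng(seed)
    mu = 1.0 - 0.5 ** (1.0 / hl)      # death probability per cloud per step
    lam = rho * mu                     # birth probability per grid point per step
    state = rng.poisson(rho, size=n)   # (a) initialize from Poisson with mean rho
    def step(m):
        # (b)(i) death: each of the m[i] clouds survives with probability 1 - mu
        survivors = rng.binomial(m, 1.0 - mu)
        # (b)(ii) birth: add a cloud at each grid point with probability lam
        births = rng.random(n) < lam
        return survivors + births.astype(int)
    return state, step
```

Note that the equilibrium cloud density is λ/μ = ρ, so the Poisson initialization is already statistically stationary.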
One realization of this model is integrated as a truth simulation, and an ensemble of k simulations is used for data assimilation. In the data assimilation ensemble members, assimilation increments are applied at each time step after the birth and death steps. The computation of the increments for each of the two assimilation algorithms will be described below. This is a perfect model scenario in the statistical sense that all ensemble members are governed by the same rules as the truth run, but since the random numbers are different in each member, model unpredictability plays a key role. The random numbers enter only through the birth–death process; all the other parameters are identical in all ensemble members.
For the experiments in this paper, the number of grid points is fixed at n = 100, and the birth and death probabilities are chosen to give a mean cloud density of ρ = 0.1 clouds per grid point, thus providing a realistic degree of intermittency. A density of 0.1 means that at approximately 90% of the grid points there is no cloud and at the rest of the grid points there is one cloud or very rarely two or more clouds. A random model state is displayed in Figure 1, where the thin vertical lines indicate locations of clouds. In this case there are only seven clouds in the whole domain and no grid point contains more than one cloud. The birth–death parameters are also chosen to give an average cloud lifetime of hl = 30 time steps, or in some experiments hl = 3000 time steps, as discussed below.
Each time step of the model corresponds to an observation time. An observation is the complete state of the truth run at that instant, i.e. the number of clouds at each grid point, with no added error. Note that this includes points where the number of clouds is zero, so that observations of ‘no cloud’ are also assimilated. Although the observations are not explicitly perturbed, their error characteristics must be specified. Consistent with the dynamics of the stochastic model, it is assumed that the observation errors are independent of location and uncorrelated, so that the observation error covariance matrix is proportional to the identity matrix. The constant of proportionality, corresponding to the (constant) error variance, does not affect the ETKF results, which depend only on the relative weighting of ensemble members. The situation is slightly more complicated for the SIR filter, where the error variance does affect the length of time that a given set of observations has influence. This is discussed in more detail in the following section.
The observation strategy is motivated by the characteristics of network radar observations that provide a spatially complete view of precipitation, but relatively little information about the dynamical variables associated with its evolution. Indeed, for the stochastic dynamics used here, the observations contain no information about which clouds will die out in future or where new clouds will later be initiated. If radar observations were available every 5 min, the mean cloud lifetime of hl = 30 steps would correspond to 2.5 h, making it possible, in principle, for the data assimilation to lock on to an observed cloud before it randomly disappears. For comparison, experiments were also done with a mean cloud lifetime of hl = 3000 observation times, which will be referred to as a stationary cloud field since the mean lifetime is much longer than the duration of the experiments. For the runs with observation averaging, the observations are computed as the total number of clouds in non-overlapping subregions of size 10 grid points. Synthetic observations are computed from the ensemble members in the same way.
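The super-observations described above reduce to block sums over the state vector (a minimal sketch, assuming the state is a NumPy array; the function name is ours):

```python
import numpy as np

def super_observations(state, block=10):
    """Total cloud count in each non-overlapping block of grid points."""
    n = state.size
    assert n % block == 0, "domain must divide evenly into blocks"
    return state.reshape(n // block, block).sum(axis=1)
```

The same operator would be applied both to the truth run and to each ensemble member, so that averaged synthetic observations are directly comparable to the averaged real ones.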
2.2. Sequential Importance Resampling Filter
The first data assimilation method to be used is a Sequential Importance Resampling (SIR) particle filter, as representative of methods that are, in principle, appropriate for very nonlinear, non-Gaussian problems.
The implementation of the SIR filter follows Van Leeuwen (2009), and in particular the five steps listed in section 3a of that paper, with probabilistic resampling. In the analysis step at time t, each ensemble member k is assigned a weight, w(t,k), according to

w(t,k) = w(t−1,k) exp(−e_rms(k)/σ),
where e_rms(k) is the root mean square difference between the ensemble member and the truth run, and σ = 0.05 is the square root of the observation error variance. Since the weight of a given ensemble member is obtained by updating its previous value, it continues to be influenced by previous observations. Rearranging the above equation shows that the impact of the observations at a given time decays exponentially at a rate proportional to the ratio e_rms/σ. We can thus interpret σ as providing a memory time-scale for the weights. Finally, the weights are normalized so that the sum over all ensemble members is equal to one.
The new analysis ensemble is then formed by resampling. New ensemble members are chosen by randomly drawing from the old ensemble, with each member given a probability according to its weight. If one member has a sufficiently high weight, it is possible that all other members will be replaced by copies of it. To maintain diversity in the ensemble, all members are then perturbed with an additive noise of the form aξ, where a is an amplitude factor set to 0.1 for the global particle filter and 0.25 for the local version, and ξ is a random number drawn from a uniform distribution between −0.5 and 0.5. The local version of the filter is obtained by dividing the domain into equally sized subregions, and performing the analysis and resampling steps described above on each subregion individually.
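A single analysis cycle of this scheme might look like the following sketch. The exponential weight form with the e_rms/σ ratio, and the carrying of weights across the resampling, are our reading of the description above; the names are ours, and this is an illustration rather than the authors' code:

```python
import numpy as np

def sir_analysis(ensemble, obs, weights, sigma=0.05, a=0.1, rng=None):
    """One SIR cycle: weight update, probabilistic resampling, additive jitter.

    ensemble: (k, n) cloud-count fields; obs: (n,) observed truth state;
    weights: (k,) weights carried over from the previous cycle.
    """
    if rng is None:
        rng = np.random.default_rng()
    k, n = ensemble.shape
    # Weight update: exponential decay in the RMS misfit, scaled by sigma
    e_rms = np.sqrt(((ensemble - obs) ** 2).mean(axis=1))
    w = weights * np.exp(-e_rms / sigma)
    w /= w.sum()
    # Probabilistic resampling: draw k members in proportion to their weights
    idx = rng.choice(k, size=k, p=w)
    new_ens = ensemble[idx].astype(float)
    # Additive uniform noise of amplitude a maintains ensemble diversity
    new_ens += a * (rng.random((k, n)) - 0.5)
    return new_ens, w[idx] / w[idx].sum()
```

With σ = 0.05, a member whose RMS misfit exceeds a few tenths receives a negligible weight within one or two cycles, which is what drives the rapid elimination of members with no correct clouds.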
2.3. Ensemble Transform Kalman Filter
The second data assimilation method is an Ensemble Transform Kalman Filter (ETKF), as an example of methods that depend on linear and Gaussian assumptions. The ETKF and its local version (LETKF) are implemented as described by Hunt et al. (2007), but with one additional step required by the stochastic dynamics. The LETKF decomposes the initial ensemble into a mean and deviations. To construct a new analysis ensemble, an updated ensemble mean is produced, and the updated ensemble members are constructed by adding linear combinations of the deviations. The cloud numbers obtained in this way will contain non-integer values, which must be converted to integers to produce a valid model state. This is done by treating the non-integer part of the cloud number as a probability for a cloud being present, and randomly assigning cloud/no-cloud information according to this probability. No covariance inflation factor is used, except for the simulations with observation averaging, where it was found useful to use a deflation factor of 0.7 to avoid an excess of spread due to the probabilistic conversion to integer cloud numbers.
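The probabilistic conversion to integer cloud numbers can be sketched as follows (a minimal illustration; the truncation of negative analysis values to zero is our assumption, since negative cloud counts are not valid model states):

```python
import numpy as np

def to_integer_clouds(x, rng=None):
    """Probabilistically round an ETKF analysis field to integer cloud counts.

    The fractional part of each value is treated as the probability of one
    additional cloud, so the expected count equals the analysis value.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.clip(x, 0.0, None)          # negative values truncated (assumption)
    base = np.floor(x)
    frac = x - base
    return (base + (rng.random(x.shape) < frac)).astype(int)
```

Because the rounding is unbiased, it preserves the analysis mean but injects extra variance, which is the excess spread that the deflation factor of 0.7 is used to control in the averaging experiments.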
In the local version of the filter, the size of the local region used is one grid point, as in Hunt et al. (2007), but only one observation is used for each region. This is contrary to the recommendation of Hunt et al. (2007), who suggest using observations from a region centred on the grid point being updated in order to ensure that the increments at neighbouring grid points vary smoothly. However, concerns about smoothness do not arise for the stochastic birth–death process used here, since it produces fields that are uncorrelated between grid points.
Since we seek to investigate generic behaviours of the two data assimilation methods, rather than make judgements that one is better than the other, no attempt has been made to tune parameters or otherwise optimize the two schemes.
3.1. Convergence for stationary and time-varying cloud fields
We first consider the ability of the two assimilation schemes, in their basic forms, to converge to the observed state. A representative ensemble size of 50 members is used. The RMS error of the individual ensemble members is computed at every time step and the average RMS error of all ensemble members is plotted (not the RMS error of the ensemble mean, except where noted). The spread is defined as the root-mean-square difference of the ensemble members from the ensemble mean. To reduce the noise level in the figures, the errors are averaged over 100 repetitions of the experiment with different realizations of the stochastic process (400 repetitions for the SIR experiment with hl = 30). The average error is then scaled so that a value of one corresponds to the RMS difference between two randomly chosen realizations of the model state.
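The diagnostics described above might be computed as follows (names are ours; the normalization constant, the RMS difference between two random model states, would in practice be estimated by Monte Carlo over many random pairs):

```python
import numpy as np

def normalized_error(ensemble, truth, random_pair_rms):
    """Mean RMS error of individual members, scaled so 1 = random-state error."""
    e = np.sqrt(((ensemble - truth) ** 2).mean(axis=1))  # RMS error per member
    return e.mean() / random_pair_rms

def ensemble_spread(ensemble):
    """RMS deviation of the ensemble members from the ensemble mean."""
    mean = ensemble.mean(axis=0)
    return np.sqrt(((ensemble - mean) ** 2).mean())
```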
The thick solid line in Figure 2(a) shows the evolution of the mean error of the SIR filter for the stationary cloud field. The SIR filter converges, although rather slowly, and the decrease in error continues beyond 100 time units, eventually saturating at a value close to zero. The ensemble spread (thick dashed line) shows an initial rapid decrease to a fixed fraction of the error, which is maintained through the rest of the experiment.
One can identify three phases in this process, separated by the vertical lines in Figure 2(a). In the first stage, resampling removes members with no correct clouds, replacing them with perturbed copies of members with a correct cloud (with the parameters used here it is rare to obtain a member with more than one correct cloud in the initial ensemble). By the end of the first phase, the ensemble consists of descendants of only a few members of the initial ensemble. An example realization at the end of the first phase is shown in Figure 1(a), where a set of three clouds (two correct) is seen to be present in about 60% of the members, showing their common parentage. The average error and spread decrease rapidly during this phase as the members with no correct cloud are eliminated.
The second stage in Figure 2(a) is characterized by a slower rate of decrease in error, but a continuing rapid decrease in spread. Since the correct clouds are part of the subset common to most members, differences in error between the members are determined by the number of incorrect clouds. Resampling removes the members with the largest error, which are those with the largest number of clouds. The number of grid points without a cloud in any member increases, corresponding to a rapid drop in spread. By the end of this phase, the filter has essentially collapsed, with spread only being maintained by the stochastic perturbations introduced at each resampling stage. The example state shown in Figure 1(b) shows that nearly all members have a common subset of eight clouds (three in correct locations), and at many locations there are no clouds in any ensemble member. The mean error decreases slowly during this phase, since occasionally a perturbation during the resampling will change the subset of clouds common to all members, producing a member that has an additional correct cloud, or one that does not have an incorrect cloud that all other members have. Descendants of this better member take over the ensemble within a few resampling steps, but since such events are rare the average error decreases slowly.
In the third phase, the mean error and spread both decrease slowly (Figure 2(a)). The rate of creation of new clouds by resampling perturbations approximately balances the tendency to reduce the number of clouds by selectively removing the members with the most clouds, and further changes in error and spread are associated only with occasional changes to the common subset of clouds. Each time a new correct cloud is produced by the resampling perturbations it rapidly spreads to the rest of the ensemble, but since the number of possible locations is large many time steps are required until all the correct cloud locations are found. In the example in Figure 1(c), most members have the correct solution at most locations, with one incorrect cloud and occasional additional clouds at random locations.
The rapid collapse of the SIR filter to include a common subset of clouds in all members is as expected for small ensemble sizes. Subsequently, the filter behaves as a Markov chain Monte Carlo simulation, randomly exploring the state space, but retaining correct information from previous time steps. The effectiveness of this behaviour depends crucially on the strategy for perturbing duplicate members introduced during resampling. In this simple model, the resampling perturbations have been chosen consistently with the stochastic model dynamics, and the filter continues to converge, albeit slowly.
While the SIR filter eventually produces a good analysis for a stationary cloud field, Figure 2(a) shows that the time to converge is long compared to the physically motivated mean cloud lifetime of up to 30 time steps. This suggests that, for this ensemble size, the SIR filter will not be able to track changes in a time-varying cloud field. As seen in the thin line in Figure 2(a), the error initially decays at a rate similar to that for stationary clouds, but reaches a minimum value of about 55% of the error of a random field. The initial behaviour of the filter is similar, with the initial random noise disappearing within a few time steps and a common subset of clouds appearing in the majority of ensemble members. However, the evolution of the common subset is too slow to track changes in the evolving observed state and the error remains large.
Figure 2(b) shows the corresponding results for the ETKF. For the stationary cloud field, the mean error eventually approaches a small value (well beyond the 100 time steps shown in the figure), with spread very similar to the error. Also shown is the error in the ensemble mean, which should represent the ‘best estimate’ of the observed state. Two stages are visible in the figure, distinguished mainly by the behaviour of the ensemble spread. While the error decreases continuously, the spread remains roughly constant for approximately 15 time steps, then decreases together with the error.
As shown for an example realization in Figure 3(a), by the end of the first phase clouds appear at the correct locations in the majority of the ensemble members. However, the variability at other locations, associated with incorrect clouds in random members, remains largely unchanged. Since the incorrect clouds are at different locations in the different ensemble members (in contrast to the common subset in the SIR filter), the error of the ensemble mean (also shown in Figure 2(b)) is smaller than the mean of the errors of the individual ensemble members. This reflects the well-known reduction of RMS error for smoother fields, and does not imply that the ensemble mean is a satisfactory estimate of the observed state, since the presence of low rain rates everywhere in the domain is not consistent with the dynamics of the physical model.
The gradual decrease of spread and error in the second phase is associated with the disappearance of incorrect clouds from more and more members. This process is much slower than the introduction of clouds at the correct locations. The ETKF analysis step takes each ensemble member and adds positive and negative perturbations from other members. Since clouds occupy a small fraction of grid points in each member, there is more chance of introducing a new incorrect cloud than of removing an existing one. The slow convergence is thus a direct result of the non-Gaussianity of the errors once the locations of the observed clouds have been captured by the ensemble.
For time-varying cloud fields (thin line in Figure 2(b)), a significant diversity is retained in the ensemble, but the error saturates at a high level since the filter is not able to remove the noise within the half-life of the clouds.
Although both the SIR filter and ETKF have large errors with a time-varying cloud field, the nature of the errors is quite different. The error in the SIR filter comes primarily from wrongly positioned clouds that are present in almost all ensemble members, while the error in the ETKF is associated with a high level of background noise. This suggests that the optimal method for producing probabilistic predictions from the two ensembles will be different.
3.2. Ensemble size
With an ensemble size of 50, neither data assimilation method is able to converge to the time-varying cloud field with any degree of accuracy. To illustrate how the results change with ensemble size, a final error was computed for each experiment. This was estimated as the error after 500 time steps for experiments with a stationary cloud field and 100 time steps for the 30-time-step lifetime cases, since changes after this time were found to be negligible.
In Figure 4(a) it can be seen that any ensemble size greater than about 10 is sufficient for the SIR filter to converge for stationary clouds, while for the time-varying cloud field the error decreases rapidly with increasing ensemble size up to about 20 members, after which the decrease in error is slow. Except for very small ensemble sizes, the spread is about half the magnitude of the error.
The ETKF collapses (vanishing spread) for small ensembles with a stationary cloud field (Figure 4(b)). The results improve significantly with ensemble size, up to about 40 members, with final error reaching about 20% of the random value and ensemble spread about half the final error. For the time-varying cloud field there is almost no improvement with ensemble size, and even for an ensemble size of 100 the final error is only about 5% better than for an ensemble size of 15.
The behaviour of ensemble spread relative to error shown in Figure 4 is also found in the subsequent experiments; therefore spread will not be plotted on the remaining figures.
3.3. Localization

Assimilating data in local regions independently can drastically reduce the number of degrees of freedom in the system, potentially improving the performance of small ensembles. As can be seen in Figure 5(a), localization has a major effect on the performance of the SIR filter, leading to convergence with even smaller ensemble sizes than achieved by the non-local filter for the stationary cloud field.
A major reduction in final error is found for the time-varying cloud field, with errors less than 20% for ensemble sizes larger than 20. This result is not surprising: on average, the observations have 10 clouds scattered over 100 possible locations, for roughly 100^10 = 10^20 possible states. This is vastly larger than the ensemble sizes used here. Localization decomposes the domain into 100 subdomains, each with two likely states (cloud or no cloud), for a total of only about 100 × 2 = 200 possibilities. An ensemble of a few tens of members can sample this space within the cloud half-life of 30 time steps.
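The counting argument can be checked with a line of arithmetic:

```python
# Without localization: each of ~10 clouds can sit at any of 100 locations,
# giving of order 100**10 possible states.
states_global = 100 ** 10
# With localization: 100 independent subdomains, each with two likely states
# (cloud or no cloud), giving only about 100 * 2 effective possibilities.
states_local = 100 * 2
print(states_global, states_local)
```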
On the other hand, the effects for the LETKF are modest, with small improvements in final error, particularly for smaller ensemble sizes. The primary source of error for the ETKF is the continual creation of clouds at incorrect locations by the assimilation increments, which is unaffected by localization.
3.4. Observation averaging

In contrast to localization, observation averaging should have the effect of making the distribution of the errors more Gaussian, potentially leading to better results for the ETKF. Figure 6 shows the final errors for experiments with observation averaging over 10-grid-point regions. For this figure, RMS errors have been computed from averaged observations and model states, and normalized by the difference between two random states in this measure. An error of zero would thus imply that the number of clouds within each 10-grid-point region was correct, but not necessarily their locations. The errors thus reflect the ability of the methods to solve the simpler problem of producing the correct density of convective clouds over subregions, and are not directly comparable to the errors shown in the previous figures.
With averaged observations the SIR filter converges even with small ensemble sizes (Figure 6(a)). As with localization, the problem is decomposed into a set of smaller problems that are solved independently. Unlike localization, averaging changes the statistical character of the smaller problems, but the reduction in dimensionality is similar.
For a stationary cloud field, the ETKF with averaged observations (Figure 6(b)) shows a large reduction in the ensemble size required to obtain small errors, in the regime of small ensembles (10–20 members). The increase in error for larger ensemble sizes is surprising, but preliminary experiments (not shown) suggest that this could be corrected by increasing the covariance deflation factor with increasing ensemble size, to produce errors at least as small as those for smaller ensembles. Even for a time-varying cloud field the performance of the ETKF is significantly improved by averaging, although for the relatively small averaging region used here the errors remain large.
The aim of this work was to test convective-scale data assimilation algorithms using a simple model based on a stochastic birth–death process. The stochastic dynamics models the extreme nonlinearity of the convecting atmosphere, where a storm can appear from one observation time to the next. The spatial Poisson distribution in the simple model represents the intermittency and lack of correlation characteristic of remote sensing observations of convective clouds, where the space and time-scales of changes in the convection are comparable to the resolution of the observations. Since the model does not include any dynamical balances that might or might not be present at the convective scale, it constitutes an extreme test for data assimilation algorithms, and both the SIR filter and ETKF fail to produce good results for realistic parameter values of cloud density and lifetime. The performance could undoubtedly be improved somewhat by optimizing parameters such as the covariance inflation or the resampling probability in the particle filter, but the results would be unlikely to change qualitatively. Although the size of the ensembles tested here is comparable to the number of grid points in the model (n = 100), they are small in comparison to the number of possible states of the system (roughly 2^n ≈ 10^30), so the results are not likely to be improved by larger ensemble size unless it becomes very large indeed.
The SIR filter rapidly collapses to a state in which variance is only maintained by the perturbations associated with resampling after members are eliminated. Interestingly, the filter can eventually converge to the observed state, since any correct cloud locations found by the random perturbations are retained in the ensemble. Since the rate of convergence is controlled by the resampling perturbations, rather than the importance weighting, strategies like an improved proposal distribution (Van Leeuwen, 2009) may have the greatest potential to improve filter performance. In any case, the true statistical properties of the collapsed ensemble need to be taken into account in generating probabilistic forecast products, since the weights of the ensemble members produced by the filter provide little information. Localization and observation averaging both produced dramatic improvements in the performance of the SIR filter, since they both drastically reduce the dimensionality of the space that must be explored by the resampling perturbations. In more realistic models, however, these methods might cause problems by violating dynamical balances that couple different spatial regions.
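The resampling step described above, in which eliminated members are replaced by perturbed copies of surviving ones, can be illustrated schematically. All quantities here (Gaussian observation likelihood, observation-error scale, perturbation probability, ensemble size) are assumed values for the sketch, not necessarily those of the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_ens = 100, 20
truth = (rng.random(n) < 0.1).astype(int)             # synthetic "observed" cloud field
ensemble = (rng.random((n_ens, n)) < 0.1).astype(int)

def weights(ens, obs, sigma=0.5):
    """Normalized importance weights from a Gaussian observation likelihood."""
    logw = -0.5 * ((ens - obs) ** 2).sum(axis=1) / sigma**2
    w = np.exp(logw - logw.max())                     # guard against underflow
    return w / w.sum()

def resample(ens, w, p_perturb=0.02):
    """Multinomial resampling followed by small random perturbations:
    each grid point of each member is flipped with probability p_perturb.
    After collapse, these perturbations are what maintain ensemble spread."""
    idx = rng.choice(len(w), size=len(w), p=w)
    new = ens[idx].copy()
    flips = rng.random(new.shape) < p_perturb
    new[flips] = 1 - new[flips]
    return new

for _ in range(10):
    ensemble = resample(ensemble, weights(ensemble, truth))
```

The sketch makes the mechanism visible: once the weights concentrate on one member, all diversity in the resampled ensemble comes from the random flips, and any flip that matches the observed field tends to be retained at the next resampling, giving the slow convergence described above.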
The ETKF collapses for small ensemble sizes, but otherwise captures the correct cloud locations quickly. However, the ensemble is plagued by large numbers of incorrect clouds that contaminate the ensemble mean. This occurs because negative clouds are not possible, so that the nonlinear dynamics rectify the analysis increments, producing a non-Gaussian distribution of variability in the ensemble. Averaging of observations has the potential to correct this problem, since the Poisson distribution will converge to Gaussian as the averaging region becomes large enough to contain many clouds. Indeed, significant improvement with averaging was found, although for the relatively small averaging region used here the errors remain large. Localization, on the other hand, seems to have no benefit for the ETKF in this environment.
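The convergence of the Poisson distribution to a Gaussian with increasing averaging area can be checked directly: the skewness of a Poisson variable with mean λ is 1/√λ, which tends to the Gaussian value of zero as the averaging region grows to contain many clouds. A minimal numerical check (the values of λ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_skewness(lam, n_samples=100_000):
    """Sample skewness of Poisson cloud counts with mean lam
    (theoretical value: 1 / sqrt(lam))."""
    x = rng.poisson(lam, n_samples)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

# Larger averaging regions contain more clouds (larger lam), and the
# count distribution becomes increasingly Gaussian (skewness -> 0).
for lam in (1, 10, 100):
    print(lam, round(sample_skewness(lam), 2))
```

For the small averaging regions used here, the expected cloud count per region remains of order one, so the averaged observations are still strongly skewed; this is consistent with the limited improvement noted above.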
Caution is necessary in extrapolating the results of this simple model to operational data assimilation in convective-scale numerical weather prediction. The model proposed here represents extreme non-Gaussianity and nonlinearity. It is not a representative model of the range of physical processes that occur in high-resolution data assimilation, but rather a model of the particular additional processes that distinguish the convective-scale problem from the better-understood synoptic-scale problem. The performance of a particular method in a particular weather regime will be influenced, and often dominated, by processes other than those described in the simple stochastic model. However, the types of errors that show up in the idealized model are likely to appear, at least in some situations, in the full problem. The simple model should provide some insight into which modifications or improvements are likely to help ameliorate these errors, and are thus worthy of investigation in the full system.
The idealized model proposed here could be a starting point for a hierarchy of models, where a stochastic process representing convection is introduced into simple dynamical models (e.g. shallow water, or quasi-geostrophic), and the convection is coupled to the large-scale dynamics by a simple closure assumption, where the rate parameter depends on the large-scale fields. Conversely, the simplicity of the current model lends itself to a more formal mathematical analysis. There is an extensive literature on spatial birth–death processes (e.g. Cox and Isham, 2000), including extensions to nearest-neighbour interactions (Møller and Sørenson, 1994).
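As a sketch of such a closure, suppose the local birth rate is simply proportional to a prescribed large-scale field, here a half-sine standing in for a region of large-scale ascent. All parameter values and the form of the closure are hypothetical, chosen only to illustrate the coupling:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
w_large = np.maximum(np.sin(x), 0.0)   # stand-in for large-scale ascent

# Hypothetical closure: local birth probability proportional to the
# large-scale field; death probability uniform across the domain.
p_birth = 0.05 * w_large
p_death = 0.10

clouds = np.zeros(n, dtype=int)
for _ in range(500):
    births = (rng.random(n) < p_birth) & (clouds == 0)
    deaths = (rng.random(n) < p_death) & (clouds == 1)
    clouds = clouds + births.astype(int) - deaths.astype(int)

# Convection is now confined to the region of positive large-scale forcing.
```

In a hierarchy of models the large-scale field would itself evolve dynamically (e.g. under shallow-water or quasi-geostrophic dynamics) and could in turn be forced by the convection, rather than being prescribed as here.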
This research was carried out in the Hans Ertel Centre for Weather Research. This research network of universities, research institutes and the Deutscher Wetterdienst is funded by the BMVBS (Federal Ministry of Transport, Building and Urban Development).