Keywords:

  • sequential data assimilation;
  • variational data assimilation;
  • air quality forecast

Abstract

[1] The objective of this paper is to evaluate the performance of different data assimilation schemes with the aim of designing suitable assimilation algorithms for short-range ozone forecasts in realistic applications. The underlying atmospheric chemistry-transport models are stiff but stable systems with high uncertainties (e.g., over 20% for ozone daily peaks (Hanna et al., 1998; Mallet and Sportisse, 2006b), and much more for other pollutants like aerosols). Therefore the main difficulty of the ozone data assimilation problem is how to account for the strong model uncertainties. In this paper, the model uncertainties are either parameterized with homogeneous error correlations of the model state or estimated by perturbing some sources of the uncertainties, for example, the uncertain model parameters. Four assimilation methods have been considered, namely, optimal interpolation, reduced-rank square root Kalman filter, ensemble Kalman filter, and four-dimensional variational assimilation. These assimilation algorithms are compared under the same experimental settings. It is found that the assimilations significantly improve the 1-day ozone forecasts. The comparison results reveal the limitations and the potentials of each assimilation algorithm. In our four-dimensional variational method, the low dependency of model simulations on initial conditions leads to moderate performances. Among our sequential methods, the optimal interpolation algorithm has the best performance during assimilation periods. Our ensemble Kalman filter algorithm perturbs the uncertain parameters to approximate model uncertainties and produces better forecasts than the optimal interpolation algorithm during prediction periods. This could partially be explained by the low dependency on the uncertainties in initial conditions. A sensitivity analysis of the algorithmic parameters is also conducted to guide the design of suitable assimilation algorithms for ozone forecasts.

1. Introduction

[2] A typical Eulerian atmospheric chemistry-transport model (CTM) computes the concentrations c of a set of chemical species by solving the system of advection-diffusion-reaction equations,

  • $\frac{\partial c_i}{\partial t} + \operatorname{div}(V c_i) = \operatorname{div}\left(\rho D \nabla \frac{c_i}{\rho}\right) + \chi_i(c, t) + E_i$

where $c_i$ is the concentration of the $i$th species, $V$ is the wind velocity, $\rho$ is the air density, $D$ is the turbulent diffusion matrix, $\chi_i(c, t)$ stands for the species production and loss due to the chemical reactions, and $E_i$ stands for the elevated emissions. At ground the boundary condition is given by

  • $\rho D \nabla \frac{c_i}{\rho} \cdot n = S_i - v_i^{\mathrm{dep}} c_i$

where $n$ is the upward unitary vector, $S_i$ stands for the surface emissions and $v_i^{\mathrm{dep}}$ is the dry deposition velocity.

[3] In the numerical model (the CTM), the dimension of the discretized system is usually 10^6–10^7. The model computes ozone hourly concentrations over Europe (for instance) given the initial conditions and the input data (also designated herein as parameters).

[4] Data assimilation can be considered as the determination of the initial conditions or of model uncertain parameters by coupling the heterogeneous available information, for example, model simulations, observations, and statistics for errors. Data assimilation methods are roughly catalogued into variational and sequential ones [Le Dimet and Talagrand, 1986; Evensen, 1994]. The objective of the former can be defined as state estimation by minimizing the quadratic discrepancy between model simulation and a block of observations, usually combined with a priori background knowledge. This can be formalized and solved efficiently with the optimal control theory. The sequential methods make use of observations as soon as they are available. Since this is a filtering process, filter theory (linear or nonlinear) applies.

[5] Both methods have found their applications for CTMs. The pioneering work dates back to Fisher and Lary [1995]. On the variational side, Elbern and Schmidt [2001] use a comprehensive model rather than an academic model in order to assimilate real observations with assessment of ozone forecast. Chai et al. [2007] follow with assimilation of new types of observations, and several practical issues, for example, background error modeling, are investigated in detail. Very little work deals with the assimilation of initial conditions jointly with uncertain parameters [Elbern et al., 2007]. By contrast, Segers [2002] conducts in-depth studies on the applications of efficient filtering methods, in which emissions, photolysis rates and deposition are considered to be uncertain. The model state as well as the uncertain parameters are estimated. Constantinescu et al. [2007b] report the filtering results obtained with perturbations on emissions and on boundary conditions, and with distance constraints on the spatial correlations.

[6] All these efforts are part of the recent diffusion of data assimilation expertise from numerical weather prediction (NWP) to the air quality community. For a review see Carmichael et al. [2008]. The CTMs are stiff but stable systems with high uncertainties [Hanna et al., 1998]; the perturbations on initial conditions tend to be smoothed out rather than amplified. Therefore the conclusions from meteorological experience [Lorenc, 2003; Kalnay et al., 2007] cannot be applied directly.

[7] The objective of this paper is to evaluate different assimilation algorithms for ozone forecasts in the same experimental settings. Hopefully this could serve as a base point for the design of assimilation algorithms suitable for ozone forecasts in realistic applications. Four algorithms, namely optimal interpolation (OI), ensemble Kalman filter (EnKF), reduced-rank square root Kalman filter (RRSQRT) and four-dimensional variational assimilation (4DVar) were implemented.

[8] We note that this comparison study has its limitations in that: (1) Only the model state is adjusted and uncertain model parameters remain unchanged. (2) The treatment of uncertainties is different. OI parameterizes aggregate uncertainties using the homogeneous Balgovind correlation function. In 4DVar the uncertainties are taken into account in a way similar to OI (Balgovind correlation), but only at the initial date of the assimilation period. The underlying model is assumed to be perfect, that is, we consider a strongly constrained 4DVar. By contrast, EnKF and RRSQRT represent model uncertainties with an ensemble generated by Monte Carlo sampling of uncertain parameters. The reasons for the first limitation are that (1) the adjoint model with respect to model parameters is not available, and (2) the correlations between the model state and the parameters are unknown. Clearly this should be a research task in the near future. The second limitation stems from the unsettled formulation of model error. A novelty of our EnKF and RRSQRT implementations is the perturbation method, originally employed in uncertainty studies for air quality models [Hanna et al., 2001].

[9] The paper is organized as follows. Section 2 documents the assimilation algorithms and their implementations. The experiment setup concerning the model and observations is detailed in section 3. We report the comparison results in section 4. Therein sensitivity studies with respect to the assimilation algorithm settings are also conducted. Conclusions and discussions can be found in section 5.

2. Assimilation Algorithms

[10] We rewrite the CTM dynamical differential equation in discrete form from time $t_{k-1}$ to $t_k$,

  • $x^t(t_k) = M_{k-1}\left(x^t(t_{k-1})\right) + \varepsilon^f(t_{k-1})$

where $x^t$ denotes the true state vector of dimension $n$, $M_{k-1}$ corresponds to the (nonlinear) dynamical operator from $k-1$ to $k$, and $\varepsilon^f$ is the model error vector assumed to have a normal distribution with zero mean and covariance matrix $Q$. In this study, the state is chosen to be the vector of concentrations for the concerned species. For simplicity we drop $t_k$ to subindex $k$, e.g., $x_k^t$ for $x^t(t_k)$ and $Q_{k-1}$ for $Q(t_{k-1})$. At each time $t_k$, one observes,

  • $y_k = H_k\left(x_k^t\right) + \varepsilon_k^o$

where $\varepsilon_k^o$ is the observation error vector assumed to have a normal distribution with zero mean and covariance matrix $R$, and $H_k$ is the (possibly nonlinear) operator that maps the state to the observation space at time $t_k$. The vector $y_k$ is of size $p$, and usually $p \ll n$. The error vectors $\varepsilon_{k-1}^f$ and $\varepsilon_k^o$ are supposed to be independent.

[11] Let $x^b$ be the a priori state estimation (background) with error $\varepsilon^b = x^b - x^t$ of zero mean and covariance matrix $B$, and let $x^a$ be the posterior state estimation (analysis) with error $\varepsilon^a = x^a - x^t$ of covariance matrix $A$. The data assimilation problem is to determine the optimal analysis $x^a$ and its statistics $A$ given the background $x^b$, the observation $y$, and the statistical information in the error covariance matrices $R$ and $B$.

2.1. Optimal Interpolation

[12] Optimal interpolation [Daley, 1991] searches for an optimal linear combination between background and innovation by minimizing the state-estimation variance. The innovation $d$ is the difference between the observation vector and the background mapped to observation space, i.e., $d = y - H(x^b)$. Under a linearity assumption on the observation operator close to the background, i.e., $H(x) - H(x^b) = \mathbf{H}(x - x^b)$ where $\mathbf{H}$ is the linearized operator, the estimation formulae are given according to best linear unbiased estimator theory as follows:

  • $x^a = x^b + K d, \qquad K = B\mathbf{H}^T\left(\mathbf{H}B\mathbf{H}^T + R\right)^{-1}$
  • $A = \left(I - K\mathbf{H}\right) B$

[13] In practice, setting the background error covariance remains problematic. In this study B is either diagonal or in Balgovind form. In the latter case, the error covariance between two points is given by

  • $f(d) = v\left(1 + \frac{d}{L}\right) e^{-d/L}$

where $L$ is a characteristic length, $d$ is the distance between the two points, and $v$ is the a priori variance [Balgovind et al., 1983].
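
For illustration, the analysis step above amounts to a few lines of linear algebra. The sketch below is a minimal example with made-up names and dimensions (it is not Polyphemus code): a Balgovind background covariance is built from pairwise distances, and the best linear unbiased estimate is computed for a linear observation operator given as a matrix.

```python
import numpy as np

def balgovind_covariance(dist, L, v):
    """Balgovind covariance v * (1 + d/L) * exp(-d/L) for a matrix of pairwise distances d."""
    return v * (1.0 + dist / L) * np.exp(-dist / L)

def oi_analysis(xb, B, y, H, R):
    """Best linear unbiased estimate: xa = xb + K (y - H xb), A = (I - K H) B."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # gain matrix
    xa = xb + K @ (y - H @ xb)
    A = (np.eye(len(xb)) - K @ H) @ B
    return xa, A

# Toy example: five grid points along a line (degrees), two of them observed.
coords = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
dist = np.abs(coords[:, None] - coords[None, :])
B = balgovind_covariance(dist, L=3.0, v=20.0 ** 2)   # sigma_b = 20 ug/m3, Lh = 3 degrees
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0   # observe points 1 and 3
R = 10.0 ** 2 * np.eye(2)                            # sigma_o = 10 ug/m3
xb = np.full(5, 70.0)                                # background ozone field
y = np.array([90.0, 60.0])                           # observations
xa, A = oi_analysis(xb, B, y, H, R)
```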

2.2. Ensemble Kalman Filter

[14] Ensemble Kalman filter [Evensen, 1994, 2003] differs from optimal interpolation in that the error covariance matrix is time-dependent. The assimilation process follows the cycling of two steps of forecast and analysis. At the forecast step, the model is applied to the $r$-member ensemble $\{x_{k-1}^{a,(i)},\ i = 1, \ldots, r\}$, and produces the forecast $\{x_k^{f,(i)},\ i = 1, \ldots, r\}$. The forecast error covariance matrix $P^f$ can be approximated by the ensemble statistics. Whenever observations are available, the cycling enters the analysis step, and each ensemble member $x_k^{f,(i)}$ is updated to $x_k^{a,(i)}$ according to the OI formulae (5)–(6), with the background error covariance matrix $B$ replaced by the forecast error covariance matrix $P^f$. Although not necessary in the algorithm, the analysis error covariance matrix $P^a$ can then be approximated with the analyzed-ensemble statistics.

[15] We summarize the algorithm as follows.

[16] 1. First we perform initialization. Given the probability density function (PDF) of the initial concentrations, an ensemble of initial conditions is generated. In our experiments, except for the cycling context in section 4.4 where initial ensemble members are ensemble forecasts from the previous cycle, we skip this step: all members in the ensemble start with the same initial condition. The first integration steps are therefore a spin-up period during which the ensemble spread is essentially increasing as a result of the perturbations on uncertain parameters.

[17] 2. Next is the forecast step,

  • $x_k^{f,(i)} = M_{k-1}\left(x_{k-1}^{a,(i)}\right) + \varepsilon_{k-1}^{f,(i)}$
  • $P_k^f \simeq \frac{1}{r-1} \sum_{i=1}^{r} \left(x_k^{f,(i)} - \bar{x}_k^f\right)\left(x_k^{f,(i)} - \bar{x}_k^f\right)^T$

where $\bar{x}_k^f$ is the mean of the forecast ensemble: $\bar{x}_k^f = \frac{1}{r}\sum_{i=1}^{r} x_k^{f,(i)}$.

[18] 3. An analysis formula is applied,

  • $x_k^{a,(i)} = x_k^{f,(i)} + K_k\left(y_k^{(i)} - H_k\left(x_k^{f,(i)}\right)\right)$
  • $P_k^a \simeq \frac{1}{r-1} \sum_{i=1}^{r} \left(x_k^{a,(i)} - \bar{x}_k^a\right)\left(x_k^{a,(i)} - \bar{x}_k^a\right)^T$

where $\bar{x}_k^a$ is the mean of the analysis ensemble $\{x_k^{a,(i)},\ i = 1, \ldots, r\}$, $y_k^{(i)}$ is the observation vector, and the Kalman gain is approximated by

  • $K_k = P_k^f H_k^T \left(H_k P_k^f H_k^T + R_k\right)^{-1}$

[19] The ensemble initialization and the determination of the model error $\varepsilon_{k-1}^{f,(i)}$ are entangled problems. In our implementation, we take identical initial samples, and the model error is approximated by perturbing model input data and model parameters,

  • $x_k^{f,(i)} = M_{k-1}\left(x_{k-1}^{a,(i)},\ w^{(i)} d\right)$

where $d$ is the vector of parameters to be perturbed, and, for the $i$th sample, $w^{(i)}$ is the diagonal matrix whose diagonal elements are perturbation coefficients (see section 2.5). Let $e_k^{f,(i)} = x_k^{f,(i)} - \bar{x}_k^f$ be one (approximate) direction of the forecast error, and let $E_k^f$ be the matrix $\left[e_k^{f,(1)}\ e_k^{f,(2)} \cdots\ e_k^{f,(r)}\right]$. By formula (9), we have

  • $P_k^f \simeq \frac{1}{r-1}\, E_k^f E_k^{f,T}$

In this way, the error covariance matrix is approximated by ensemble statistics.

[20] In the original EnKF algorithm, the observation vector $y_k^{(i)}$ is perturbed for consistent analysis statistics [Burgers et al., 1998]. In this paper, we present only the assimilation results without observation perturbations, since the variances of observation errors are in general much smaller than those of model errors. However, in our implementation, the observation perturbation is an option, and preliminary tests showed that, at least for the reference setting in section 3, there are improvements in forecast performance with this option on.
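
A compact sketch of one forecast/analysis cycle of this EnKF variant is given below. It assumes a user-supplied model `forward(x, params, dt)` and member-wise multiplicative perturbations of the parameter vector in the spirit of formula (13); observation perturbation is omitted, as in the reference setting. All names are illustrative, not Polyphemus interfaces.

```python
import numpy as np

def enkf_cycle(ensemble, params, pert_coeffs, forward, dt, y, H, R):
    """One EnKF cycle: propagate each member with its own perturbed parameters
    (a proxy for model error), then update every member with the OI formula,
    using the ensemble covariance in place of B.
    ensemble: (r, n) analysis states; pert_coeffs: (r, m) multiplicative factors."""
    r, n = ensemble.shape
    # Forecast step (formula (13)): member-dependent perturbed parameters.
    forecast = np.array([forward(ensemble[i], pert_coeffs[i] * params, dt)
                         for i in range(r)])
    xf_mean = forecast.mean(axis=0)
    E = (forecast - xf_mean).T                       # (n, r) anomaly matrix
    Pf = E @ E.T / (r - 1)                           # ensemble forecast covariance
    # Analysis step: Kalman gain and member-wise update (no observation perturbation).
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    analysis = forecast + (y - forecast @ H.T) @ K.T
    return analysis

# Minimal usage: 4-variable state, 2 observed components, 30 members.
rng = np.random.default_rng(0)
forward = lambda x, p, dt: x * p[0]                  # placeholder dynamics
H = np.eye(4)[:2]
R = 100.0 * np.eye(2)                                # (10 ug/m3)^2 observation variance
ens = np.tile(np.array([60.0, 65.0, 70.0, 75.0]), (30, 1))
pert = np.exp(rng.standard_normal((30, 3)) * np.log(2.0) / 2)  # lognormal factors, alpha = 2
ana = enkf_cycle(ens, np.ones(3), pert, forward, 600.0, np.array([80.0, 60.0]), H, R)
```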

2.3. Reduced-Rank Square Root Kalman Filter

[21] Reduced-rank square root Kalman filter [Heemink et al., 2001] uses a low-rank representation $LL^T$ of the error covariance matrix $P$. $L = [l_1, \ldots, l_q]$ is the mode matrix whose columns (modes) are the dominant directions of the forecast error. The evolution of a mode can be approximated by the difference between the forecasts based on the mean (analyzed) state and on its perturbation by this mode, that is,

  • $\mathbf{M}_{k-1}\, l_{k-1}^{a,(i)} \simeq M_{k-1}\left(x_{k-1}^a + l_{k-1}^{a,(i)}\right) - M_{k-1}\left(x_{k-1}^a\right)$

where $x_{k-1}^a$ is given by

  • $x_{k-1}^a = x_{k-1}^f + K_{k-1}\left(y_{k-1} - H_{k-1}\left(x_{k-1}^f\right)\right)$

[22] The forecast $x_k^f$ is calculated from the previous analyzed state by

  • $x_k^f = M_{k-1}\left(x_{k-1}^a\right)$

[23] Assuming that at time $t_{k-1}$ the error covariance $P_{k-1}^a$ has the square root form $L_{k-1}^a L_{k-1}^{a,T}$, the propagation of $P_{k-1}^a$ is tractable. The forecast error covariance matrix at time $t_k$ is calculated by

  • $P_k^f = \mathbf{M}_{k-1} P_{k-1}^a \mathbf{M}_{k-1}^T + Q_{k-1} = \left(\mathbf{M}_{k-1} L_{k-1}^a\right)\left(\mathbf{M}_{k-1} L_{k-1}^a\right)^T + Q_{k-1}^{1/2}\, Q_{k-1}^{T/2}$

where $\mathbf{M}_{k-1}$ is the tangent linear model, that is, the Jacobian matrix of $M_{k-1}$, and $Q_{k-1}^{1/2}$ is the square root of the model error covariance matrix. Considering the square root form $L_k^f L_k^{f,T}$ for $P_k^f$, we have the forecast formula for the mode matrix $L^f$,

  • $L_k^f = \Pi_k^f\, \tilde{L}_k^f, \qquad \tilde{L}_k^f = \left[\mathbf{M}_{k-1} L_{k-1}^a \;\; Q_{k-1}^{1/2}\right]$

where $\Pi_k^f$ projects $\tilde{L}_k^f$ onto the $q$ leading eigenvectors of $\tilde{L}_k^f \tilde{L}_k^{f,T}$ using the singular value decomposition. Recall that the analysis error covariance matrix $P^a$ can be calculated by $(I - KH)P^f(I - KH)^T + KRK^T$ for an arbitrary gain $K$; rewriting it in square root form, we obtain the analysis formula for the mode matrix $L^a$,

  • $L_k^a = \Pi_k^a\, \tilde{L}_k^a, \qquad \tilde{L}_k^a = \left[\left(I - K_k H_k\right) L_k^f \;\; K_k R_k^{1/2}\right]$

where $\Pi_k^a$ projects $\tilde{L}_k^a$ onto the $q$ leading eigenvectors of $\tilde{L}_k^a \tilde{L}_k^{a,T}$.

[24] We do not use the tangent linear model, but employ (15) to simulate $\mathbf{M}_{k-1} L_{k-1}^a$ at the forecast step. The columns of $Q_{k-1}^{1/2}$ are obtained in the same manner as the model error formula (13) in EnKF. The above treatments make the RRSQRT implementation similar to our variant of EnKF. The difference is that RRSQRT employs square root formulae. In addition, the error covariance is approximated by its dominant eigenvectors in RRSQRT, whereas EnKF involves no such truncation.
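
The reduction step that distinguishes RRSQRT from our EnKF variant can be sketched as follows: the augmented mode matrix (propagated modes plus model error columns) is projected onto its q dominant directions through a singular value decomposition. This is a generic sketch under assumed array shapes, not the actual implementation.

```python
import numpy as np

def reduce_modes(L_aug, q):
    """Keep the q dominant directions of the covariance L_aug @ L_aug.T.
    L_aug stacks, column-wise, the propagated modes and the model-error square root columns."""
    U, s, _ = np.linalg.svd(L_aug, full_matrices=False)
    return U[:, :q] * s[:q]          # reduced mode matrix; same covariance up to truncation

# Example: 20 propagated modes plus 10 model error columns, reduced back to 20 modes.
rng = np.random.default_rng(0)
n = 500
L_aug = np.hstack([rng.standard_normal((n, 20)), 0.1 * rng.standard_normal((n, 10))])
L_f = reduce_modes(L_aug, q=20)
```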

2.4. Four-Dimensional Variational Algorithm

[25] Four-dimensional variational algorithm [Le Dimet and Talagrand, 1986] finds the optimal initial condition $x^*$ by minimizing a cost function

  • $J(x) = \frac{1}{2}\left(x - x^b\right)^T B^{-1} \left(x - x^b\right) + \frac{1}{2} \sum_{k=0}^{N} \left(y_k - H_k(x_k)\right)^T R_k^{-1} \left(y_k - H_k(x_k)\right)$

under the constraint $x_k = M_{0 \rightarrow k}(x) = M_{k-1}(M_{k-2}(\ldots M_1(M_0(x))\ldots))$. The assimilation period is from $t_0$ to $t_N$. The gradient of the observation term $J_o$ is calculated by the backward integration of the adjoint model (F. Bouttier and P. Courtier, Data assimilation concepts and methods, 1999, http://www.ecmwf.int/newsevents/training/rcourse_notes/DATA_ASSIMILATION/ASSIM_CONCEPTS/Assim_concepts.html): (1) set $\lambda_N = 0$; (2) for $k = N, \ldots, 1$, calculate $\lambda_{k-1} = \mathbf{M}_{k-1}^T\left(\lambda_k + H_k^T \Delta_k\right)$, where $\Delta_k = R_k^{-1}\left(H_k(x_k) - y_k\right)$; and (3) $\lambda_0 + H_0^T \Delta_0$ gives the gradient of $J_o$ with respect to $x$.
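
To make the recursion concrete, the toy sketch below evaluates the cost function and its gradient for a linear model x_{k+1} = M x_k, for which the tangent linear and adjoint operators are simply M and its transpose. It is a self-contained illustration of the steps above, not the O∂yssée-generated adjoint of Polair3D; all names and dimensions are assumptions.

```python
import numpy as np

def cost_and_gradient(x0, xb, Binv, M, H, Rinv, obs):
    """Strongly constrained 4DVar cost J(x0) and its gradient for a linear model M."""
    N = len(obs) - 1
    # Forward sweep: store the trajectory x_0, ..., x_N.
    traj = [x0]
    for k in range(N):
        traj.append(M @ traj[-1])
    # Cost function: background term plus observation misfits.
    J = 0.5 * (x0 - xb) @ Binv @ (x0 - xb)
    J += 0.5 * sum((H @ traj[k] - obs[k]) @ Rinv @ (H @ traj[k] - obs[k])
                   for k in range(N + 1))
    # Backward (adjoint) sweep: lambda_N = 0, lambda_{k-1} = M^T (lambda_k + H^T Delta_k).
    lam = np.zeros_like(x0)
    for k in range(N, 0, -1):
        delta_k = Rinv @ (H @ traj[k] - obs[k])
        lam = M.T @ (lam + H.T @ delta_k)
    grad = Binv @ (x0 - xb) + lam + H.T @ (Rinv @ (H @ traj[0] - obs[0]))
    return J, grad

# Tiny example: 3-variable state, 2 observed components, 4 observation times.
M = 0.95 * np.eye(3); H = np.eye(3)[:2]
Binv = np.eye(3) / 400.0; Rinv = np.eye(2) / 100.0
xb = np.array([70.0, 70.0, 70.0]); obs = [np.array([80.0, 60.0])] * 4
J, g = cost_and_gradient(xb.copy(), xb, Binv, M, H, Rinv, obs)
```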

[26] Assimilations are performed by model integrations starting from the optimal initial condition $x^*$. Further integrations from time step $N$ based on the analyzed model state provide the predictions. The inverse of $B$ is calculated online or, for high-dimensional model configurations, $B^{-1}$ can also be approximated using SVD truncations and saved on disk for later computations. The adjoint operator $\mathbf{M}_{k-1}^T$ is obtained using the automatic differentiation software O∂yssée [Faure and Papegay, 1998]. The forward model simulations are saved for the backward integrations of the adjoint model. No checkpointing technique is employed in our implementation.

2.5. Uncertainties and Model Error

[27] The corrections of the analysis scheme lie in the subspace spanned by the covariance matrix of the forecast or background errors, i.e., the space induced by the columns of the square root of the matrix B [Kalnay, 2003]. An unrealistic error structure produces spurious corrections, and probably results in an unbalanced physical model state. Therefore the design of the error structure is of great importance.

[28] There are mainly three approaches for error modeling: (1) modeling uncertain sources and then perturbing them in the model [Segers, 2002; Constantinescu et al., 2007a]; (2) using the statistics of model states, for example, NMC method [Chai et al., 2007] and ensemble methods; (3) using parameterizations, for example, Balgovind correlations for background error covariance [Hoelzemann et al., 2001; Elbern et al., 2007]. Each of the three should be flow-dependent, that is, adapting to the “error of the day.” The spatial and temporal heterogeneities of the chemistry-transport problem make the last two approaches difficult issues.

[29] The numerical models are usually assumed unbiased. In our case, we assume that the model uncertainties only result from the misspecification of model parameters. In our EnKF and RRSQRT implementations, the ensemble is generated by model integrations with perturbed parameters. The uncertainties and the distributions are introduced for model parameters that are mainly bidimensional or tridimensional fields under space coordinates. These parameters are modeled as random vectors. In practice, for a field $\tilde{p}$ (a random vector) whose median value is $p$, a perturbation is applied to the whole field so that every component $\tilde{p}_k^i$ has the prescribed distribution. For instance, for a lognormal distribution, one writes

  • $\tilde{p}_k^i = p_k^i\, \alpha^{\gamma/2}$

where $\gamma$ is sampled according to a standard normal distribution and $\alpha$ characterizes the 95% confidence interval $[p/\alpha,\ \alpha p]$ (see Table 1). The quantity $\gamma$ is independent of the time index $k$ and of the space index $i$, so that the perturbations increase the ensemble spread. The same sample of $\gamma$ is used to perturb all values of the field $\tilde{p}$. The quantity $\alpha^{\gamma/2}$ is the perturbation coefficient for the corresponding parameter in the matrix $w^{(i)}$ of formula (13). For normal distributions, perturbations larger than a given threshold (by default two standard deviations) are discarded so that no unrealistic parameters are produced, for instance, negative emissions.
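
A minimal sketch of this perturbation scheme follows; it assumes the convention above (95% confidence interval [p/α, αp] for lognormal fields, normal draws beyond two standard deviations rejected and redrawn) and uses illustrative function names rather than Polyphemus routines.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_lognormal(field, alpha):
    """Scale the whole field by the coefficient alpha**(gamma/2), one standard normal
    draw gamma per field and per ensemble member (95% interval about [p/alpha, alpha*p])."""
    gamma = rng.standard_normal()
    return field * alpha ** (gamma / 2.0)

def perturb_normal(field, rel_std, n_sigma=2.0):
    """Relative additive perturbation; draws beyond n_sigma standard deviations are
    rejected and redrawn so that no unrealistic values (e.g., negative fields) appear."""
    gamma = rng.standard_normal()
    while abs(gamma) > n_sigma:
        gamma = rng.standard_normal()
    return field * (1.0 + rel_std * gamma)

# Example: a NOx surface emission field (alpha = 1.5) and a temperature field (0.5% std).
emissions = np.full((10, 10), 5.0)
emissions_pert = perturb_lognormal(emissions, alpha=1.5)
temperature = np.full((10, 10), 290.0)
temperature_pert = perturb_normal(temperature, rel_std=0.005)
```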

[30] A delicate point is having temporal and spatial correlations between the different values of the field. With the perturbation applied in (22), the correlation between two field values $\ln \tilde{p}_k^i$ and $\ln \tilde{p}_l^j$ is equal to 1. But the uncertainty sources at these two points are not the same; hence the perturbation should be different. A fine modeling of the uncertainty should lead to a $\gamma$ depending on time and position (producing $\gamma_k^i$). Such a fine description of uncertainties is mostly beyond available knowledge.

[31] Examples for continental air quality simulations extracted from Hanna et al. [2001] are shown in Table 1. For many fields (associated with $\alpha = 2$), a confidence interval that includes 95% of the probability density integral is $[m/2,\ 2m]$ if $m$ is the mean of the field. Uncertainty levels must be adjusted to the simulation scale (domain size and temporal discretization). In particular, uncertainties decrease as data is averaged over a larger domain or over a longer period of time.

Table 1. Uncertainties Associated With Several Input Fields of a Chemistry-Transport Model at Continental Scale^a

Field                               Distribution   Uncertainty
Top ozone boundary conditions       lognormal      α = 1.5
Top NOx boundary conditions         lognormal      α = 3
Lateral ozone boundary conditions   lognormal      α = 1.5
Lateral NOx boundary conditions     lognormal      α = 3
Major NOx point emissions           lognormal      α = 1.5
Wind velocity                       lognormal      α = 1.5
Wind direction                      normal         ±40°
Temperature                         normal         ±3 K
Vertical diffusion (night)          lognormal      α = 3
Precipitations                      lognormal      α = 2
Cloud liquid water content          lognormal      α = 2
Biogenic emissions                  lognormal      α = 2
Photolysis constants                lognormal      α = 2

^a The uncertainty of a parameter $\tilde{p}$ is measured with a confidence interval that includes 95% of the probability density integral. For a lognormal distribution, this interval is defined by a factor α so that $\tilde{p}$ is in the interval [p/α, αp] with a probability of 95% (p is the median value of $\tilde{p}$). All estimates were derived from Hanna et al. [2001].

3. Experiment Setup

3.1. Model and Input Data

[32] The ozone forecasts and the assimilation experiments are performed in the framework of the air quality modeling system Polyphemus [Mallet et al., 2007] whose version 1.2 includes all algorithms in use in this paper and is freely available at http://cerea.enpc.fr/polyphemus/.

[33] For this study, the Polyphemus model in use is Polair3D [Boutahar et al., 2004] for which an adjoint version is available (for gas-phase chemistry). The configuration of the model may be roughly described as follows: (1) Raw meteorological data are used, comprising ECMWF (European Centre for Medium-Range Weather Forecasts) fields (resolution of 0.36° × 0.36°, 60 vertical levels, time step of 3 hours, 12-hour forecast cycles starting from analyzed fields). (2) Land use coverage is from GLCF (Global Land Cover Facility) land cover map (14 categories, 1 km Lambert). (3) The chemical mechanism is RACM [Stockwell et al., 1997]. (4) Emissions come from the EMEP (Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe) inventory, converted according to Middleton et al. [1990]. (5) Biogenic emissions are computed as proposed by Simpson et al. [1999]. (6) Deposition velocities use the revised parameterization from Zhang et al. [2003]. (7) Vertical diffusion uses the Troen and Mahrt parameterization [Troen and Mahrt, 1986] (in the unstable boundary layer) and the Louis parameterization [Louis, 1979] (elsewhere). (8) Boundary conditions are typical concentrations from the global chemistry-transport model Mozart 2 [Horowitz et al., 2003]. (9) Numerical schemes are: a first-order operator splitting, the sequence being advection–diffusion–chemistry; a direct space-time third-order advection scheme with a Koren flux-limiter; and a second-order Rosenbrock method for diffusion and chemistry [Verwer et al., 2002].
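
As a schematic illustration of the first-order splitting mentioned in item (9), one model time step can be organized as in the sketch below; `advect`, `diffuse` and `chemistry` are placeholders standing for the actual solvers, not Polair3D routines.

```python
def split_step(c, dt, advect, diffuse, chemistry):
    """One first-order (Lie) splitting step: advection, then diffusion, then chemistry."""
    c = advect(c, dt)
    c = diffuse(c, dt)
    c = chemistry(c, dt)
    return c

# Trivial usage with identity operators standing in for the real solvers.
identity = lambda c, dt: c
c_next = split_step([40.0, 60.0], 600.0, identity, identity, identity)
```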

[34] The model domain essentially covers western Europe ([35.0°N, 10.5°W] × [57.5°N, 22.5°E]). Two meshes are considered. The reference mesh has a 0.5° horizontal resolution, and the altitudes of the tops of the vertical layers are 50 m, 600 m, 1200 m, 2000 m and 3000 m. The top layer is high enough to enclose the planetary boundary layer. A time step of 600 s is used. The coarse mesh has a 2° horizontal resolution and it includes three levels whose top heights are 50 m, 600 m and 1200 m. The time step is set to 1800 s. Both models use the RACM mechanism with 72 chemical species. Hence the dimension of the state vector is about 1.1 × 10^6 for the full-resolution model and 3.8 × 10^4 for the coarse-resolution model.

[35] An analysis of the simulations with the coarse mesh demonstrates that the main physical phenomena (at least for ozone) are reasonably reproduced in the context. The model retains good predictive capabilities (see the comparisons with observations in section 4). The coarse case is used to perform intensive tests. For instance, in the Kalman algorithms that we use, the results depend on random numbers. Thus, they can only be assessed from a large number of trials, which is not tractable with the full resolution model. Nevertheless the full resolution study will be carried out later to verify some key findings in the coarse case. The horizontal domain and its coarse discretization are shown in Figure 1.

Figure 1. Horizontal coarse discretization of model domain. The squares show the locations of EMEP monitoring stations, and the disc shows the location of the monitoring station Montandon (east of France).

3.2. Observations

[36] The observations to be assimilated are ozone hourly concentrations. These observations are provided by the EMEP (Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe) network (Figure 1). The network is made of 151 ground stations, among which 80–90 stations are actually available during the assimilation periods. They deliver point measurements integrated over 1 hour, but we assume that the observations are instantaneous, as this better fits the algorithm implementations.

[37] The EMEP network includes only regional stations, which ensures a proper comparison between the continental-model outputs and the observations. The model outputs are linearly interpolated (on the horizontal, not on the vertical) at the station locations; the observation operator Hk is therefore linear and its adjoint is easily derived.

[38] It is assumed that the error covariance matrix for ground ozone observations is diagonal, which is reasonable as the measurements from two stations are performed by distinct instruments. The standard deviation of the observation error is set to 10 μg m−3 [Flemming et al., 2003]. Note that the mean of ozone observations is about 70 μg m−3.

3.3. Reference Settings of the Assimilation Algorithms

[39] In this section, we list our default settings of the assimilation algorithms. Sensitivity studies will be performed by varying the algorithm settings in later sections. The experiments consist of two steps: assimilation and prediction. During the assimilation period, say [t0, tN], the observations are assimilated, and during the subsequent prediction period, say [tN+1, tT], the ozone forecasts are the model simulations starting from the analyzed model state at tN. In the reference setting the assimilation period is 1 day, from 1 July 2001 at 0100 UT to 2 July at 0000 UT. The prediction period is 1 day, from 2 July 2001 at 0100 UT to 3 July at 0000 UT.

[40] The model nonlinearity imposes an upper limit on the time length of the assimilation period (hereafter referred to as the assimilation window). Perturbations applied earlier than this upper limit are effectively forgotten. In fact, driven by the winds, the pollutants may be transported out of the modeling domain after a few days. There is also a lower limit during which the impact of the assimilated observations propagates over the whole model domain. In meteorology, the assimilation time interval is about 6 hours, extendable to 12 hours. In this study, the assimilation window is set to 1 day.

[41] In the reference setting the state vector includes only ozone concentrations of the first two levels in the model domain. The correlations are supposed to expand gradually, through model simulations, to the complete model domain and to the species other than ozone.

[42] In the Balgovind parameterization of the background error covariance matrix, the standard deviation $\sqrt{v}$ is set to 20 μg m−3 (derived from usual RMSE for ozone forecasts and from Mallet and Sportisse [2006b]), and the characteristic length is set horizontally to 3° (Lh) and vertically to 200 m (Lv). The details about the uncertain parameters to be perturbed for EnKF and RRSQRT will be given in section 4.2.2.

[43] The EnKF ensemble size r is chosen to be 30. For a comparable computational cost, in RRSQRT, the number of columns q of the mode matrix is set to 20, the number of columns of the square root $Q^{1/2}$ is set to 10, and the number of columns of the square root $R^{1/2}$ is set to 10. In 4DVar, we employ the L-BFGS optimization solver [Byrd et al., 1995]. In this study, the computational cost of the adjoint model is about 5–7 times larger than that of the forward model; consequently the number of iterations is set to 6 so that the 4DVar cost may be comparable. Note that fewer iterations make 4DVar suboptimal. However, we checked the evolution of the 4DVar cost function against iteration numbers, and found no considerable decrease in cost function values after 6 iterations (results omitted here).

4. Results

4.1. Coarse-Resolution Case

[44] Let s be the vector of model outputs along space and time, and let o be the vector of corresponding observations. The performance of ozone forecasts or assimilations can be assessed by the root mean square error (RMSE) calculated as

  • $\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(s_i - o_i\right)^2}$

where n is the total number of available observations. The RMSE over a certain period with respect to all available observations is called the "score." In this paper, the RMSE will always be given in μg m−3. Hereafter this unit will be omitted for convenience.
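
In code, the score reduces to a one-line computation over the matched model-observation pairs (array names are illustrative):

```python
import numpy as np

def rmse(simulated, observed):
    """Root mean square error over all available (space-time) observation points."""
    s, o = np.asarray(simulated), np.asarray(observed)
    return float(np.sqrt(np.mean((s - o) ** 2)))
```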

[45] The four assimilation algorithms provide better forecast scores compared to the reference simulations (model solutions without assimilation of observations), of course during the assimilation period (hourly forecasts for sequential assimilations), but also in the subsequent prediction steps (see Figure 2). OI has the best overall score, probably because the Balgovind parameterization of model error applies well to this coarse test case. From Figure 3, 4DVar has better scores during the early assimilation period but performs worse during the prediction period. The main reason may be that the underlying 4DVar is "strongly constrained," that is, there is no model error term in its cost function. Only the initial concentrations are controlled. The correction on the initial concentrations tends to be forgotten by the stable chemistry-transport system. EnKF provides the best performances during the late prediction period. It might benefit from its manner of perturbing the uncertain parameters. In OI, it can be considered that the model uncertainties are parameterized by the correlation in model states (which will be the initial conditions for the following forecasts). The impact of model uncertainties in these initial conditions gradually fades out, while the model uncertainties in uncertain parameters (listed in section 4.2.2) play an increasingly important role during the prediction period. RRSQRT shows poor performance compared with EnKF. This is probably due to the projection of mode matrices onto the leading eigenvectors of error covariance matrices. For a comparison of the assimilation performance between EnKF and RRSQRT, we refer to Hanea et al. [2004]. Note that in that paper, colored Gaussian noise was added to several uncertain parameters, which is different from our perturbation method. The ozone forecasts at EMEP stations are plotted in Figures 4 and 5. Most forecasts lie between the reference simulations and the observations. All forecasts during the prediction period approach the reference in the end. This shows again the rather low dependency of the short-range predictions on the initial conditions. Caution is needed when drawing conclusions, since the assimilation results can still be improved by optimal tuning of the algorithm parameters.

Figure 2. Scores of ozone concentrations during the assimilation period (day 1) and the prediction period (day 2).

Figure 3. Time evolution of the RMSE for the ozone forecasts. The score over 2 days is 27.76 for reference, 19.90 for OI, 23.11 for EnKF, 21.98 for 4DVar, and 24.63 for RRSQRT. The vertical line delimits the assimilation period from the prediction period.

Figure 4. Time evolution of average ozone forecasts over all available stations. The error bar shows the average spread of the EnKF forecast ensemble calculated over these stations.

Figure 5. Time evolution of ozone forecasts against available observations over 2 days at EMEP station Montandon.

4.2. Sensitivity Studies for the Coarse Case

[46] The first set of tests is carried out in the coarse case. Modifications of the configuration of each component of the data assimilation system, i.e., model, observations and algorithm, may influence the assimilation performance. The main difficulty in interpreting the results is that the error covariance structures B and $P^f$ remain unsettled subjects.

[47] As for the model component, we examine different state vectors to be controlled, alternative model physical parameterizations, the Balgovind parameterization of the background error covariance, and various perturbation settings for model error approximations. As for the observation component, the error variance ratio between observations and background concentrations is examined, and different observation networks are tested. As for the algorithm component, we evaluate the impact of the assimilation time length and of EnKF ensemble issues, i.e., ensemble size and randomness. Hopefully some clues for error modeling can be drawn from the results of this sensitivity study. Unless otherwise mentioned, only one factor is changed in each sensitivity study, and all other algorithm settings remain the same as those in section 3.3.

4.2.1. Ensemble Randomness and Size

[48] An important parameter for EnKF is its ensemble size. The directions of model error are approximated by the sample deviations from the ensemble mean. Recalling formula (13), these directions are related to the outcomes of the parameter perturbations. A key question for this approach is how fast the assimilation results converge as the ensemble size increases. The spread of the model error space is determined not only by the ensemble size but also by the definition of the parameter set to be perturbed. The latter is addressed in section 4.2.2.

[49] In this section, we conduct EnKF assimilations with ensembles of sizes 5, 10, 20, 30, 50, 70, 90 and 120. For each ensemble, the randomness of Monte Carlo sampling is accounted for by employing 10 different seeds for the random number generation. This means that 10 ensembles are generated for each ensemble size.

[50] The forecast scores over both the assimilation period and the prediction period are shown in Figure 6. Both converge as the ensemble size increases. The influence of the ensemble randomness decreases with ensemble size (see the error bars). Increasing the number of samples improves the forecast scores, but the improvements are modest, probably owing to the fast convergence of the procedure.

Figure 6. Forecast scores of EnKF against the ensemble size. The score for the reference simulation without assimilation is 29.55 over day 1, and 25.87 over day 2. The curve shows mean scores and the error bars show the standard deviations over 10 random seeds.

[51] The computational cost is proportional to the number of samples in the ensemble. The balance between computational cost and assimilation performance helps the specification of the ensemble size. For realistic model grids, there is usually a constraint of 30 ∼ 100 ensemble samples due to computational considerations.

4.2.2. Uncertain Parameters Setting

[52] Different parameter sets and perturbation magnitudes are listed in Table 2. The uncertain parameters are input data to the model. The perturbation magnitudes are kept reasonable (within the confidence interval at the 95% level), so that no instabilities may be produced owing to physically unrealistic parameter values. Notice that these parameters are only perturbed in Polair3D, which chiefly carries out the numerical time integration. For instance, the perturbation of the temperature has no impact on the deposition velocities, which are computed in preprocessing steps. The forecast scores with respect to the uncertain parameter settings are shown in Figure 7. The results in the prediction period are slightly sensitive to the different uncertain parameter sets, which is consistent with the finding of Mallet and Sportisse [2006b] that the turbulent closure introduces the highest uncertainty. The results are more sensitive to the uncertain parameter sets than to the perturbation magnitudes. This probably indicates that the dimension of the basis of the perturbation parameter space is more important for assimilation than the lengths of the basis vectors.

Figure 7. Forecast scores for EnKF against different uncertain parameter definitions. The parameter sets and perturbation magnitudes are defined in Table 2; {Ω0; α0} is the reference setting. The EnKF sample number is chosen to be 30 (white columns) and 90 (dark columns), respectively. The two columns of scores for each case show the forecast scores during the assimilation and prediction periods. The bar values are mean scores, and the error bars show the standard deviations over 10 random seeds.

Table 2. Definition of Uncertain Parameters^a

Parameter Name                         α0       α1       α2

Ω0
Boundary condition                     3.       3.       3.
Deposition velocity                    1.5      2.       3.
Photolysis rate                        1.3      1.5      2.
Surface emission                       1.5      2.       3.
Attenuation                            1.3      1.5      2.
Vertical differential coefficient      1.3      1.5      2.

Ω′
Cloud height                           1.3      1.5      2.
Vertical wind                          1.3      1.5      2.
Meridional wind                        1.3      1.5      2.
Zonal wind                             1.3      1.5      2.
Specific humidity                      1.3      1.5      2.

Ω″
Pressure                               1.3      1.5      2.
Air density                            1.3      1.5      2.
Meridional differential coefficient    1.3      1.5      2.
Zonal differential coefficient         1.3      1.5      2.
Temperature                            0.005    0.01     0.015

^a Let Ω0 be the set of parameter names for the reference setting in section 3.3. Let Ω′ and Ω″ be the two sets of additional parameter names. We denote Ω1 as {Ω0, Ω′}, and Ω2 as {Ω0, Ω′, Ω″}. The perturbation magnitude is characterized by α defined as in equation (22). The reference magnitudes are listed in the α0 column. In the α1 and α2 columns, enlarged magnitudes are defined. Note that the distribution of temperature is supposed to be normal, and its magnitude should be interpreted as a relative standard deviation.

[53] The perturbation magnitudes are spatiotemporally homogeneous in this study: the perturbation coefficient $\alpha^{\gamma/2}$ in equation (22) does not depend on spatial coordinates, and temporal correlations are not taken into account. However this hypothesis is not necessarily true. Refined perturbations might improve the forecast performances. Furthermore, additional uncertain parameters may be included for a larger model error spread.

4.2.3. Assimilation Window

[54] The determination of an optimal assimilation window is essentially linked with the model nonlinearity, but should be treated separately according to the sequential or variational context. In the variational case, the model nonlinearity makes the cost function nonconvex, and thus the optimization may suffer from the presence of local minima. Clearly there are constraints on the assimilation window, and for better performance the assimilation window has to be short [Pires et al., 1996]. In the sequential case, the observations are assimilated as soon as they are available. There are corrections on the state vector until the end of the assimilation window, which is an advantage for the subsequent prediction steps. A rigorous analysis demands in-depth investigation of how the information (from observations) propagates among the state components.

[55] We perform brute-force tests. In the sequential case, the prediction period is fixed from 8 July 2001 at 0100 UT to 9 July at 0000 UT, whereas the assimilation window varies from 1 day to 7 days and always ends at 0000 UT 8 July. The algorithm settings are the same as those for the reference case. For EnKF, the random seed is fixed, with an ensemble size set to 30 and 90. The forecast scores over the prediction period are compared in Figure 8. We observe a considerable improvement in forecast scores with a window of 2 days compared with a 1-day window. The first day of assimilation could be interpreted as an ensemble initialization, since the initial conditions of all members are identical in our implementation. We performed ensemble forecasts starting from identical samples and checked the ensemble spread (results not presented here), and we found that the spread reached its maximum within 10 hours. This explains why a 1-day assimilation with EnKF may be unsatisfactory. EnKF with longer assimilation windows (more than 2 days) outperforms OI in this experiment. Larger assimilation windows (more than 5 days) tend to be unnecessarily long since the corrections are rapidly forgotten by the model.

Figure 8. Forecast scores of OI and EnKF (with 30 and 90 members) against the number of assimilation days.

[56] In the variational context, we perform an experiment with the same setting as in the sequential case. The assimilation window varies from 1 to 4 days, followed by 1 day of prediction. The start date of the prediction period is fixed at 0100 UT 8 July 2001. The forecast scores are shown in Figure 9. The performance over the prediction period decreases with longer assimilation periods and approaches the score of the reference simulation without assimilation. These results clearly indicate the limitation of strongly constrained 4DVar, in which only initial conditions are controlled. Model error has to be taken into account to form the weakly constrained 4DVar for better forecast performance.

Figure 9. Forecast scores against the number of assimilation days for the two experiments using 4DVar.

4.2.4. Physical Parameterization

[57] In order to assess the robustness of the assimilation, we apply EnKF to modified models which differ in their physical parameterizations. Several alternatives to the reference parameterizations are listed in Table 3, and the corresponding assimilation results are shown in Figure 10. The assimilations are performed using the reference EnKF algorithm with 30 samples. The forecast scores are highly sensitive to the chemical mechanism, as in the uncertainty investigations of Mallet and Sportisse [2006b].

Figure 10. Forecast scores of EnKF, for modified models over the assimilation and the prediction periods.

Table 3. Physical Parameterization Settings

Parameterization         Reference                 Alternative
Deposition velocities    Zhang et al. [2003]       Wesely [1989]
Vertical diffusion       Troen and Mahrt [1986]    Louis [1979]
Chemistry                RACM                      RADM2 [Stockwell et al., 1990]

4.2.5. State Component

[58] It is not straightforward to control all model components, primarily because the correlations among them are unavailable. Investigations are needed to determine the most relevant state vector. In this section, we test the impact of including different vertical levels of ozone concentrations and different chemical species in the state vector. In Figure 11 we show the time evolution of the RMSE when controlling different vertical levels of ozone concentrations. Only including the first model level in the state vector is a fairly limited approach, as the vertical transport plays a crucial role in ozone evolution. Consequently it is not surprising that the advantage of assimilating the first two levels over assimilating only the ground level is enormous for all assimilation algorithms. However, in OI, the improvement of assimilating all levels over the first two levels is slight. By contrast, in 4DVar the improvement when assimilating all levels is still considerable during the assimilation period. This is probably because OI is a local assimilation algorithm in the sense that observations are assimilated instantaneously, whereas 4DVar searches for global optima over the assimilation period that best fit the observations. In EnKF, the improvement when assimilating all levels is considerable even during the prediction period. This might be due to the difference in error modeling. For OI and 4DVar, the vertical correlations are parameterized by the Balgovind correlation function. For EnKF, the vertical correlation structure is represented by the statistics of an ensemble generated by the perturbation method.

Figure 11. Time evolution of the RMSE against different vertical levels of ozone concentration to be controlled.

[59] Because the correlation among different species is a priori unknown, only EnKF is employed to test the impact of assimilating different species. The ensemble size is 30, and the same random seed is used for all experiments. Only the first two levels of the domain are controlled. The species included in the state vector are combinations of O3, NO, and NO2. The results in Figure 12 show modest impact when assimilating different species. It is hard to interpret these results in depth. For further investigations, the interactions among model components have to be quantified, for instance, by relative entropy [Liang and Kleeman, 2005].

Figure 12. Forecast scores against different state components over assimilation and prediction periods.

4.2.6. Parameters in Balgovind Correlation Function

[60] Balgovind characteristic lengths Lh and Lv determine the spatial structure of the background error covariances. We perform assimilations (OI and 4DVar) with different lengths listed in Table 4. The corresponding covariance structures vary from small- to large-scale correlations. Other experimental settings are the same as those of section 3.3. The ozone forecast scores are shown in Figures 13 and 14.

Figure 13. Forecast scores using OI against different configurations in Table 4.

Figure 14. Forecast scores using 4DVar against different configurations in Table 4.

Table 4. Different Configurations for Balgovind Scale Parameters

Configuration   Lh (deg)   Lv (m)
a               0.1        30
b               0.1        200
c               0.1        500
d               1.5        30
Reference       1.5        200
e               1.5        500
f               10         30
g               10         200
h               10         500

[61] The assimilation is quite sensitive to the Balgovind characteristic lengths. For OI, the worst scores over the prediction period are those with the smallest vertical-scale parameter (Lv = 30 m). In this case, the vertical correlation is too weak. The worst scores over the assimilation period are those with the largest horizontal-scale parameter (Lh = 10°). There might be spurious correlations from distant observations. It seems that the medium-scale correlation in the reference setting is a proper choice. The forecast scores, especially those over the assimilation periods, deteriorate when the vertical-scale parameter increases.

4.2.7. Observation Effect

[62] The quantity and the quality of the observations are important factors for assimilation/prediction systems. There might be an optimal relationship between model resolutions and observation availabilities described implicitly by the optimality system [Le Dimet and Shutyaev, 2005; Bocquet, 2005]. Preliminary experiments are designed to address the model-observation relationships.

[63] In a first experiment, we examine the impact of the observation network on ozone forecast performance. The original 151 EMEP stations are catalogued into three partitions: (1) the center stations versus the surrounding ones, (2) the west stations versus the nonwest ones, and (3) randomly selected stations versus the unselected ones. The observations from the center, west and randomly chosen stations are assimilated, respectively. We take their counterparts as validation stations. Two assimilation algorithms, i.e., OI and EnKF, are employed with the reference settings. The forecast scores over the assimilation and the prediction periods given different observation networks are listed in Table 5. The corresponding scores without assimilation are listed for comparison in Table 6. The score gains are all positive. There is no clear winner between the two algorithms. OI has better overall performance. EnKF shows smaller disparities and better forecast performance over the prediction period. This may mainly result from the different correlation structures employed by the two methods (detailed in section 4.3).

Table 5. Forecast Performances for Different Observation Networks^a

Observation Networks   Day 1                                          Day 2
                       Assimilation Stations   Validation Stations    Assimilation Stations   Validation Stations
                       OI       EnKF           OI       EnKF          OI       EnKF           OI       EnKF
Center                 …        …              …        …             …        …              …        …
West                   …        …              …        …             …        …              …        …
Uniform                …        …              …        …             …        …              …        …

^a The numerators are the forecast scores with assimilations, and the denominators are the gains in scores over the corresponding simulations without assimilation. The scores without assimilation are shown in Table 6.
Table 6. Forecast Scores of Reference Simulation Without Assimilation for Different Observation Networks

        Center   Around   West    Nonwest   Uniform   Rest
Day 1   29.54    29.55    23.89   33.40     29.48     29.61
Day 2   21.81    28.67    27.20   24.78     25.85     25.88

[64] The inverse of the observation variance can be interpreted as the measurement accuracy. The assimilation is an optimization process with the objective weighted by the relative accuracies between observations and model simulations. In a second experiment, we examine the forecast performance using OI and 4DVar with a range of ratios between observation and background error variances. The observation/background ratios, R/B ratios in short, are shown in Table 7. The results are plotted in Figure 15. The assimilation performance is improved by increasing observation accuracies. However, there is little gain when decreasing the R/B ratio below 0.05. 4DVar is less sensitive to the R/B ratio, since the background errors are only considered at the initial conditions. For OI, when the observations are supposed to be extremely accurate (R/B ratio at 0.01), there are artificial fluctuations at some stations where the observations are not compatible with the chemical state of the model.

Figure 15. Forecast scores against the different R/B ratios shown in Table 7.

Table 7. Different Ratios Between Observation and Background Error Variances^a

             R/B     R       B
R++          0.01    100     10000
R+           0.05    100     2000
Reference    0.25    100     400
B+           1       400     400
B++          10      4000    400

^a R, observation error variances; B, background error variances; a plus sign means more accuracy.

4.3. Model Error Covariance Structure for Coarse Case

[65] The model error covariance structure (B or P) is decisive for the assimilation behavior. On many occasions we resort to it for explanations of our results. The covariance between the error at the station Montandon and the error in all ground cells is shown in Figure 16. The covariance field obtained from the statistics of the EnKF forecast ensemble shows an irregular structure which brings detailed information compared to the isotropic Balgovind parameterization. However, spurious correlations may be produced by the homogeneous perturbations (see equation (22)).
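
The ensemble-based covariance field of Figure 16 can be estimated as sketched below: the covariance between the ensemble anomalies at the station cell and the anomalies at every ground cell. The array shapes and names are assumptions for illustration only.

```python
import numpy as np

def covariance_with_station(ensemble, station_index):
    """ensemble: (r, n_cells) forecast ozone at ground cells for r members.
    Returns the covariance between the station cell and every ground cell."""
    anomalies = ensemble - ensemble.mean(axis=0)
    r = ensemble.shape[0]
    return anomalies.T @ anomalies[:, station_index] / (r - 1)

# Example with a random 30-member ensemble over 33 x 65 ground cells.
ens = np.random.randn(30, 33 * 65) * 20.0 + 70.0
cov_map = covariance_with_station(ens, station_index=1000).reshape(33, 65)
```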

Figure 16. The covariance between the error at the station Montandon and the error in all ground cells at 1300 UT, 2 July 2001: Balgovind parameterization and approximation by the EnKF forecast ensemble.

[66] In Figure 17, we show the assimilation/prediction performances at several randomly chosen stations. Except for three stations (St. Koloman, Heidenreichstein, and Bottesford), the ensemble predictions fail in the sense that the observations (with their standard deviations set to 10 μg m−3) are not within the range of model errors represented by the EnKF ensemble spread. The ensemble spread during the assimilation period is dramatically decreased after assimilating observations. In our EnKF implementation, no additional inflation, as used in the work by Constantinescu et al. [2007b], is applied to the state error covariance $P_k^f$ (see equation (14)).

Figure 17. EnKF assimilation/prediction performances at nine stations. The titles read station names and their coordinates in model grid indices. The horizontal axis shows the accumulated available observation numbers along time. Along the vertical axis, the ozone concentrations are plotted in μg m−3. The diamond points show the observations, the short lines plot assimilated concentrations, and the lines with error bars are the means of the forecast ensemble. The error bars are the relative standard deviations calculated with the forecast ensemble.

[67] The ensemble relative standard deviations averaged over time are shown in Figure 18. As expected, the ensemble spread decreases when the assimilation is applied. One may notice high uncertainties around the coasts, although this is not as clear as in the work by Mallet and Sportisse [2006b], where the uncertainties were generated by statistics over several months. Our ensemble spread describes an accidental uncertainty configuration that depends not only on the chemistry-transport processes, for example, turbulence, but also on the meteorological scenarios.

Figure 18. Maps of the averaged relative standard deviation during both assimilation and prediction periods (a) calculated with the Monte Carlo forecast ensemble (without assimilation) and (b) calculated with the EnKF forecast ensemble. The ensemble spread is decreased by assimilating observations.

4.4. Cycling Forecast for Coarse Case

[68] It is expected that the findings in the previous sections are independent of the assimilation dates. To this end, we perform assimilation/prediction processes consecutively for 1 week. The length of the assim./predict. periods is chosen to be 1 day. The first assimilation period is from 1 July 2001 at 0100 UT to 2 July at 0000 UT, followed by the prediction from 2 July at 0100 UT to 3 July at 0000 UT. The assimilation period of the second assim./predict. process is the same as the prediction period of the first assim./predict. process. The second prediction period is from 3 July at 0100 UT to 4 July at 0000 UT. The cycling continues until the final assim./predict. process. The last prediction period is from 8 July at 0100 UT to 9 July at 0000 UT. The initial conditions for the subsequent assimilation periods are the hourly forecasts based on the previously controlled states after assimilation. The performances of OI, EnKF and 4DVar forecasts over the prediction periods are shown in Figure 19.

Figure 19. The 1-day forecast performances based on model simulations with/without assimilations in the context of cycling assimilation/predictions.

[69] The improvement of forecast performance with assimilation is obvious. The Balgovind parameters for OI and 4DVar are constant, as are the perturbation magnitudes for the EnKF samples. The weaker performance of 4DVar is probably due to the absence of a model error term during assimilation. EnKF surpasses OI in forecasts longer than 12 hours. The approximation of model error by refined perturbations is promising for longer forecasts.

4.5. Full-Resolution Case

[70] The previous results are obtained with coarse models at a resolution of 2° × 2° × 1800 s with 3 vertical levels. In this section, the reference resolution at 0.5° × 0.5° × 600 s with 5 levels is employed (detailed in section 3.1). There are 33 cells along latitude and 65 cells along longitude.

[71] All available assimilation algorithms are tested for this full-resolution setting. The assimilation experiments are similar to those in section 3.3, but conducted at three different dates. The assimilations are performed on 1, 3, and 7 July, respectively, and the corresponding prediction dates are 2, 4, and 8 July. The Balgovind parameters for B are the same as those in the coarse-resolution case for OI and 4DVar. The EnKF ensemble size is set to 30. For RRSQRT, the number of columns in the mode matrix is 30, the number of columns in $Q^{1/2}$ is 20, and the number of columns in $R^{1/2}$ is 10.

[72] In general, the magnitude and structure of the model error vary with the model resolution. The assimilation results are shown in Figures 20 and 21. The assimilations improve the forecast scores. The poor performance of the EnKF forecasts on 4 July might be the consequence of excessive perturbations. Comparing with the forecast scores on 2 July for the coarse case in Figure 2, one finds that the Balgovind parameterization of the model error is stable (OI and 4DVar results), whereas the perturbation methods are sensitive to changes in the model resolution (EnKF and RRSQRT).


Figure 20. Forecast scores of ozone concentrations on the prediction dates.



Figure 21. Time evolution of ozone forecasts against available observations at Montandon station.


[73] Better results can be obtained by tuning the algorithm parameters of the perturbation methods in the full-resolution case. The sensitivity of the assimilation performance to the ensemble size and to the assimilation window is presented in Figures 22 and 23. The aim of this simple sensitivity study is not to find the optimal algorithm parameters for the full-resolution model, but to verify the main findings of the coarse case. For instance, the forecast scores are improved by increasing the number of ensemble members. We observe a spin-up process during the first day of EnKF assimilation, and satisfactory results are obtained with the assimilation window set to 3 days.
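Such a study amounts to a simple parameter sweep, sketched below with a hypothetical `run_enkf_experiment` driver standing in for the full assimilation/prediction run; the parameter values are illustrative.

```python
def sensitivity_sweep(run_enkf_experiment):
    """Sweep the two algorithm parameters examined in Figures 22 and 23:
    the EnKF ensemble size and the number of assimilation days.
    `run_enkf_experiment` is assumed to return a forecast score (e.g., RMSE)."""
    scores = {}
    for n_members in (10, 20, 30, 50):                 # ensemble sizes to test
        scores[("ensemble_size", n_members)] = run_enkf_experiment(
            n_members=n_members, assimilation_days=1)
    for n_days in (1, 2, 3):                           # assimilation-window lengths
        scores[("assimilation_days", n_days)] = run_enkf_experiment(
            n_members=30, assimilation_days=n_days)
    return scores
```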


Figure 22. Forecast scores on 2 July against EnKF ensemble size.



Figure 23. EnKF forecast scores on 8 July against the number of assimilation days.


5. Conclusion


[74] In order to design suitable assimilation algorithms for short-range ozone forecasts in realistic applications, four algorithms, namely optimal interpolation, reduced-rank square root Kalman filter, ensemble Kalman filter, and four-dimensional variational assimilation, have been implemented and compared in the same benchmark settings.

[75] Although the forecasts beyond 1 day tend to approach the model simulations without assimilation (because of the low dependency of the model simulations on initial conditions), it has been shown that the assimilation algorithms significantly improve the ozone forecasts. Data assimilation would thus be an indispensable part of practical ozone forecast systems, as it is in numerical weather prediction.

[76] The comparison results have illustrated the limitations and potentials of the different assimilation algorithms. OI provides the best overall performance; it benefits from the Balgovind parameterization of the model uncertainties during the assimilation periods. In EnKF, the model uncertainties are approximated by the statistics of an ensemble generated by perturbing uncertain model parameters. This perturbation method shows good potential to alleviate the constraint of the low dependency on initial conditions in ozone forecasts, and EnKF produces the best forecasts toward the end of the prediction periods. The strongly constrained 4DVar performs moderately, because uncertainties are taken into account only at the initial date of the assimilation. Further studies are needed, for example, the estimation of ozone concentrations jointly with the emission rates [Elbern et al., 2007]. We note that no definitive conclusions can be drawn, because the formulation of the model error remains unsettled. We paid less attention to RRSQRT since, in our implementation, it is quite similar to EnKF.

[77] We have also conducted sensitivity analyses of the algorithm parameters, for example, ensemble randomness and size, assimilation window length, perturbation fields, and various model settings. Further refinements of the assimilation algorithms can thus be tested by tuning these parameters for better forecast performance. This is especially necessary for the full-resolution model.

[78] The complexity of the chemistry-transport phenomena and the limited observations make both the modeling and the assimilation difficult. The approximations of these complex phenomena make CTMs imperfect, and uncertainties arise. If a deterministic model is employed, or if the uncertainties are not realistic, the forecasts of this stable system approach the reference simulations obtained without assimilation. Therefore all assimilation algorithms have to be adaptive, in one way or another, so that the uncertainties are better represented.

[79] For the design of better assimilation algorithms, serious investigations of error modeling are needed. The ensemble could be generated from more uncertainty sources, for example, numerical approximations and subgrid physical parameterizations. The statistics of such an enlarged ensemble are expected to approximate the model error more accurately.

[80] Spatially heterogeneous perturbations, which damp spurious long-distance correlations, would certainly produce more realistic model errors. Another concern is that lognormal perturbations may result in non-Gaussian model errors; in this case, assimilation methods that do not assume Gaussian statistics may have to be considered. Other approaches rely on hybrids of sequential and variational methods, which essentially use the assimilation results of both for error modeling. The comparison of ensemble forecast techniques [Mallet and Sportisse, 2006a] with assimilation algorithms would also be an interesting topic for future studies.
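As a point of reference, a minimal sketch of a homogeneous multiplicative lognormal perturbation, the kind whose limitations are discussed above, is given below; the field, spread value, and function name are illustrative assumptions and not the paper's implementation.

```python
import numpy as np

def perturb_lognormal(field, relative_spread, rng):
    """Multiply a parameter field (e.g., an emission field) by a single
    lognormal factor with median 1; the perturbation is spatially homogeneous,
    hence it introduces no smoothing of long-distance correlations."""
    sigma = np.sqrt(np.log(1.0 + relative_spread ** 2))
    return field * rng.lognormal(mean=0.0, sigma=sigma)

rng = np.random.default_rng(0)
emissions = np.ones((3, 33, 65))             # illustrative parameter field
perturbed = perturb_lognormal(emissions, relative_spread=0.3, rng=rng)
```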

References


Supporting Information

Filename                      Format                 Size  Description
jgrd14800-sup-0001-t01.txt    plain text document    1K    Tab-delimited Table 1.
jgrd14800-sup-0002-t02.txt    plain text document    1K    Tab-delimited Table 2.
jgrd14800-sup-0003-t03.txt    plain text document    0K    Tab-delimited Table 3.
jgrd14800-sup-0004-t04.txt    plain text document    0K    Tab-delimited Table 4.
jgrd14800-sup-0005-t05.txt    plain text document    1K    Tab-delimited Table 5.
jgrd14800-sup-0006-t06.txt    plain text document    0K    Tab-delimited Table 6.
jgrd14800-sup-0007-t07.txt    plain text document    0K    Tab-delimited Table 7.

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.