Journal of Geophysical Research: Atmospheres

Inverse modeling of NOx emissions at regional scale over northern France: Preliminary investigation of the second-order sensitivity

Authors

  • Denis Quélo,

    1. Centre d'Enseignement et de Recherche en Environnement Atmosphérique, École Nationale des Ponts et Chaussées–Électricité de France Recherche et Développement, Champs-sur-Marne, France
    2. CLIME, Institut National de Recherche en Informatique et en Automatique–École Nationale des Ponts et Chaussées, Champs-sur-Marne, France
  • Vivien Mallet,

    1. Centre d'Enseignement et de Recherche en Environnement Atmosphérique, École Nationale des Ponts et Chaussées–Électricité de France Recherche et Développement, Champs-sur-Marne, France
    2. CLIME, Institut National de Recherche en Informatique et en Automatique–École Nationale des Ponts et Chaussées, Champs-sur-Marne, France
  • Bruno Sportisse

    1. Centre d'Enseignement et de Recherche en Environnement Atmosphérique, École Nationale des Ponts et Chaussées–Électricité de France Recherche et Développement, Champs-sur-Marne, France
    2. CLIME, Institut National de Recherche en Informatique et en Automatique–École Nationale des Ponts et Chaussées, Champs-sur-Marne, France

Abstract

[1] The purpose of this article is to perform the inverse modeling of emissions at regional scale for photochemical applications. The case study is the region of Lille in northern France for simulations in May 1998. The chemistry-transport model, Polair3D, has been validated with 1 year of model-to-observation comparisons over Lille. Polair3D has an adjoint mode, which enables inverse modeling with a variational approach. A sensitivity analysis has been performed in order to select the emission parameters to be modified so as to improve ozone forecasts. It is shown that inverse modeling of the time distribution of nitrogen oxide emissions leads to satisfactory improvements even after the learning period. A key issue is the robustness of the inverted emissions with respect to uncertain parameters. A brute force second-order sensitivity analysis of the optimized emissions with respect to other parameters shows that the optimized time distribution of NOx emissions is robust.

1. Introduction

[2] Emission inventories used in air pollution modeling are subject to large uncertainties [e.g., Hanna et al., 2001]: (1) The spatial distribution of emissions is not always well known and may be highly heterogeneous; (2) The time distribution of emissions is strongly related to variable parameters (such as traffic conditions or biogenic activity); (3) The chemical distribution is also uncertain: The relations between the chemical species given by the emission inventories, the "real" chemical species and the "model" species (the species described in chemistry-transport models) are often questionable.

[3] A growing field of interest is therefore the inverse modeling of emissions (more precisely, of parameters related to the emissions) on the basis of a combined use of model outputs and observational data (provided by monitoring networks). These topics belong to the larger domain of data assimilation.

[4] There are further reasons to support these approaches. One may be interested in estimating the emissions of a given sector or of a given country in order to check the fulfillment of a regulatory agreement. This is typically the case for greenhouse gases at global scale or for the pollutants regulated by the Convention on Long-Range Transboundary Air Pollution (LRTAP) over Europe.

[5] An increasing number of works have been devoted to these topics in recent years. At global scale, passive tracers or weakly reactive species, such as CO or CH4, have already been studied. One can refer, for instance, to the work of Kaminski [1998], Bergamaschi et al. [2000], and Bousquet et al. [1999]. In the case of linear tracers, such as radionuclides, many methods have been proposed, following the Chernobyl accident and the ETEX campaign [Hourdin and Issartel, 2000].

[6] The situation is quite different for reactive chemistry-transport models, as the dependence of concentrations on emissions is nonlinear. Moreover, the models are high-dimensional systems (whose dimension is driven by the number of chemical species in the chemical mechanism). One can refer to the work of Elbern et al. [2000] for academic studies and to Elbern and Schmidt [2001], Mendoza-Dominguez and Russell [2001], and van Loon et al. [2000] for examples at continental scale. Fewer works have been devoted to inverse modeling at regional scale: We can refer, for instance, to Chang et al. [1997] for the inverse modeling of biogenic isoprene emissions over Atlanta with a Kalman filter, and to Mendoza-Dominguez and Russell [2001] for a linearized method, applied to Atlanta as well.

[7] The purpose of this paper is to perform inverse modeling of emissions for air pollution applications at regional scale. A key issue, which is seldom investigated, is the robustness of the inverted parameters: How can one be sure that the results are not mere "fits" of the model outputs to the data? What is the quality of the new emission parameters? How sensitive are these parameters to other uncertain parameters (assumed to be known)?

[8] We have investigated these issues with an application to northern France, over the region of Lille. A comprehensive 3D chemistry-transport model, Polair3D [Boutahar et al., 2004], has been validated by comparisons to measured data over the year 1998 [Quélo, 2004]. A sensitivity analysis has also been performed in order to choose the relevant parameters for inverse modeling. The choice was made to invert the time distribution of NOx emissions. The NOx emissions are the emissions that have, at first order, the greatest impact on ozone in this case. Their time distribution is not well known, contrary to their spatial distribution, for which an exhaustive emission inventory was built in the framework of the French research program PREDIT (a program for research, experimentation and innovation in land transport).

[9] Inverse modeling has been performed through variational methods, Polair3D having an adjoint mode. The learning database is composed of data from the week of 11–15 May. The use of the modified emission inventory improves the simulated results of the two following weeks. Moreover the robustness of the inversion has been investigated.

[10] This paper is organized as follows. The second section is devoted to the general presentation of the case study. In the third section, some preliminary tests are made in order to assess the sensitivity of the model outputs with respect to the emission data. The cost function, which measures the discrepancy between observations and model outputs, is also studied. In the fourth section, the inverse modeling approach is presented and twin experiments (based on synthetic data) are performed in order to validate the numerical models. The impact of uncertainties (for instance, model errors) is investigated. In the fifth section, the real case is studied with the inversion of a time distribution for NOx emissions. A posteriori verifications and robustness tests are also performed in order to assess the quality of the optimized parameters.

2. Case Study

[11] Lille is part of a dense urban area in northern France that includes many intermediate cities. Most of the pollution therefore comes from local anthropogenic activities. Pollution may also result from nonlocal sources, since Lille is sometimes located in the plume of highly polluted areas (Paris, the Ruhr, London). In order to take this plume into account, a simulation at European scale is first performed and the simulation domain over Lille is nested within it.

2.1. Brief Overview of the Chemistry-Transport Model: Polair3D

[12] Polair3D [Boutahar et al., 2004] is a comprehensive 3D Eulerian chemistry-transport model developed at CEREA (laboratory of École Nationale des Ponts et Chaussées and the Research and Development Division of Électricité de France). It is one part of the Polyphemus modeling system [Mallet et al., 2005] (also developed at CEREA and available under the GNU General Public License at http://www.enpc.fr/cerea/polyphemus/), notably devoted to impact studies, forecasts and data assimilation for the atmospheric dispersion of chemical species and radionuclides. Within this system, Polair3D is mainly responsible for the time integration of the chemistry-transport equation. It supports several chemical mechanisms, including RACM [Stockwell et al., 1997], which was chosen for this study. The other components of Polyphemus provide the input fields to Polair3D (meteorological fields, deposition velocities, etc.), computed using relevant physical parameterizations.

[13] One of these components is the library AtmoData [Mallet and Sportisse, 2005], which gathers the physical parameterizations. Thanks to this library and the programs available with it (themselves part of Polyphemus), the following fields were computed for this study: (1) the meteorological fields extracted from ECMWF data; (2) the vertical diffusion coefficients computed with Louis' parameterization [Louis, 1979]; (3) the cloud attenuation computed following Chang et al. [1987] and Madronich [1987]; (4) the deposition velocities with Wesely's parameterization [Wesely, 1989]; (5) the anthropogenic emissions from the EMEP inventory, following Middleton et al. [1990], at European scale (at regional scale, the emissions are generated as explained in section 2.3); (6) the biogenic emissions computed on the basis of Simpson et al. [1999]; (7) the boundary conditions (for the European simulation) extracted from a MOZART 2 [Horowitz et al., 2003] simulation over a typical year.

[14] All these fields, which appear in the chemistry-transport equation, are then available to Polair3D, which integrates the equation in time with efficient numerical schemes. It uses a first-order splitting method in which the advection is integrated first, then the diffusion and finally the chemistry. The advection scheme is a third-order direct space-time scheme with a Koren flux limiter [Verwer et al., 1998]. The diffusion and the chemistry are both integrated with a second-order Rosenbrock method, which is suitable for stiff problems and whose implicitness enables the use of large time steps (in this study, 600 s at both European and regional scale). For the chemistry, the sparsity of the Jacobian matrix involved in the Rosenbrock method is exploited to improve the computational efficiency [Sandu et al., 1996].
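A minimal sketch of this splitting sequence, with placeholder operators standing in for the actual Polair3D schemes (array shapes and species count are illustrative):

```python
import numpy as np

def advect(c, dt):
    # Placeholder for the third-order direct space-time advection
    # scheme with Koren flux limiter.
    return c

def diffuse(c, dt):
    # Placeholder for the second-order Rosenbrock integration of diffusion.
    return c

def react(c, dt):
    # Placeholder for the second-order Rosenbrock integration of the
    # stiff RACM chemistry (sparse Jacobian).
    return c

def step(c, dt=600.0):
    """One first-order (Lie) splitting step: advection, then diffusion,
    then chemistry."""
    c = advect(c, dt)
    c = diffuse(c, dt)
    c = react(c, dt)
    return c

# March a concentration field over 1 hour with the 600 s time step
# quoted in the text.
c = np.zeros((72, 21, 24, 9))  # (species, x, y, z), illustrative sizes
for _ in range(6):
    c = step(c)
```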

[15] The simulations performed with Polair3D have proven to be reliable. At European scale, the model has been validated notably over the year 2001. Details about this validation are given by Mallet and Sportisse [2004]. For instance, the comparison with measurements of ozone peaks from May to August 2001 at 242 stations (27,000 measurements) gives a root mean square error of 22.7 μg m⁻³ and a correlation of 72.7%. At regional scale, the results are also satisfactory, as shown in the following sections.

[16] One should note that the modeling system Polyphemus provides data sets and parameterizations consistent with the current knowledge in physics. Numerical adjustments that are not supported by physics were discarded even if they could lead to better results. This is the only means to ensure that the inverse modeling of emissions makes sense, so that the retrieved emissions have a chance to be closer to the real emissions.

[17] The last point to be emphasized is the availability of a tangent linear mode and an adjoint mode of Polair3D with respect to virtually any input parameter, including the emissions. This feature is provided by automatic differentiation [Mallet and Sportisse, 2004].

2.2. Domain

[18] The domain covers greater Lille, over a 21 km × 24 km area centered on the city of Lille. It is discretized with a 1 km × 1 km horizontal grid and 9 vertical levels ranging from the ground to 3000 m, so as to include the atmospheric boundary layer. The height of the first layer is 30 m and the thickness of the other layers ranges from 120 m to 510 m.

2.3. Emissions Over Lille

[19] The anthropogenic emissions come from several databases: (1) The EMEP inventory for VOCs, NOx, SO2 and CO is the result of a “top-down” approach and is available in annual totals over a 50 km × 50 km grid for each activity sector (Selected Nomenclature for Air Pollution (SNAP)). (2) Traffic emissions are delivered by the Centre d'Étude Technique de l'Équipement (CETE institute) over the Lille regional administrative area on the basis of the road locations, the distribution of vehicle categories and the emission factors from the standard COPERT III European methodology. (3) The total annual emissions of major industrial sources are collected by the Direction Régionale de l'Industrie, de la Recherche et de l'Environnement (DRIRE institute): Within the Lille area, 20 point sources are taken into account including production processes and waste treatment.

[20] Emissions from the EMEP inventory are replaced with the detailed data provided by the last two databases whenever possible. A large part of the NOx emissions (about 60%) comes from traffic and is therefore well described by our inventory. This is highly valuable given the sensitivity of photochemical concentrations to the NOx emissions.

[21] The EMEP annual totals are first mapped to the simulation grid. Because of the coarse resolution of the EMEP inventory, the spatial distribution is mainly determined by the land use coverage in order to associate the major part of the emissions with urban areas. Thus, within an EMEP cell, the emissions are distributed so as to give an appropriate weight to the urban areas, the forests and the other areas (respective weights of 12, 1.6, and 1). The two other databases are already accurately distributed and their horizontal grid matches the simulation domain.
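A minimal sketch of this land-use weighting (the weights 12, 1.6 and 1 are those quoted above; the function name, array shapes and land-use encoding are illustrative):

```python
import numpy as np

def disaggregate_emep_cell(annual_total, landuse):
    """Distribute an annual EMEP cell total over the 1 km x 1 km cells
    it covers, weighting urban areas, forests and other areas.

    landuse: integer array over the fine cells, with 0 = urban,
    1 = forest, 2 = other.
    """
    weights = np.array([12.0, 1.6, 1.0])[landuse]
    return annual_total * weights / weights.sum()

# Illustrative 50 km x 50 km EMEP cell covered by 50 x 50 fine cells.
rng = np.random.default_rng(0)
landuse = rng.integers(0, 3, size=(50, 50))
cell_emissions = disaggregate_emep_cell(1.0e3, landuse)
assert np.isclose(cell_emissions.sum(), 1.0e3)  # mass is conserved
```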

[22] The emissions are then distributed vertically in order to take into account the stack heights and the plume rise due to the high temperature at release.

[23] Annual EMEP emissions and point sources are distributed monthly, weekly and hourly according to coefficients provided by GENEMIS (Eurotrac-2 subproject, http://www.gsf.de/eurotrac/) for each emission sector. The time distribution of road traffic emissions is computed on the basis of daily and hourly coefficients derived from traffic activity measurements.

[24] It is assumed that the NOx emissions are composed of NO (90%) and NO2 (10%). The NMVOC emissions are speciated following Middleton et al. [1990].
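A minimal sketch of this time disaggregation and chemical split (the coefficient tables here are flat placeholders, not the actual GENEMIS coefficients):

```python
import numpy as np

def hourly_emission(annual_total, month, weekday, hour,
                    monthly, weekly, hourly):
    """Apply monthly/weekly/hourly coefficients to an annual total;
    the coefficient tables are assumed normalized so that the annual
    sum is preserved."""
    mean_per_hour = annual_total / 8760.0
    return mean_per_hour * monthly[month] * weekly[weekday] * hourly[hour]

# Flat (all-ones) coefficient tables, for illustration only.
monthly, weekly, hourly = np.ones(12), np.ones(7), np.ones(24)

e_nox = hourly_emission(1.0e3, month=4, weekday=0, hour=8,
                        monthly=monthly, weekly=weekly, hourly=hourly)

# Chemical split of NOx into model species: 90% NO, 10% NO2.
e_no, e_no2 = 0.9 * e_nox, 0.1 * e_nox
```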

2.4. Monitoring Network

[25] The locations of the stations of the monitoring network AREMA are plotted in Figure 1 for ozone and in Figure 2 for NOx (one measurement of NO and NO2 per station). The network includes urban and suburban stations and delivers hourly measurements.

Figure 1.

Monitoring network (red circles) and simulated daily concentrations for ozone over Lille, 11 May 1998 (μg m⁻³).

Figure 2.

Monitoring network (red circles) and simulated daily concentrations for NO over Lille, 11 May 1998 (μg m⁻³).

2.5. Validation Over Lille

[26] Polair3D has been used in order to simulate air quality over Lille for the year 1998. The boundary conditions have been provided by continental runs of Polair3D. We refer the reader to Quélo [2004] for a more detailed description of the validation.

[27] The model outputs have been compared to measured data provided by the local monitoring network, AREMA. Output concentrations have been interpolated to the locations of the monitoring stations. Statistical measures (root mean square error, bias, correlation) have been computed with hourly data for three species: O3, NO2 and NO. Polair3D has shown satisfactory agreement between simulated concentrations and observations [see Quélo, 2004]. In particular, the correlation for NO2 is above 47% at all monitoring stations except one. This indicates that Polair3D reproduces the spatiotemporal variability of NO2 concentrations well.

2.6. Setup of the Inverse Modeling Case

[28] The setup of the modeling case is the same as for the forward simulations. The full simulation system has been used, without any limitations in the physics or in the numerical schemes.

[29] The study focuses on the three weeks starting 11 May. The average wind field at the ground is plotted in Figure 3 for 1 day (11 May). For this day, the average wind velocity is about 2 m s⁻¹, which corresponds to a residence time of 3 hours for the pollutants in the simulation domain. This situation is quite representative of this month.

Figure 3.

Average surface emissions for NO in μg m⁻² s⁻¹ and average wind at the ground in m s⁻¹, 11 May 1998.

3. Some Preliminary Tests: First-Order Sensitivity Analysis

[30] In an inverse modeling experiment, the first step is to choose the parameters to be inverted. First-order sensitivity analyses identify the parameters with sufficiently large sensitivities.

3.1. Sensitivity Analysis of the Cost Function

[31] The purpose of this section is to investigate the sensitivity of a cost function, which describes the discrepancy between model outputs and observational data, with respect to the input parameters of Polair3D. Similar studies have already been performed, although not devoted to cost functions (for instance, Segers [2002] over England and Menut [2003] over Paris). This is a key step in order to assess the feasibility of inverse modeling.

3.1.1. A Few Notations

[32] In the following, we consider that the model outputs of Polair3D are of the form:

$$c = f(k),$$

c is a vector of space- and time-distributed chemical concentrations and is called the “state vector.” k is the vector of all the input parameters to the chemistry-transport model (see below for examples).

[33] The observations may be compared to c thanks to an observation operator, usually written H. In our case (ground observations), H is a projection matrix, which maps the vector c to the values of several chemical concentrations at the monitoring stations (at given spatial locations and given dates). Hc is therefore the vector of observations deduced from the state c: (Hc)_i is the i-th observation, where i is an index labeling time, space and chemical species. obs_i represents the corresponding measured concentration of O3, NO or NO2 (given by the monitoring network AREMA).

[34] The cost function is defined in order to estimate the discrepancy between the observations (obs) and the numerical results provided by the model (Hf(k)). It is usually written in the following form:

$$J(k) = \frac{1}{2} \big( Hf(k) - \mathrm{obs} \big)^T R^{-1} \big( Hf(k) - \mathrm{obs} \big),$$

where R is the observational error covariance matrix.

[35] In order to perform a sensitivity analysis, a given input parameter k is multiplied by a scalar α. Typically, α ranges from 0.5 to 1.5 in order to model perturbations of up to 50% in k. The reference case is given by α = 1. The cost function is then a function of α, the input parameters keeping their reference values otherwise:

$$J(\alpha) \equiv J(\alpha\, k).$$

Hereafter, we assume that the observations are uncorrelated and that the observation error variances are the same for all observations (O3, NO and NO2).
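A minimal sketch of how such a J(α) can be evaluated, with H a simple projection and R = I (a toy linear model stands in for Polair3D; all names and sizes are illustrative):

```python
import numpy as np

def cost(alpha, run_model, H, obs):
    """J(alpha) = 1/2 ||H f(alpha k) - obs||^2 with R = I.

    run_model: maps the scalar perturbation alpha to the state vector
               c = f(alpha * k) (a full CTM run in practice).
    H:         indices of the state entries collocated with the
               observations (projection onto the monitoring stations).
    """
    c = run_model(alpha)
    misfit = c[H] - obs
    return 0.5 * np.dot(misfit, misfit)

# Toy stand-in for the model: 100 state entries, 10 of them observed.
rng = np.random.default_rng(1)
k = rng.random(100)
run_model = lambda alpha: alpha * k  # linear toy model f(alpha k)
H = np.arange(0, 100, 10)            # projection onto 10 "stations"
obs = k[H]                           # synthetic observations (alpha = 1)

for alpha in (0.5, 1.0, 1.5):
    print(alpha, cost(alpha, run_model, H, obs))
```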

[36] The relative standard deviation for observational errors is usually estimated to be twice larger for NO2 than for O3 (see Table 1). In our case, the average concentration of O3 is twice larger than the average concentration of NO2. The absolute standard deviations for observational errors are therefore roughly the same ones (the absolute standard deviation is the product of the relative standard deviation by the mean value).

Table 1. Standard Deviation of Observational Errors Related to the Concentration Value^a

Concentration   O3     NO     NO2
10              3.1    3.4    6.8
30              3.1    3.4    6.8
50              3.6    3.6    7.2
70              4.2    4.1    8.2
90              4.9    4.7    9.4
100             5.3    5.0    10.0
150             7.6    6.9    13.8
200             10.0   9.0    18.0
250             12.5   11.2   22.4

^a Concentrations and standard deviations are given in ppb. Data from Blond [2002].

[37] The situation is different for NO. Roughly speaking, in our case, the average concentration of NO is one sixth of the average concentration of O3. The key point, however, is the representativeness error associated with NO, which is probably much higher than for O3. By taking R = I, we have implicitly set the ratio between the two resulting observation errors to 6. This test is then only a "numerical test" with default values (because of the lack of knowledge of these parameters).

[38] In practice, R = I, the identity matrix. The cost function is then a spatial and time average, over the monitoring network and over 1 day (for this section), respectively.

[39] The evolution of J with respect to the perturbation parameter α has been computed for several input parameters k (Figure 4): (1) the lateral boundary conditions provided by the continental simulation for NO, NO2, O3 and the remaining species; (2) the vertical eddy coefficient Kz; (3) the kinetic rate of the reaction O3 + NO → NO2 (in order to quantify the segregation effects on chemical kinetics); (4) the emission fluxes for the primary pollutants NO, NO2, CO, SO2 and VOC; (5) the dry deposition velocities for NO2, O3 and the remaining species; (6) the attenuation coefficient for photolysis (parameterizing the effects of clouds).

Figure 4.

Dependence of the cost function with respect to several model parameters: (a) boundary conditions, (b) Kz, (c) kinetic rate of O3 + NO → NO2, (d) emissions, (e) dry deposition velocities, and (f) cloud attenuation.

3.1.2. Results

[40] The simulations have been performed over 1 day (11 May). The results depend on the chosen day but are representative of the typical sensitivity levels (results not reported here). In the following, the sensitivity is plotted for a variation of α ranging from 0.5 to 1.5, which may not correspond to the actual uncertainties of all the parameters. Even if the magnitude of these uncertainties may be more or less known [see, e.g., Hanna et al., 2001], we have chosen to quantify the impact of similar perturbations in the inputs. Notice that the scale of each plot has been adapted to the values of the cost function.

[41] As ozone is a regional/continental pollutant, a large impact of the ozone boundary conditions is expected and indeed observed. On the other hand, the impact of the NOx boundary conditions is much weaker (Figure 4a).

[42] The dependence on Kz at the first level (30 m) and strictly above is shown in Figure 4b. As expected, the sensitivity to Kz is higher at 30 m than above.

[43] The segregation effect has been parameterized by multiplying the kinetic rate of the formation of NO2 from O3 and NO. The impact is plotted in Figure 4c. Notice that the real value is highly uncertain (the segregation effect is usually not described by comprehensive 3D chemistry-transport models).

[44] The sensitivity with respect to emissions is illustrated in Figure 4d. A key result is the strong impact of NO emissions.

[45] The dry deposition velocity of O3 is also a key parameter compared to the other deposition velocities (Figure 4e). The parameterization of the cloud attenuation of photolysis is also not well known, because of the difficult diagnosis of clouds; hence the sensitivity plotted in Figure 4f (which is quite low compared to the other ones) may be underestimated.

[46] To summarize, these preliminary results emphasize the impact of the boundary conditions for ozone and the impact of NOx emissions.

[47] Notice that the cost function J does not systematically admit a local minimum in the range [0.5,1.5] for α.

3.2. Time and Space Impact of NOx Emissions

[48] A key issue for inverse modeling is to reduce the dimension of the control space (that is to say, the number of degrees of freedom to be optimized). The emission data are given by 3D fields (two dimensions for surface emissions and one dimension for time) for every emitted species, which amounts to a very large number of parameters. According to the previous tests, the emission data with the greatest impact are the NOx emissions. Hence the control space is first reduced by discarding all other emitted species.

[49] To further reduce the control space, we now investigate the space and time impact of given NOx emissions: (1) For the spatial impact, the NOx emissions of the grid cell (15,10) (arbitrarily chosen) are perturbed by +30% at each time step. The differences in the daily averages are then computed for species O3, NO and NO2 (Figure 5). (2) For the time impact, a perturbation of +30% is applied at 0300 UT to NOx emissions in all grid cells. We then compute the time evolution (on an hourly basis) of the differences in the spatial averages.
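Both experiments amount to brute-force perturbation runs; a minimal sketch (a trivial placeholder stands in for a Polair3D run, and the array shapes are illustrative):

```python
import numpy as np

def simulate(emissions):
    # Placeholder for a full CTM run returning hourly concentration
    # fields of shape (hours, nx, ny) for one species.
    hours = emissions.shape[0]
    return np.cumsum(emissions, axis=0) / np.arange(1, hours + 1)[:, None, None]

emissions = np.ones((24, 21, 24))  # hourly NOx emissions on the 1 km grid
reference = simulate(emissions)

# (1) Spatial impact: +30% in grid cell (15, 10) at every time step.
pert = emissions.copy()
pert[:, 15, 10] *= 1.3
delta_map = (simulate(pert) - reference).mean(axis=0)  # daily-mean difference

# (2) Time impact: +30% in all grid cells at 0300 UT.
pert = emissions.copy()
pert[3] *= 1.3
delta_t = (simulate(pert) - reference).mean(axis=(1, 2))  # hourly spatial mean
```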

Figure 5.

Map of the difference in averaged concentrations due to a perturbation in NOx emission in a given cell for (a) NO, (b) NO2, and (c) O3.

[50] The impact of a perturbation in NOx emissions is highly local in space. The largest difference for the three output species (NO, NO2, O3) is located in the grid cell where the emission occurs. In the neighboring cells, the difference is reduced by a factor of 20 for NO and by a factor of 6 for NO2 or O3. A few kilometers further, the NOx emissions have no remaining impact.

[51] Figure 6 illustrates how long a perturbation in NOx emissions affects the monitored concentrations. After 3 hours, the impact may be neglected. This means that an observation may carry information about the emissions of the 3 previous hours. This may be partially related to the residence time in the domain. Notice that these results cannot be taken as general; they are specific to this case.

Figure 6.

Time evolution of the difference in averaged concentrations for O3, NO, and NO2 due to a perturbation in NOx emissions at 0300 UT.

4. Twin Experiments for Inverse Modeling of Emissions

[52] From the previous section, it appears that NOx emissions have a strong impact and are suited for inverse modeling experiments. In this section, further selection of the parameters to be inverted is based on the uncertainty associated with NOx emission parameters.

[53] Twin experiments make it possible to validate the data assimilation system and to draw preliminary conclusions about the consequences of observation errors and model errors.

4.1. Some Notations

4.1.1. Control Variables

[54] An accurate emission inventory has been built over Lille, in the framework of the French program PREDIT (program of research, experimentation and innovation in land transport), in order to evaluate the health impact of emissions (http://www.certu.fr/doc/env/echange/predit/predit.htm). The spatial distribution of emissions is therefore assumed to be fairly accurate. The situation is quite different for the time distribution, which is given by monthly, daily and hourly coefficients. These coefficients are derived from average situations.

[55] We have thus chosen, as control parameters, the time distribution of the most sensitive emitted species, namely NOx. The emissions are parameterized by

$$E_{\mathrm{NO}_x}(x, t) = \alpha(t)\, \bar{E}_{\mathrm{NO}_x}(x, t),$$

where $\bar{E}_{\mathrm{NO}_x}(x, t)$ denotes the emissions given by the emission inventory and α(t) the hourly coefficients applied to the emissions over the whole domain. Inverse modeling of α(t) is therefore performed for 1 day (24 coefficients). In the reference case, α(t) = 1. The focus has been put on weekdays (whose emissions differ from those of weekends).

[56] The choice of these control parameters is governed by (at least) two criteria: (1) The impact on the cost function has to be large enough (see section 3). (2) The parameters should remain valid for other days. Notice that the time distributions for the weekdays are similar (Figure 7). The purpose is then to obtain an improved time distribution of the NOx emissions for a representative weekday.

Figure 7.

Time distribution for NOx emissions over Lille from 10 May (Sunday) to 17 May (Sunday).

4.1.2. Choice of the Cost Function

[57] Inverse modeling requires the specification of error statistics for three kinds of data: model outputs, measurements and control parameters (through background terms, for instance). It is usually assumed that the model is perfect, and we follow the same assumption.

[58] The estimation of the emission parameters may be an ill-posed problem, which may require the use of penalty terms in the cost function (for instance, through a Tikhonov regularization). In our case, the number of observations is large enough. In order to avoid a sensitivity to doubtful background terms, the cost function only describes model-to-data discrepancies. This approach is specific to our case.

[59] Moreover, the observational error is assumed to be constant over all the monitoring stations.

[60] On the basis of these assumptions, the cost function is

$$J(\alpha) = \frac{1}{2} \sum_i \Big( \big[ Hf\big(E_{\mathrm{NO}_x}(\alpha)\big) \big]_i - \mathrm{obs}_i \Big)^2,$$

where i labels the time, space and chemical "position" of the observations. The dependence of the model output is written only with respect to the NOx emissions. The model is taken as a strong constraint (no model error), a constant weight is given to all observations and no background term is included.

[61] The objective is then to minimize J(α) with respect to α.

4.1.3. Numerics and CPU Performance

[62] This minimization problem has been solved with the iterative BFGS algorithm [Byrd et al., 1995], which belongs to the family of gradient-based algorithms. It requires the gradient ∇αJ to be available.

[63] As previously mentioned, Polair3D has been built so as to have an adjoint mode easily available through automatic differentiation [Mallet and Sportisse, 2004]. Polair3D is written in Fortran 77 and may therefore be automatically differentiated by O∂yssée (developed at INRIA; Faure and Papegay [1998]). To reduce the computational costs, only the differentiated LU factorization and solver for the chemistry are replaced by hand-written code. The main constraints are to make the appropriate calls to the differentiated code and to check the validity of the adjoint model against finite differences (the so-called Taylor tests).

[64] The ratio of the CPU time needed for the adjoint computation (21 minutes per simulated day on a 3 GHz Pentium IV processor) to the CPU time needed for the forward computation (about 3 minutes) is approximately 7. This ratio is not optimal, but this is not an issue in this case.

[65] Each iteration of the procedure requires one or more evaluations of the cost function and of its gradient. For instance, 50 iterations of BFGS require 20 hours of CPU time.

[66] No stopping criterion was used: As the application was not in an operational context, the iterative algorithm was simply run to convergence. Notice that most of the decrease of the cost function occurs during the first steps of the process, so that the number of iterations can be limited without lowering the accuracy of the results (see below).
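A minimal sketch of such a minimization with SciPy's L-BFGS-B routine (which wraps the code of Byrd et al. [1995]); a linear toy model replaces Polair3D, and its analytic gradient stands in for the adjoint:

```python
import numpy as np
from scipy.optimize import minimize

# Toy twin experiment: recover 24 hourly coefficients alpha from
# synthetic observations generated with alpha_true = 1.
rng = np.random.default_rng(2)
G = rng.random((240, 24))  # linear stand-in for alpha -> H f(E(alpha))
alpha_true = np.ones(24)
obs = G @ alpha_true

def cost_and_grad(alpha):
    misfit = G @ alpha - obs
    J = 0.5 * np.dot(misfit, misfit)
    grad = G.T @ misfit  # supplied by the adjoint model in practice
    return J, grad

alpha_b = 0.7 * np.ones(24)  # first guess underestimated by 30%
result = minimize(cost_and_grad, alpha_b, jac=True, method="L-BFGS-B")
print(result.nit, np.abs(result.x - alpha_true).max())
```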

4.2. Twin Experiments Without Perturbations

[67] The first purpose is to validate the numerical algorithms (gradient computation, minimization) on the basis of twin experiments without perturbations. These consist of using synthetic model outputs as observational data. The synthetic observations are generated with "true" values of the control parameters, α^t (t stands for true). These values are then retrieved by the minimization algorithm starting from a first guess α^b (b stands for background).

[68] In practice, α_i^t = 1, and we have chosen a 30% underestimation for the first guess: α_i^b = 0.7.

[69] The evolution of the cost function with respect to the iterations of BFGS is plotted in Figure 8. The true parameters are recovered after optimization up to round-off errors (see, for instance, Figure 9 for the parameter α8).

Figure 8.

Evolution of the cost function with respect to the iterations of BFGS.

Figure 9.

Error for α8 as a function of the iterations of BFGS.

4.3. Twin Experiments With Perturbations

[70] The former experiment was performed without any perturbations and checked the validity of the numerical tools. Before applying the approach to a real case, it is however necessary to evaluate the ability of the system to cope with perturbations. The underlying issue is to estimate the quality of the results.

[71] Two experiments have been performed: (1) The first experiment is related to observational errors: The numerical observations are therefore perturbed. (2) The second experiment is related to model errors: Some input parameters (see below for the details) are perturbed in order to generate the numerical observations. It is an easy way to generate model errors.

4.3.1. Impact of Observational Errors

[72] The observational error has in practice two components: The first is related to the errors made in the measurement process; the second is the so-called representativeness error, related to the mismatch between the observation resolution (a point measurement, for instance) and the model resolution (a 1 km × 1 km cell, for instance). The representativeness error is probably the larger component for chemical data, because of the heterogeneity of chemical concentrations in the vicinity of sources at small scales.

[73] A perturbation is applied to the model outputs in order to parameterize the observational error. This error is assumed (1) to be uncorrelated between chemical species and between different spatial locations; (2) to be correlated in time in order to avoid unrealistic fluctuations of observational errors; (3) to be larger for NO than for NO2 and O3 (with a ratio of 2) because of the local nature of NO. Notice that this is a value chosen arbitrarily for these twin experiments.

[74] Thus the observational error ε^n at time n is generated with a first-order autoregressive recursion,

$$\varepsilon^n = a\, \varepsilon^{n-1} + \beta^n,$$

with a ∈ [0, 1) a time-correlation coefficient and β^n a Gaussian perturbation whose standard deviation is related to the concentration value (see Table 1).
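A minimal sketch of this error generator, assuming the autoregressive form above (the coefficient a and all values are illustrative):

```python
import numpy as np

def observation_error(sigma, rng, a=0.5):
    """AR(1) recursion eps^n = a * eps^(n-1) + beta^n, where beta^n is
    a Gaussian perturbation whose standard deviation sigma^n depends on
    the concentration value (Table 1)."""
    eps = np.zeros_like(sigma)
    for n in range(len(sigma)):
        beta = rng.normal(0.0, sigma[n])
        eps[n] = beta if n == 0 else a * eps[n - 1] + beta
    return eps

# Hourly errors for an ozone-like series around 10 ppb (Table 1: 3.1 ppb).
rng = np.random.default_rng(3)
errors = observation_error(np.full(24, 3.1), rng)
```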

[75] A typical example of such an observational error is plotted in Figure 10. Its impact on the optimized set of parameters is low (see Figure 11).

Figure 10.

Observational errors generated for a given monitoring station in Lille for 11 May.

Figure 11.

NO emissions: reference values and values obtained with a random observational error.

4.3.2. Impact of Model Errors

[76] Three kinds of model errors can be distinguished: (1) The first is the errors related to forcing fields (meteorological data, dry deposition parameterizations, boundary conditions, etc.). This corresponds in practice to all input parameters that may be uncertain but that are not optimized. (2) The second is the errors related to the model itself: For instance, the segregation effects are neglected and the kinetic rates are used as if the fields were well mixed. (3) The third is the errors related to numerical algorithms: For instance, the numerical scheme used for advection may induce large numerical diffusion.

[77] These model errors lead to discrepancies between the model outputs and the observational data. A minimization of the cost function with respect to the emission parameters may then be only a fit to the observational data, while the true reason for a large value of the cost function may be a large model error. For instance, a reduction of the NOx emissions may have an effect similar to an increase in the dry deposition velocities of NOx.

[78] It is therefore important to assess the impact of model errors on the quality of the inverse modeling. In practice, the model errors are parameterized by applying a Gaussian perturbation to the input parameters listed in section 3, except the NOx emissions. The uncertainties are supposed to follow a Gaussian law with a variance of 50%, which is consistent with the values given by Hanna et al. [2001]. Notice that the model error is supposed to be unbiased.
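A minimal sketch of this perturbation of the non-inverted inputs (field names and shapes are illustrative):

```python
import numpy as np

def perturb_inputs(inputs, rng, rel_std=0.5):
    """Apply an unbiased Gaussian perturbation (50% relative spread,
    after Hanna et al. [2001]) to every input field except the NOx
    emissions, which are the inverted parameters."""
    perturbed = {}
    for name, field in inputs.items():
        if name == "nox_emissions":
            perturbed[name] = field
        else:
            factor = max(rng.normal(1.0, rel_std), 0.0)  # keep fields non-negative
            perturbed[name] = factor * field
    return perturbed

rng = np.random.default_rng(4)
inputs = {"kz": np.ones(9), "o3_boundary": np.ones(9),
          "deposition_o3": 1.0, "nox_emissions": np.ones(24)}
twin_inputs = perturb_inputs(inputs, rng)
```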

[79] The minimization reduces the cost function by 85%. The new emission profile is plotted in Figure 12. It is noteworthy that the period 0600–0700 UT is highly sensitive to model errors.

Figure 12.

Optimized emissions of NO: reference values and values obtained with a model error.

4.4. Brief Summary of the Twin Experiments

[80] The preliminary case with synthetic observations has led to the following conclusions: (1) The numerical algorithms (including the adjoint model) are validated. (2) The quality of the optimized parameters is not strongly degraded by observational errors and model errors. The next section describes the application to the real case over Lille and investigates the quality of the optimized parameters through second-order sensitivities.

5. Application to a Real Case

[81] The observations are now provided by the monitoring network AREMA. The forward model is effective for modeling O3 and NO2 but is not able to give an accurate forecast of NO. We have therefore decided to perform inverse modeling with and without the observations of NO. In a first experiment, 4 observations of O3 and 10 observations of NO2 are available per hour. The uncertainties in the observations are supposed to be Gaussian. Their standard deviations are described in Table 1.

[82] Our approach is summarized as follows: (1) The time distribution α(t) is optimized during a learning period (typically one week). (2) The improvement of the emission inventory is checked by using the optimized set of parameters during a verification period (typically a few weeks after the learning period).

5.1. Inverse Modeling From 11 May to 15 May

[83] The learning period is the week from 11 May to 15 May. Two kinds of experiments have been performed in order to estimate the daily variability of the optimized distribution: (1) In a first approach, each day is independently used as a learning period (5 learning periods), which leads to 5 sets of optimized parameters. (2) In a second approach, the week as a whole (actually 5 days, the weekend being excluded) is used as the learning period, which leads to a unique set of optimized parameters.

[84] In practice, a simulation period of 6 hours has been added before the beginning of each period in order to account for the model spin-up and to lower the influence of the initial conditions.

[85] The optimized parameters α are plotted in Figure 13 for the different learning periods defined above. For the first experiment (learning periods of 1 day), the convergence of the minimization algorithm (with a strict stopping requirement) is obtained after 55 to 128 iterations. The reduction of the cost function ranges from 25% to 66% (see Table 2). The global learning period (second case) requires 42 iterations and leads to a reduction of the cost function of 20%.

Figure 13.

Daily distribution of the optimized parameters α for the learning periods from 11 May to 15 May.

Table 2. Values of the Cost Function Before and After Optimization and Convergence Performance of the BFGS Algorithm for the Different Learning Periods

                                   11 May   12 May   13 May   14 May   15 May   11–15 May
Initial cost function, ×10⁻⁵       5.38     9.50     7.65     3.85     2.96     16.0
Optimized cost function, ×10⁻⁵     2.55     3.03     4.40     2.91     2.06     12.8
Iterations of BFGS                 101      128      59       64       55       42

[86] The runs with 1-day learning periods illustrate the rather high variability of the optimized parameters. However, some features are shared: (1) The coefficients for hours 7 and 8 are greater than 1. (2) The coefficients for hours 17, 18 and 19 are lower than 1.

[87] These two periods are highly sensitive since they correspond to the emission peaks, but also to key transitions in other processes (photolysis and vertical mixing).

[88] The date of 12 May is not well modeled: The initial cost function is large. We therefore suspect that other parameters have wrong values and that the model error (as defined above) is large.

[89] There is also an overestimation of α in the night from Sunday to Monday and in the night from Friday to Saturday. One possible realistic reason could be the overestimation of traffic jams related to weekends.

[90] The plots labeled "11–15 May (Mean)" correspond to the average of the 5 sets of parameters optimized with 1-day learning periods. Notice that this distribution is similar to the distribution obtained with the 5-day learning period, labeled "11–15 May (Simulated)."

[91] The resulting time distribution of the emissions is plotted in Figure 14. One key remark is the lack of symmetry between the morning and the end of the afternoon. There are several possible reasons for this: It could be due to model errors (for instance, in the growth of the mixing layer in the morning); another explanation could be the existence of different emission regimes (cold-start emissions in the morning) that are perhaps not well represented by the emission inventory.

Figure 14.

Daily distribution of NO emission for the period 11–15 May. Reference and optimized parameters.

5.2. Verification

[92] The improvement in the emission inventory has been checked by applying the optimized time distribution to weeks other than the learning week. The root mean square (RMS) errors and the correlations computed for the learning week and the two following weeks are given in Table 3. The forecast skill is improved, with the exception of the RMS error for NO during the week of 25–30 May.

Table 3. RMS Errors and Correlations Before and After Optimization of the Time Distribution α During the Week of 11–15 May^a

        11–15 May                  18–23 May                  25–30 May
        Reference    Optimized     Reference    Optimized     Reference    Optimized
O3      32.7 (0.83)  29.6 (0.87)   19.9 (0.82)  18.9 (0.85)   20.8 (0.61)  18.1 (0.69)
NO2     29.4 (0.36)  25.5 (0.52)   19.9 (0.49)  18.4 (0.59)   19.4 (0.24)  17.4 (0.29)
NO      27.4 (0.61)  26.6 (0.66)   19.7 (0.49)  17.7 (0.66)   20.3 (0.38)  21.5 (0.40)

^a Correlations are given in parentheses. "Reference," before optimization; "optimized," after optimization.
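For reference, a minimal sketch of the two scores reported in Table 3 (with illustrative data):

```python
import numpy as np

def rms_and_correlation(simulated, observed):
    """Forecast-skill scores: root mean square error and Pearson
    correlation between hourly series."""
    rms = np.sqrt(np.mean((simulated - observed) ** 2))
    corr = np.corrcoef(simulated, observed)[0, 1]
    return rms, corr

rng = np.random.default_rng(5)
obs = 100.0 * rng.random(240)                # hourly observations
sim = obs + rng.normal(0.0, 20.0, size=240)  # hourly model outputs
print(rms_and_correlation(sim, obs))
```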

5.3. Use of Other Learning Periods

[93] The variability of the optimized parameters with respect to the learning period is a way to investigate the robustness of the approach. As one can assume that there is no compelling reason for the time distribution to change drastically from one week to another, this is more or less equivalent to investigating the sensitivity to the meteorological conditions.

[94] The optimized parameters for three different learning periods (the week of 11–15 May and the two following weeks) are plotted in Figure 15. Similar behaviors are obtained for the three learning periods: high values in the morning and low values at the end of the afternoon. The low sensitivity of the results to the learning period is an indication that the approach is robust.

Figure 15.

Optimized time distributions for three different learning periods.

5.4. Second-Order Sensitivity

[95] The results of the inverse modeling procedure depend on many parameters that may be uncertain: the meteorological conditions, the parameterizations, the other emissions, etc. They also depend on some parameters referred to as "assimilation parameters": for instance, the first guess, the covariance matrices (if any, which is not the case in this study) or the observation operator.

[96] Mathematically speaking, the cost function J can be written in the more general form J(α, k_p, k_a), where k_p stands for the physical parameters assumed to be known (but uncertain) and k_a for the assimilation parameters.

[97] The optimized values α* are given by

$$\alpha^\star(k_p, k_a) = \arg\min_{\alpha} J(\alpha, k_p, k_a),$$

which defines a function α*(k_p, k_a). The second-order sensitivity deals with the sensitivity of these optimized parameters with respect to k_p (robustness with respect to the physical parameters) and k_a (impact of the monitoring network, typically).

[98] A comprehensive way to assess this sensitivity is to compute the partial derivatives of α* with respect to k_p and k_a. This may be done with the Hessian matrix of J [Le Dimet et al., 2002].

[99] It is of course a huge task to compute this Hessian matrix when J is related to a comprehensive 3D model such as Polair3D. Even though this second-order mode is available [Sportisse and Quélo, 2003], a simpler approach has been used: the optimized results α* are recomputed with perturbed values of some assimilation parameters and physical parameters. More precisely, the study is restricted to the impact of the first guess, the impact of the NO observations and the impact of Kz (which describes the vertical mixing).
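A minimal sketch of this brute-force second-order sensitivity, with a toy linear model standing in for Polair3D and a scalar k_p standing in for a physical parameter such as Kz (all names and values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def optimize_alpha(kp, seed=6):
    """Return the optimized alpha* for a given value of a physical
    parameter kp that scales the model response."""
    rng = np.random.default_rng(seed)
    G = rng.random((240, 24))
    obs = G @ np.ones(24)  # synthetic observations (alpha_true = 1, kp = 1)

    def cost_and_grad(alpha):
        misfit = kp * (G @ alpha) - obs
        return 0.5 * (misfit @ misfit), kp * (G.T @ misfit)

    res = minimize(cost_and_grad, 0.7 * np.ones(24), jac=True,
                   method="L-BFGS-B")
    return res.x

# Finite-difference estimate of the sensitivity of alpha* to kp.
alpha_ref = optimize_alpha(1.0)
alpha_pert = optimize_alpha(1.1)  # +10% perturbation of kp
print(np.abs(alpha_pert - alpha_ref).max())
```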

5.4.1. Impact of the First Guess

[100] A first experiment has been performed in order to check the sensitivity with respect to some assimilation parameters, such as the first guess for α. As said before, there is no penalty term in the cost function (usually referred to as a background term) because the problem is well posed and does not require a regularization technique. What we call here a "first guess" is the set of initial values of α used to start the minimization algorithm.

[101] Two choices of the first guess have been compared: either 1 (that is to say, we start from the time distribution given by the emission inventory) or 0 (no prior information on the time distribution of the emissions).

[102] A key result is that the same distribution is obtained after optimization (see Figure 16). The number of iterations required for convergence is however larger in the second case, as the starting point is farther from the optimal values (27 iterations against 22 in the first case). This indicates that the cost function has a strongly convex behavior in a large domain around the optimal set of parameters.

Figure 16.

Optimized time distribution for two sets of first guesses for α (1 or 0).

[103] This result indicates that the optimized time distribution may be recovered without any use of the a priori information given by the emission inventory.

5.4.2. Sensitivity With Respect to the Observed Species

[104] Up to now, the observations of NO have not been taken into account; the impact of these observations is investigated in this section.

[105] The observational errors for all species are supposed to be the same (even if this is probably not the case: NO has, for instance, a larger representativeness error because of its local nature). The results are plotted in Figure 17 and show a qualitative behavior similar to the previous one, even if the overestimation in the morning is increased. This is consistent with the strong underestimation of the NO concentrations in the morning.

Figure 17.

Optimized distribution by adding observations of NO. The reference curve stands for the optimized α without NO observations.

5.4.3. Sensitivity With Respect to Vertical Mixing

[106] The sensitivity at the beginning of the morning has already been mentioned. This period corresponds to the transition from the nocturnal stable boundary layer to a mixed layer. It is well recognized that the parameterization of Kz has a key impact on the model outputs. Another explanation could be the impact of sunrise through photolysis, but our tests (not reported here) do not indicate a strong sensitivity.

[107] In order to assess the sensitivity of the results with respect to Kz, we have artificially shifted the time distribution of Kz forward by 1 hour. The growth of the mixed layer then starts 1 hour later than in the reference case.
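A sketch of this shift, assuming Kz is handled as an hourly series (the wrap-around of the last hour into the first is harmless for a nocturnal profile):

```python
import numpy as np

# Illustrative hourly Kz series; np.roll delays every value by 1 hour,
# so the morning growth of the mixed layer starts 1 hour later.
kz_reference = np.ones(24)
kz_shifted = np.roll(kz_reference, 1)
```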

[108] The optimized parameters are plotted in Figure 18 for 1 day (11 May). As expected, the main differences occur during the transition periods in the morning and in the afternoon. The time distribution is however not strongly modified, which confirms the robustness of the result.

Figure 18.

Optimized distribution for 11 May: with the reference Kz and with a shifted Kz.

6. Conclusions and Future Work

[109] A variational approach has been used in order to perform inverse modeling of emissions at regional scale, over Lille in northern France. After a sensitivity analysis, we chose to optimize the time distribution of the NOx emissions on the basis of observations of O3, NO2 and NO.

[110] Twin experiments have proven the validity of the numerical models (especially the adjoint model of our chemistry-transport model, Polair3D, obtained by automatic differentiation). The impact of observational errors and of model errors has also been investigated through numerical tests.

[111] The application to one week of May 1998 has led to an optimized set of parameters. A verification test (by applying the optimized distribution to the next two weeks) has confirmed the improvement of the forecast skills.

[112] A brute force second-order sensitivity analysis has also been performed in order to check the robustness of the optimized parameters with respect to other uncertain parameters (first guess, meteorological conditions, Kz). The results indicate that the optimized time distribution is robust.

[113] Future work will be devoted to the application of such techniques at continental scale (with a focus on the spatial distribution rather than the time distribution). Another key point will be to take model errors into account in the inverse modeling process. Many approaches are under investigation, ranging from the combined inverse modeling of parameters other than emissions to weak formulations of the variational problem and Monte Carlo simulations of the inverse modeling procedure.

[114] The selection of the control parameters (the hourly coefficients for NOx) is a potential limitation of the data assimilation procedure, and future work should focus on an augmented set of control parameters.

Acknowledgments

[115] We thank the PREDIT program and the CETE institute for providing us with the emission inventory for road traffic used in this paper. We also thank Rémy Lagache for fruitful discussions about emission inventories.
