The purpose of this article is to perform the inverse modeling of emissions at regional scale for photochemical applications. The case study is the region of Lille in northern France for simulations in May 1998. The chemistry-transport model, Polair3D, has been validated with 1 year of model-to-observation comparisons over Lille. Polair3D has an adjoint mode, which enables inverse modeling with a variational approach. A sensitivity analysis has been performed so as to select the emission parameters to be modified in order to improve ozone forecasts. It has been shown that inverse modeling of the time distribution of nitrogen oxide emissions leads to satisfactory improvements even after the learning period. A key issue is the robustness of the inverted emissions with respect to uncertain parameters. A brute force second-order sensitivity analysis of the optimized emissions has been performed with respect to other parameters and has proven that the optimized time distribution of NOx emissions is robust.
 Emission inventories used in air pollution modeling are subject to a wide range of uncertainties [e.g., Hanna et al., 2001]: (1) The spatial distribution of emissions is not always well known and may be highly heterogeneous; (2) The time distribution of emissions is strongly related to variable parameters (such as traffic conditions and biogenic activity); (3) The chemical distribution is also uncertain: The relations between the chemical species given by the emission inventories, the “real chemical species” and the “model species” (the species described in chemistry-transport models) are often questionable.
 A growing field of interest is therefore the inverse modeling of emissions (more precisely, of parameters related to the emissions) on the basis of a combined use of model outputs and observational data (provided by monitoring networks). These topics belong to the larger domain of data assimilation.
 Moreover, there are further reasons to support these approaches. One may be interested in estimating the emissions of a given sector or of a given country in order to check the fulfillment of a regulatory agreement. This is typically the case for greenhouse gases at global scale or for the pollutants regulated by the Convention on Long-Range Transboundary Air Pollution (LRTAP) over Europe.
 An increasing number of works has been devoted to these topics in recent years. At global scale, passive tracers or weakly reactive species, such as CO or CH4, have already been studied. One can refer, for instance, to the work of Kaminski, Bergamaschi et al., and Bousquet et al. In the case of linear tracers, such as radionuclides, many methods have already been proposed, following the Chernobyl accident and the ETEX campaign [Hourdin and Issartel, 2000].
 The situation is quite different for reactive chemistry-transport models, as the dependence of concentrations on emissions is nonlinear. Moreover, the models are characterized by high-dimensional systems (whose dimension is given by the number of chemical species in the chemical mechanism). One can refer to the work of Elbern et al. for academic studies and to Elbern and Schmidt, Mendoza-Dominguez and Russell, and van Loon et al. for examples at continental scales. A few works have been devoted to inverse modeling at regional scales: We can refer, for instance, to Chang et al. for the inverse modeling of biogenic isoprene emissions over Atlanta with the use of a Kalman filter. Another work is that of Mendoza-Dominguez and Russell, with a linearized method applied to Atlanta as well.
 The purpose of this paper is to perform inverse modeling of emissions for air pollution applications at regional scale. A key issue, which is not often investigated, is the robustness of the inverted parameters: How to be sure that the results are not only “fits” of the model outputs to the data? What is the quality of the new emission parameters? How sensitive are these parameters with respect to other uncertain parameters (supposed to be known)?
 We have investigated these issues with an application to northern France, over the region of Lille. A comprehensive 3D chemistry-transport model, Polair3D [Boutahar et al., 2004], has been validated by comparisons to measured data over 1 year (1998, in the study by Quélo). A sensitivity analysis has also been performed in order to choose the relevant parameters for inverse modeling. The choice was made to perform inverse modeling of the time distribution of NOx emissions. The NOx emissions are those that have, at first order, the greatest impact on ozone in this case. Their time distribution is not well known (contrary to the spatial distribution, an exhaustive emission inventory having been built in the framework of the French research program PREDIT, a program for research, experimentation and innovation in land transport).
 Inverse modeling has been performed through variational methods, Polair3D having an adjoint mode. The learning database is composed of data from the week of 11–15 May. The use of the modified emission inventory improves the simulated results of the two following weeks. Moreover the robustness of the inversion has been investigated.
 This paper is organized as follows. The second section is devoted to the general presentation of the case study. In the third section, some preliminary tests are made in order to assess the sensitivity of model outputs with respect to emission data. The cost function, that measures the discrepancy between observations and model outputs, is also studied. In the fourth section, the inverse modeling approach is presented and twin experiments (on the basis of numerical data) are performed in order to validate the numerical models. The impact of uncertainties (for instance, model errors) is investigated. In the fifth section, the real case is studied with the inversion of a time distribution for NOx emissions. A posteriori verifications and robustness tests are also performed in order to assess the quality of the optimized parameters.
2. Case Study
 Lille is part of a dense urban area in northern France that includes many intermediate cities. Therefore the main part of pollution comes from local anthropogenic activities. Pollution may also result from nonlocal sources since Lille is sometimes located in the plume of highly polluted areas (Paris, Ruhr, London). In order to take into account this plume, a simulation at European scale is first performed and the simulation domain over Lille is nested in it.
2.1. Brief Overview of the Chemistry-Transport Model: Polair3D
 Polair3D [Boutahar et al., 2004] is a comprehensive 3D Eulerian chemistry-transport model developed at CEREA (laboratory at École Nationale des Ponts et Chaussées and the Research and Development Division of Électricité de France). It is one part of the Polyphemus modeling system [Mallet et al., 2005] (also developed at CEREA and available under the GNU General Public License at http://www.enpc.fr/cerea/polyphemus/), notably devoted to impact studies, forecasts and data assimilation for the atmospheric dispersion of chemical species and radionuclides. Within this system, Polair3D is mainly responsible for the time integration of the chemistry-transport equation. It includes several chemical mechanisms, including RACM [Stockwell et al., 1997], which was chosen for this study. The other components of Polyphemus provide the input fields to Polair3D (meteorological fields, deposition velocities, etc.), computed using relevant physical parameterizations.
 One of these components is the library AtmoData [Mallet and Sportisse, 2005], which gathers the physical parameterizations. Thanks to this library and the programs available with it (themselves part of Polyphemus), the following fields were computed for this study: (1) the meteorological fields, extracted from ECMWF data; (2) the vertical diffusion coefficients, computed with Louis' parameterization [Louis, 1979]; (3) the cloud attenuation, computed in the same way as given by Chang et al. and Madronich; (4) the deposition velocities, with Wesely's parameterization [Wesely, 1989]; (5) the anthropogenic emissions, from the EMEP inventory at European scale, following Middleton et al.; at regional scale, the emissions are generated as explained in section 2.3; (6) the biogenic emissions, computed on the basis of Simpson et al.; (7) the boundary conditions (for the European simulation), extracted from a MOZART 2 [Horowitz et al., 2003] simulation over a typical year.
 All these fields, which appear in the chemistry-transport equation, are then available to Polair3D, which in turn integrates the equation in time with efficient numerical schemes. It uses a first-order splitting method in which the advection is integrated first, then the diffusion and finally the chemistry. The advection scheme is a third-order direct space-time scheme with a Koren flux limiter [Verwer et al., 1998]. The diffusion and the chemistry are both integrated with a second-order Rosenbrock method, which is suitable for stiff problems and whose implicitness enables the use of large time steps (in this study, 600 s at both European and regional scales). As for the chemistry, to improve computational efficiency, the sparsity of the Jacobian matrix involved in the Rosenbrock method is taken into account [Sandu et al., 1996].
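The splitting sequence described above can be sketched on a toy scalar problem. The three operators below are illustrative stand-ins (simple decay laws), not Polair3D's actual advection, diffusion and chemistry schemes:

```python
import numpy as np

def lie_splitting_step(c, dt, advect, diffuse, react):
    """One first-order (Lie) splitting step: advection first,
    then diffusion, then chemistry."""
    c = advect(c, dt)
    c = diffuse(c, dt)
    return react(c, dt)

# Illustrative stand-ins for the three processes (simple decay laws):
advect = lambda c, dt: c * np.exp(-0.1 * dt)    # loss by outflow
diffuse = lambda c, dt: c * np.exp(-0.05 * dt)  # loss by mixing
react = lambda c, dt: c / (1.0 + 0.2 * c * dt)  # implicit Euler step of dc/dt = -0.2 c**2

c = 10.0   # initial concentration (arbitrary units)
dt = 0.6   # one 600 s step, expressed in ks for readability
for _ in range(6):
    c = lie_splitting_step(c, dt, advect, diffuse, react)
```

The splitting error of this sequence is first order in dt, consistent with the first-order splitting method used by the model.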
 The simulations performed with Polair3D have proven to be reliable. At European scale, the model has been validated notably over the year 2001. Details about this validation are given by Mallet and Sportisse. For instance, the comparison of ozone peaks with measurements at 242 stations from May to August 2001 (27,000 measurements) gives a root mean square error of 22.7 μg m−3 and a correlation of 72.7%. At regional scale, the results are also satisfactory, as shown in the following sections.
 One should note that the modeling system Polyphemus provides data sets and parameterizations consistent with the current knowledge in physics. Numerical adjustments that are not supported by physics were discarded even if they could lead to better results. This is the only means to ensure that the inverse modeling of emissions makes sense, so that the retrieved emissions have a chance to be closer to the real emissions.
 The last point to be emphasized is the availability of a tangent linear mode and an adjoint mode of Polair3D with respect to virtually any input parameter, including the emissions. This feature is provided by automatic differentiation [Mallet and Sportisse, 2004].
2.2. Simulation Domain
 The domain covers the region of greater Lille, over a 21 km × 24 km area centered on the city of Lille. It is discretized with a 1 km × 1 km horizontal grid and 9 vertical levels ranging from the ground to 3000 m in order to include the atmospheric boundary layer. The height of the first layer is 30 m and the thickness of the other layers ranges from 120 m to 510 m.
2.3. Emissions Over Lille
 The anthropogenic emissions come from several databases: (1) The EMEP inventory for VOCs, NOx, SO2 and CO is the result of a “top-down” approach and is available in annual totals over a 50 km × 50 km grid for each activity sector (Selected Nomenclature for Air Pollution (SNAP)). (2) Traffic emissions are delivered by the Centre d'Étude Technique de l'Équipement (CETE institute) over the Lille regional administrative area on the basis of the road locations, the distribution of vehicle categories and the emission factors from the standard COPERT III European methodology. (3) The total annual emissions of major industrial sources are collected by the Direction Régionale de l'Industrie, de la Recherche et de l'Environnement (DRIRE institute): Within the Lille area, 20 point sources are taken into account including production processes and waste treatment.
 Emissions from the EMEP inventory are replaced with the detailed data provided by the last two databases whenever possible. A large part of the NOx emissions comes from traffic (about 60%) and is therefore well described by our inventory. This is highly valuable given the sensitivity of photochemical concentrations to the NOx emissions.
 The EMEP annual totals are first mapped to the simulation grid. Because of the coarse resolution of the EMEP inventory, the spatial distribution is mainly determined by the land use coverage in order to associate the major part of the emissions with urban areas. Thus, within an EMEP cell, the emissions are distributed so as to give an appropriate weight to the urban areas, the forests and the other areas (respective weights of 12, 1.6, and 1). The two other databases are already accurately distributed and their horizontal grid matches the simulation domain.
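As a sketch, distributing one coarse EMEP cell total among fine cells with these land-use weights might look as follows. Only the weights (urban 12, forest 1.6, other 1) come from the text; the land-use map itself is a synthetic example:

```python
import numpy as np

# Land-use weights quoted in the text for disaggregating EMEP totals.
WEIGHTS = {"urban": 12.0, "forest": 1.6, "other": 1.0}

def disaggregate(total_emission, landuse):
    """Split a cell total among subcells proportionally to land-use weights."""
    w = np.array([WEIGHTS[lu] for lu in landuse])
    return total_emission * w / w.sum()

# Synthetic land-use map for five 1 km subcells of one EMEP cell:
landuse = ["urban", "urban", "forest", "other", "other"]
parts = disaggregate(100.0, landuse)  # urban subcells get the largest share
```

The weights only fix the relative shares within the EMEP cell; the parts always sum back to the cell total.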
 The emissions are then vertically distributed to take into account the stack heights and the elevation due to the high temperature at release time.
 Annual EMEP emissions and point sources are distributed monthly, weekly and hourly according to coefficients provided by GENEMIS (Eurotrac-2 subproject, http://www.gsf.de/eurotrac/) for each emission sector. The time distribution of road traffic emissions is computed on the basis of daily and hourly coefficients derived from traffic activity measurements.
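The multiplicative time distribution can be sketched as follows; the coefficient values are illustrative, not the actual GENEMIS ones:

```python
# Convert an annual emission total into an hourly rate with multiplicative
# monthly, daily and hourly coefficients, as done with the GENEMIS profiles.

HOURS_PER_YEAR = 8760.0

def hourly_emission(annual_total, monthly_coef, daily_coef, hourly_coef):
    """Hourly emission rate; each coefficient is normalized to average
    to 1 over its own period (year, week, day)."""
    return annual_total / HOURS_PER_YEAR * monthly_coef * daily_coef * hourly_coef

# Example: a weekday morning peak in May for a traffic-like sector.
rate = hourly_emission(annual_total=1000.0,  # e.g., Mg per year
                       monthly_coef=1.05,    # May slightly above average
                       daily_coef=1.1,       # weekday above average
                       hourly_coef=1.8)      # morning rush hour
```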
 It is assumed that the NOx emissions are composed of NO (90%) and NO2 (10%). The NMVOC emissions are speciated following Middleton et al.
2.4. Monitoring Network
 The locations of the stations of the monitoring network AREMA are plotted in Figure 1 for ozone and in Figure 2 for NOx (one measurement of NO and NO2 per station). The network includes urban and suburban stations and delivers hourly measurements.
2.5. Validation Over Lille
 Polair3D has been used to simulate air quality over Lille for the year 1998. The boundary conditions have been provided by continental runs of Polair3D. We refer the reader to Quélo for a more detailed description of the validation.
 The model outputs have been compared to measured data provided by the local monitoring network, AREMA. Output concentrations have been interpolated to the locations of the monitoring stations. Statistical measures (root mean square, bias, correlation) have been computed with hourly data for three species: O3, NO2 and NO. Polair3D has shown a satisfactory agreement between simulated concentrations and observations [see Quélo, 2004]. In particular, the correlation for NO2 is above 47% at all monitoring stations except one. This indicates that Polair3D reproduces well the spatiotemporal variability of NO2 concentrations.
2.6. Setup of the Inverse Modeling Case
 The setup of the modeling case is the same as for the forward simulations. The full simulation system has been used, without any limitations in the physics or in the numerical schemes.
 The study focuses on the three weeks after 11 May. The average wind field at the ground is plotted in Figure 3 for 1 day (11 May). For this day, the average wind speed is about 2 m s−1, which corresponds to a residence time of 3 hours for the pollutants in the simulation domain. This situation is quite representative of this month.
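The residence time quoted above follows directly from the domain size and the wind speed:

```python
# Order-of-magnitude check of the residence time: with a ~2 m/s average
# wind across the ~21 km domain, pollutants are flushed in about 3 hours.

domain_length = 21e3   # m (west-east extent of the Lille domain)
wind_speed = 2.0       # m/s (average ground wind on 11 May)
residence_time_h = domain_length / wind_speed / 3600.0  # about 2.9 h
```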
3. Some Preliminary Tests: First-Order Sensitivity Analysis
 In an inverse-modeling experiment, the first step is to choose the parameters to be inverted. A first-order sensitivity analysis identifies the parameters whose sensitivities are large enough for inversion.
3.1. Sensitivity Analysis of the Cost Function
 The purpose of this section is to investigate the sensitivity of a cost function, which describes the discrepancy between model outputs and observational data, with respect to the input parameters of Polair3D. Similar studies have already been performed, but they were not devoted to cost functions (for instance, Segers over England and Menut over Paris). This is a key step in order to assess the feasibility of inverse modeling.
3.1.1. A Few Notations
 In the following, we consider that the model outputs of Polair3D are of the form

c = f(k)

where c is a vector of space- and time-distributed chemical concentrations, called the “state vector,” and k is the vector of all the input parameters of the chemistry-transport model (see below for examples).
 The observations may be compared to c thanks to an observation operator, usually written H. In our case (ground observations), H is a projection matrix, which maps the vector c to the values of several chemical concentrations at the monitoring stations (at given spatial locations and given dates). Hc is therefore the vector of observations deduced from the state c: (Hc)_i is the i-th observation, where i is an index labeling time, space and chemical species; obs_i is the corresponding measured concentration for O3, NO or NO2 (given by the monitoring network AREMA).
 The cost function is defined in order to estimate the discrepancy between the observations (obs) and the numerical results provided by the model (Hf(k)). It is usually written in the following form:

J(k) = (Hf(k) − obs)^T R^{−1} (Hf(k) − obs)

where R is the observational error covariance matrix.
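A minimal numerical sketch of the projection operator H and of a quadratic cost with covariance R follows; the state size and station indices are hypothetical:

```python
import numpy as np

# Minimal sketch of the observation operator H (a projection onto the
# station locations) and of the cost J = (Hc - obs)^T R^{-1} (Hc - obs).
# The state size and station indices below are hypothetical.

n_state = 12
station_indices = [2, 5, 9]
H = np.zeros((len(station_indices), n_state))
H[np.arange(len(station_indices)), station_indices] = 1.0  # projection matrix

def cost(c, obs, R):
    """Quadratic discrepancy between projected state and observations."""
    d = H @ c - obs
    return float(d @ np.linalg.solve(R, d))

c = np.linspace(0.0, 11.0, n_state)                    # fake state vector
obs = c[station_indices] + np.array([1.0, -1.0, 0.5])  # fake observations
R = np.eye(3)                                          # R = I, as in the text
J = cost(c, obs, R)                                    # here J = 1 + 1 + 0.25
```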
 In order to perform a sensitivity analysis, the input parameter vector k is multiplied by a scalar α. Typically, α ranges from 0.5 to 1.5 in order to model perturbations of magnitude 50% for k. The reference case is given by α = 1. The cost function is then viewed as a function of α, with the input parameters k fixed at their reference values:

J(α) = (Hf(αk) − obs)^T R^{−1} (Hf(αk) − obs)
Hereafter, we assume that the observations are uncorrelated and that the observation error variances are the same for all observations (O3, NO and NO2).
 The relative standard deviation of observational errors is usually estimated to be twice as large for NO2 as for O3 (see Table 1). In our case, the average concentration of O3 is twice that of NO2. The absolute standard deviations of observational errors are therefore roughly the same (the absolute standard deviation being the product of the relative standard deviation and the mean value).
Table 1. Standard Deviation of Observational Errors Related to the Concentration Value
Concentrations and standard deviations are given in ppb. Data from Blond.
 The situation is different for NO. Roughly speaking, in our case, the average concentration of NO is six times less than the average concentration of O3. The key point is, however, the high representativity error associated with NO that is probably much higher than for O3. By taking R = I, we have implicitly set the ratio between the two resulting observation errors to 6. This test is then only a “numerical test” with default values (because of the lack of knowledge for these parameters).
 In practice, R = I, the identity matrix. The cost function is then a spatial and time average, over the monitoring network and over 1 day (for this section), respectively.
 The evolution of J with respect to the perturbation parameter α has been computed for several input parameters k (Figure 4): (1) the lateral boundary conditions provided by the continental simulation for NO, NO2, O3 and the remaining species; (2) the vertical eddy coefficient Kz; (3) the kinetic rate of the reaction O3 + NO → NO2 (in order to quantify the segregation effects on chemical kinetics); (4) the emission fluxes for the primary pollutants NO, NO2, CO, SO2 and VOC; (5) the dry deposition velocities for NO2, O3 and the remaining species; (6) the attenuation coefficient for photolysis (parameterizing the effects of clouds).
 The simulations have been performed over 1 day (11 May). The results depend on the chosen day but are representative of the typical sensitivity levels (results not reported here). In the following, the sensitivity is plotted with respect to a variation of α ranging from 0.5 to 1.5, which may not correspond to the actual uncertainties of all the parameters. Even if the magnitude of these uncertainties may be more or less known [see, e.g., Hanna et al., 2001], we have chosen to quantify the impact of similar perturbations in the inputs. Notice that the scale of each figure has been adapted to the values of the cost function.
 As ozone is a regional/continental pollutant, it is logical to get a large impact of ozone boundary conditions. On the other hand, the impact of NOx boundary conditions is much weaker (Figure 4a).
 The dependence on Kz at the first level (30 m) and strictly above is shown in Figure 4b. As expected, the sensitivity to Kz is higher at 30 m than above.
 The segregation effect has been parameterized by multiplying the kinetic rate of the formation of NO2 from O3 and NO. The impact is plotted in Figure 4c. Notice that the real value is highly uncertain (the segregation effect is usually not described by comprehensive 3D chemistry-transport models).
 The sensitivity with respect to emissions is illustrated in Figure 4d. A key result is the strong impact of NO emissions.
 The dry deposition velocity of O3 is also a key parameter as compared to the other deposition velocities (Figure 4e). The parameterization for cloud attenuation of photolysis is also not well known, because of the difficult diagnosis of clouds; hence the sensitivity plotted in Figure 4f (that has a quite low value as compared to other ones) may be underestimated.
 To summarize, these preliminary results emphasize the impact of the boundary conditions for ozone and the impact of NOx emissions.
 Notice that the cost function J does not systematically admit a local minimum in the range [0.5,1.5] for α.
3.2. Time and Space Impact of NOx Emissions
 A key issue for inverse modeling is to reduce the dimension of the control space (that is to say the number of degrees of freedom to be optimized). The emission data is given by 3D fields (two dimensions for surface emissions and one dimension for time) for every emitted species, which gives a very large number of parameters. According to the previous tests, the emission data that have the greatest impact are the NOx emissions. Hence the control space should be reduced by first discarding all other emitted species.
 To further reduce the control space, we now investigate the space and time impact of given NOx emissions: (1) For the spatial impact, the NOx emissions of the grid cell (15,10) (arbitrarily chosen) are perturbed by +30% at each time step. The differences in the daily averages are then computed for species O3, NO and NO2 (Figure 5). (2) For the time impact, a perturbation of +30% is applied at 0300 UT to NOx emissions in all grid cells. We then compute the time evolution (on an hourly basis) of the differences in the spatial averages.
 The impact of a perturbation in NOx emissions is highly local in space. The largest difference for the three output species (NO, NO2, O3) is located in the grid cell where the emission occurs. In the vicinity of this cell, the difference is reduced by a factor of 20 for NO and by a factor of 6 for NO2 or O3. A few kilometers further, the emissions of NOx have no more impact.
Figure 6 illustrates how long a perturbation in NOx emissions affects the monitored concentrations. After 3 hours, the impact may be neglected. This means that an observation may carry information about the emissions of the 3 previous hours, which may be partially related to the residence time in the domain. Notice that these results cannot be taken as general: They are specific to this case.
4. Twin Experiments for Inverse Modeling of Emissions
 From the previous section, it appears that NOx emissions have a strong impact and are suited for inverse modeling experiments. In this section, further selection of the parameters to be inverted is based on the uncertainty associated with NOx emission parameters.
 Twin experiments make it possible to validate the data assimilation system and to draw preliminary conclusions about the consequences of observation errors and model errors.
4.1. Some Notations
4.1.1. Control Variables
 An accurate emission inventory has been made over Lille, in the framework of the French Program PREDIT (program of research, experimentation and innovation in land transport) in order to evaluate the health impact of emissions (http://www.certu.fr/doc/env/echange/predit/predit.htm). The spatial distribution of emissions is therefore assumed to be fairly accurate. The situation is quite different for the time distribution, which is given by monthly, daily and hourly coefficients. These coefficients are derived from average situations.
 We have thus chosen, as control parameters, the time distribution of the most sensitive emitted species, namely NOx. The emissions are parameterized by

E(x, t) = α(t) NOx(x, t)

where NOx(x, t) are the emissions given by the emission inventory and α(t) are hourly coefficients applied to the emissions over the whole domain. Inverse modeling of α(t) is therefore performed for 1 day (24 coefficients). In the reference case, α(t) = 1. The focus has been put on weekdays (whose emissions differ from those of weekends).
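Applying the hourly coefficients α(t) to an inventory field is a simple broadcast over the spatial dimensions. The grid size and inventory values below are synthetic, for illustration only:

```python
import numpy as np

# The control vector is the set of 24 hourly coefficients alpha(t); the
# corrected emissions are alpha(t) * NOx(x, t). Synthetic inventory values.

nt, nx, ny = 24, 21, 24
rng = np.random.default_rng(0)
inventory = rng.uniform(0.5, 2.0, size=(nt, nx, ny))  # NOx(x, t), fake values

alpha = np.ones(nt)   # reference case: alpha(t) = 1
alpha[6:9] = 1.3      # e.g., raise the 0600-0900 UT emissions by 30%

# Broadcast the hourly coefficients over the two spatial dimensions:
corrected = alpha[:, None, None] * inventory
```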
 The choice of these control parameters is guided by (at least) two criteria: (1) The impact on the cost function has to be large enough (see section 3). (2) They should remain valid for other days. Notice that the time distributions for the weekdays are similar (Figure 7). The purpose is then to obtain an improved time distribution of the representative NOx emissions for a weekday.
4.1.2. Choice of the Cost Function
 Inverse modeling requires the specification of error statistics for three kinds of data: model outputs, measurements and control parameters (for instance, through background terms). It is usually assumed that the model is perfect, and we make the same assumption here.
 The estimation of the emission parameters may be an ill-posed problem, which may require the use of penalty terms in the cost function (for instance, through a Tikhonov regularization). In our case, the number of observational data is large enough. In order to avoid a sensitivity with respect to doubtful background terms, the cost function only describes model-to-data discrepancies. This approach is specific to our case.
 Moreover, the observational error is assumed to be constant over all the monitoring stations.
 On the basis of these assumptions, the cost function is

J(α) = Σ_i [(Hf(α))_i − obs_i]^2

where i labels the time, space and chemical “position” of the observations. The dependence of the model output is only written with respect to the NOx emission coefficients α. The model is taken as a strong constraint (no model error), a constant weight is given to all observations and no background term is included.
 The objective is then to minimize J(α) with respect to α.
4.1.3. Numerics and CPU Performance
 This minimization problem has been solved with the iterative algorithm BFGS [Byrd et al., 1995], which belongs to the family of gradient algorithms and therefore requires the gradient ∇αJ.
 As previously mentioned, Polair3D has been built so that an adjoint mode is easily available through automatic differentiation [Mallet and Sportisse, 2004]. Polair3D is written in Fortran 77 and may therefore be automatically differentiated by O∂yssée (developed at Inria; Faure and Papegay). To reduce the computational costs, only the differentiated LU factorization and solver for the chemistry are replaced by hand. The main constraint is to make the appropriate calls to the differentiated code and to check the validity of the adjoint model with finite-difference comparisons (the so-called Taylor tests).
 The ratio of the CPU time needed for the adjoint computation (21 minutes per day with a 3 GHz Pentium IV processor) to the CPU time needed for the forward computation (about 3 minutes) is approximately 7. This ratio is not optimal, but this is not an issue in this case.
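The finite-difference (Taylor) test mentioned above can be sketched on a toy quadratic cost whose exact gradient plays the role of the adjoint-computed gradient:

```python
import numpy as np

# Finite-difference ("Taylor") test of a gradient: for a correct gradient,
# [J(x + eps*d) - J(x)] / (eps * <grad J(x), d>) tends to 1 as eps -> 0.
# Here J is a toy quadratic; its exact gradient stands in for the adjoint.

def J(x):
    return 0.5 * float(x @ x)

def grad_J(x):            # plays the role of the adjoint model's output
    return x

x = np.arange(1.0, 6.0)   # test point
d = np.ones(5)            # perturbation direction
ratios = []
for eps in (1e-2, 1e-4, 1e-6):
    ratios.append((J(x + eps * d) - J(x)) / (eps * float(grad_J(x) @ d)))
# The ratios approach 1 as eps decreases, validating the gradient.
```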
 Each iteration of the procedure needs many evaluations of the cost function and of the gradient. For instance, 50 iterations of BFGS require 20 hours of CPU time.
 No automatic stopping criterion was used: As the application was not in an operational context, the iterative algorithm was simply run until convergence. Notice that most of the decrease in the cost function is achieved during the first iterations, so that the number of iterations can be limited without lowering the accuracy of the results (see below).
4.2. Twin Experiments Without Perturbations
 The first purpose is to validate the numerical algorithms (gradient computation, minimization) on the basis of twin experiments without perturbations. These experiments consist in using model-generated data as observational data. The numerical observations are generated with the “true” values of the control parameters, α^t (t stands for true). These values are then retrieved, starting from a first guess α^b (b stands for background) in the minimization algorithm.
 In practice, α_i^t = 1 and we have chosen a 30% underestimation for the first guess: α_i^b = 0.7.
 The evolution of the cost function with respect to the iterations of BFGS is plotted in Figure 8. The true parameters are recovered after optimization up to round-off errors (see, for instance, Figure 9 for the parameter α8).
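A toy version of such a twin experiment can be written with a random linear operator standing in for the chemistry-transport model; the true coefficients (all equal to 1) are recovered from the 30%-underestimated first guess (scipy's L-BFGS-B implements the algorithm of Byrd et al. [1995]):

```python
import numpy as np
from scipy.optimize import minimize

# Toy twin experiment: a random linear operator M stands in for the model
# mapping the 24 hourly coefficients alpha to observations. Synthetic data
# are generated with alpha_true = 1; the minimization starts from 0.7.

rng = np.random.default_rng(42)
M = rng.uniform(0.1, 1.0, size=(120, 24))   # 120 synthetic observations
alpha_true = np.ones(24)
obs = M @ alpha_true                        # perfect, unperturbed data

def J(alpha):
    d = M @ alpha - obs
    return 0.5 * float(d @ d)

def grad_J(alpha):
    return M.T @ (M @ alpha - obs)          # exact adjoint of the toy model

res = minimize(J, 0.7 * np.ones(24), jac=grad_J, method="L-BFGS-B")
# res.x recovers alpha_true up to the optimizer's tolerance.
```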
4.3. Twin Experiments With Perturbations
 The former experiment has been done without any perturbations and has checked the validity of the numerical tools. Before applying the approach to a real case it is however necessary to evaluate the ability of the system to deal with perturbations. The underlying issue is to estimate the quality of the results.
 Two experiments have been performed: (1) The first experiment is related to observational errors: The numerical observations are therefore perturbed. (2) The second experiment is related to model errors: Some input parameters (see below for the details) are perturbed in order to generate the numerical observations. It is an easy way to generate model errors.
4.3.1. Impact of Observational Errors
 The observational error has in practice two components: A first component is related to the errors made in the measurement process; a second component is the so-called error of representativeness and is related to the mismatch between the observation resolution (for instance, a point measurement) and the model resolution (for instance, a cell of 1 km by 1 km). The error of representativeness is probably the most important part, for chemical data, because of the heterogeneity of the chemical species concentrations in the vicinity of sources at small scales.
 A perturbation is applied to the model outputs in order to parameterize the observational error. This error is assumed (1) to be uncorrelated between chemical species and between different spatial locations; (2) to be correlated in time in order to avoid unrealistic fluctuations of observational errors; (3) to be larger for NO than for NO2 and O3 (with a ratio of 2) because of the local nature of NO. Notice that this is a value chosen arbitrarily for these twin experiments.
 Thus the observational error ε^n at time n is generated from a Gaussian perturbation β^n, whose standard deviation is related to the concentration value (see Table 1), combined with a correlation between successive time steps.
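One common way to generate such a temporally correlated error (an assumption here; the exact formulation used in the study is not reproduced) is a first-order autoregressive sequence:

```python
import numpy as np

# AR(1) noise: eps[n] = rho * eps[n-1] + beta[n], with Gaussian beta scaled
# so that the stationary standard deviation is sigma. The correlation
# coefficient and standard deviation below are illustrative.

def correlated_error(n_hours, sigma, rho, rng):
    """Temporally correlated, spatially uncorrelated observational error."""
    beta = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2), size=n_hours)
    eps = np.zeros(n_hours)
    for n in range(1, n_hours):
        eps[n] = rho * eps[n - 1] + beta[n]
    return eps

rng = np.random.default_rng(0)
eps = correlated_error(n_hours=120, sigma=2.0, rho=0.8, rng=rng)
```

The scaling of beta guarantees that the long-run standard deviation of eps equals sigma, so the perturbation remains consistent with the standard deviations of Table 1.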
 A typical example of such an observational error is plotted in Figure 10. The impact is low for the optimized set of parameters (see Figure 11).
4.3.2. Impact of Model Errors
 Three kinds of model errors can be distinguished: (1) The first is the errors related to forcing fields (meteorological data, dry deposition parameterizations, boundary conditions, etc.). This corresponds in practice to all input parameters that may be uncertain but that are not optimized. (2) The second is the errors related to the model itself: For instance, the segregation effects are neglected and the kinetic rates are used as if the fields were well mixed. (3) The third is the errors related to numerical algorithms: For instance, the numerical scheme used for advection may induce large numerical diffusion.
 These model errors lead to discrepancies between the model outputs and the observational data. A minimization of the cost function with respect to the emission parameters may be only a fit to observational data while the true reason for a large value of the cost function may be related to a large model error. For instance, a reduction of NOx emissions may have a similar effect as an increase in the dry deposition velocities of NOx.
 It is therefore important to assess the impact of model errors on the quality of the inverse modeling. In practice, the model errors are parameterized by applying a Gaussian perturbation to the input parameters listed in section 3, except for the emissions of NOx. The uncertainties are supposed to follow a Gaussian law with a variance of 50%, which is consistent with the values given by Hanna et al. [2001]. Notice that the model error is supposed to be unbiased.
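Such a parameterization of the model error can be sketched as follows, assuming each uncertain input field is scaled by an unbiased Gaussian factor; the field names and the interpretation of the 50% spread are illustrative:

```python
import numpy as np

def perturb_inputs(inputs, rel_std=0.5, seed=0):
    """Apply an unbiased Gaussian perturbation to each input field.

    inputs  : dict of input parameter arrays (e.g. Kz, deposition velocities)
    rel_std : relative spread of the perturbation (50% here, after
              Hanna et al. [2001]); mean scaling factor is 1 (unbiased)
    """
    rng = np.random.default_rng(seed)
    perturbed = {}
    for name, field in inputs.items():
        factor = 1.0 + rng.normal(0.0, rel_std)
        perturbed[name] = np.asarray(field) * factor
    return perturbed

# Illustrative input fields (names are hypothetical).
inputs = {"kz": np.ones(10), "vdep_no2": np.full(10, 0.4)}
print(sorted(perturb_inputs(inputs)))   # → ['kz', 'vdep_no2']
```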
 The minimization of the cost function leads to an 85% reduction of its value. The new emission profile is plotted in Figure 12. It is noteworthy that the period 0600–0700 UT in the morning is highly sensitive to model errors.
4.4. Brief Summary of the Twin Experiments
 The preliminary case with the numerical observations has led to the following conclusions: (1) the numerical algorithms (including the adjoint model) are validated; (2) the quality of the optimized parameters is not strongly degraded by observational errors and model errors. The next section describes the application to the real case over Lille and investigates the quality of the optimized parameters through second-order sensitivities.
5. Application to a Real Case
 The observations are now provided by the monitoring network AREMA. The forward model is effective for modeling O3 and NO2 but is not able to give an accurate forecast of NO. We have therefore decided to perform inverse modeling with and without the observations of NO. In a first experiment, 4 observations of O3 and 10 observations of NO2 are available per hour. The uncertainties in the observations are supposed to be Gaussian. Their standard deviations are described in Table 1.
 Our approach is summarized in the following way: (1) The time distribution of α(t) is optimized during a learning period (typically one week). (2) The improvement in the emission inventory is checked by using the optimized set of parameters during a verification period (typically a few weeks after the learning period).
5.1. Inverse Modeling From 11 May to 15 May
 The learning period is the week from 11 May to 15 May. Two kinds of experiments have been performed in order to estimate the daily variability of the optimized distribution: (1) In a first approach, each day is independently used as a learning period (5 learning periods), which leads to 5 sets of optimized parameters. (2) In a second approach, the week (actually 5 days, the weekend being excluded) is used as a learning period as a whole, which leads to a unique set of optimized parameters.
 In practice, a simulation period of 6 hours has been added before the beginning of the periods in order to take into account the model spin-up and to lower the influence of the initial conditions.
 The optimized parameters α are plotted in Figure 13 for the different learning periods defined above. For the first experiment (learning periods of 1 day), convergence of the minimization algorithm (with a stringent stopping criterion) is obtained after a number of iterations ranging from 59 to 128. The reduction of the cost function ranges from 25% to 66% (see Table 2). The global learning period (second case) requires 42 iterations and leads to a reduction in the cost function of 20%.
Table 2. Values of the Cost Function Before and After Optimization and Performance of the Convergence of the BFGS Algorithm for the Different Learning Periods (columns: initial cost function, ×10−5; optimized cost function, ×10−5; iterations of BFGS)
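The variational setup can be illustrated with a toy quadratic cost minimized by a limited-memory BFGS routine (SciPy's L-BFGS-B); the random linear operator below merely stands in for Polair3D and its adjoint, and every numerical value is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear "model": 24 hourly emission coefficients -> 48 observations.
rng = np.random.default_rng(1)
M = rng.random((48, 24))
alpha_true = 1.0 + 0.3 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
obs = M @ alpha_true

def cost(alpha):
    """Quadratic misfit between toy model outputs and observations."""
    r = M @ alpha - obs
    return 0.5 * float(r @ r)

def grad(alpha):
    """Gradient of the cost, i.e. the adjoint of the toy linear model."""
    return M.T @ (M @ alpha - obs)

# First guess: the unmodified inventory (alpha = 1 for every hour).
res = minimize(cost, x0=np.ones(24), jac=grad, method="L-BFGS-B")
```

In the study the gradient is provided by the adjoint of Polair3D obtained by automatic differentiation; here it is simply the transpose of the toy operator.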
 The runs for the 1-day learning periods illustrate the rather high variability of the optimized parameters. However, there are some similar features: (1) The coefficients corresponding to hours 7 and 8 are greater than 1. (2) The coefficients corresponding to hours 17, 18 and 19 are lower than 1.
 These two periods are highly sensitive since they are related to the emissions peaks but also to key evolutions of other processes (photolysis and vertical mixing).
 The day of 12 May is not well modeled: the initial cost function is large. We suspect that other parameters have wrong values and that the model error (as defined above) is large.
 There is also an overestimation of α during the night from Sunday to Monday and the night from Friday to Saturday. One possible realistic reason could be an overestimation of weekend-related traffic in the inventory.
 The plots labeled “11–15 May (Mean)” are related to the average of the 5 optimized sets of parameters obtained with a 1-day learning period. Notice that the distribution is similar to the distribution obtained with a 5-day learning period, labeled “11–15 May (Simulated).”
 The time distribution obtained for the emissions is then plotted in Figure 14. One key remark is the lack of symmetry between the morning and the end of the afternoon. There are many possible reasons for this: it could be due to model errors (for instance, in the growth of the mixing layer in the morning); another explanation is related to different emission regimes (cold-start emissions in the morning) that are perhaps not well represented by the emission inventory.
 The improvement in the emission inventory has been checked by applying the optimized time distribution to other weeks than the learning week. The root mean square (RMS) errors and the correlations computed for the learning week and the two weeks after are given in Table 3. The forecast skills are improved with the exception of the RMS for NO during the week of 25–30 May.
Table 3. RMS Errors and Correlations Before and After Optimization of the Time Distribution α During the Week of 11–15 May
Correlations are given in parentheses. “Reference,” before optimization; “optimized,” after optimization.
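The scores reported in Table 3 follow the standard definitions, which can be sketched as follows (the concentration values are illustrative):

```python
import numpy as np

def rms_error(model, obs):
    """Root mean square error between model outputs and observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def correlation(model, obs):
    """Pearson correlation between model outputs and observations."""
    return float(np.corrcoef(model, obs)[0, 1])

obs = [30.0, 50.0, 80.0, 60.0]   # illustrative hourly O3 observations
mod = [28.0, 55.0, 75.0, 62.0]   # corresponding model outputs
print(round(rms_error(mod, obs), 2))   # → 3.81
```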
5.3. Use of Other Learning Periods
 The variability of the optimized parameters with respect to the learning period is a way to investigate the robustness of the approach. Since there is no obvious reason for the time distribution to evolve drastically from one week to the next, this is more or less equivalent to investigating the sensitivity to meteorological conditions.
 The optimized parameters for three different learning periods (the week of 11–15 May and the two weeks after) are plotted in Figure 15. Similar behaviors are obtained for the three learning periods: high values in the morning and low values at the end of the afternoon. The fact that the results are not highly sensitive to the learning period is an indication that the approach is robust.
5.4. Second-Order Sensitivity
 The results of the inverse modeling procedure depend on many parameters that may be uncertain: the meteorological conditions, the parameterizations, other emissions, etc. They also depend on some parameters referred to as “assimilation parameters”: for instance, the first guess, the covariance matrices (if any, which is not the case in this study) or the observational operator.
 Mathematically speaking, the cost function J can be written in a more general form as J(α, kp, ka) where kp stands for the physical parameters supposed to be known (but uncertain) and ka for the assimilation parameters.
 The optimized values α are given by

α(kp, ka) = arg min_α J(α, kp, ka),

which defines a function α(kp, ka). The second-order sensitivity deals with the sensitivity of these optimized parameters with respect to kp (robustness with respect to physical parameters) and ka (impact of the monitoring network, typically).
 A comprehensive way to assess this sensitivity is to compute the partial derivatives of α with respect to kp and ka. This may be done with the Hessian matrix of J [Le Dimet et al., 2002].
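In the unconstrained case this sensitivity follows from a standard implicit-function argument applied to the first-order optimality condition:

```latex
% Optimality condition defining the optimized parameters:
\nabla_{\alpha} J\bigl(\alpha^{*}(k), k\bigr) = 0, \qquad k \in \{k_p, k_a\}.
% Differentiating with respect to k (implicit function theorem):
\frac{\partial^2 J}{\partial \alpha^2}\,\frac{\mathrm{d}\alpha^{*}}{\mathrm{d}k}
  + \frac{\partial^2 J}{\partial \alpha\,\partial k} = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}\alpha^{*}}{\mathrm{d}k}
  = -\Bigl(\frac{\partial^2 J}{\partial \alpha^2}\Bigr)^{-1}
     \frac{\partial^2 J}{\partial \alpha\,\partial k}.
```

The Hessian of J with respect to α therefore governs how the optimized parameters respond to perturbations of kp and ka.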
 It is of course a huge task to develop this Hessian matrix when J is related to 3D comprehensive models, such as Polair3D. Even if this second-order mode is available [Sportisse and Quélo, 2003], a simpler approach has been used by computing the optimized results α with perturbed values of some assimilation parameters and of physical parameters. More precisely, the study is restricted to the impact of the first guess, the impact of NO observations and the impact of Kz (which describes the mixing layer).
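The brute force alternative amounts to rerunning the whole optimization for perturbed parameter values and differencing the results. A minimal sketch, with a toy inner problem standing in for the real cost function (the dependence of the optimum on k below is purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def optimize_alpha(k):
    """Re-run the whole minimization for a given uncertain parameter k."""
    # Toy inner problem: the optimum depends smoothly on k
    # (optimal alpha is 1 + 0.1 k for every hourly coefficient).
    cost = lambda a: 0.5 * np.sum((a - (1.0 + 0.1 * k)) ** 2)
    return minimize(cost, x0=np.ones(24), method="L-BFGS-B").x

def brute_force_sensitivity(k0, dk=1e-2):
    """Central finite difference of the optimized parameters w.r.t. k."""
    return (optimize_alpha(k0 + dk) - optimize_alpha(k0 - dk)) / (2.0 * dk)

sens = brute_force_sensitivity(k0=1.0)   # close to 0.1 for each coefficient
```

Each sensitivity evaluation thus costs two full optimizations per perturbed parameter, which is affordable when only a few parameters (first guess, NO observations, Kz) are examined.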
5.4.1. Impact of the First Guess
 A first experiment has been performed in order to check the sensitivity with respect to some assimilation parameters, such as the first guess for α. As said before, there was no penalty term for the cost function (usually referred to as a background term) because the problem is well posed and does not require a regularization technique. What we call here a “first guess” is the choice of the reference values for α, that is, the initial values used for starting the minimization algorithm.
 Two choices for the first guess have been compared: The first guess is either 1 (that is to say we start from the time distribution given by the emission inventory) or is 0 (lack of information for the time distribution of emissions at the beginning).
 A key result is that the same distribution is obtained after optimization (see Figure 16). The number of iterations required for convergence is however larger for the second case as the starting point is far from the optimal values (27 iterations against 22 in the first case). This indicates that the cost function has a strongly convex behavior in a large domain around the optimal set of parameters.
 This result indicates that the optimized time distribution may be recovered without any use of the a priori information given by the emission inventory.
5.4.2. Sensitivity With Respect to the Observed Species
 Up to now, the observations of NO have not been taken into account; the impact of these observations is investigated in this section.
 The observational errors for all species are assumed to be identical (even though this is probably not the case: NO has, for instance, a larger error of representativeness because of its local nature). The results are plotted in Figure 17 with a similar qualitative behavior as before, even if the overestimation in the morning is increased. This is consistent with the strong underestimation of NO concentrations in the morning.
5.4.3. Sensitivity With Respect to Vertical Mixing
 The sensitivity at the beginning of the morning has already been mentioned. This corresponds to the transition from the nocturnal stable boundary layer to a mixed layer. It is well recognized that the parameterization of Kz has a key impact on model outputs. Another reason could be the impact of sunrise through photolysis, but our tests (not reported here) do not indicate a strong sensitivity.
 In order to assess the sensitivity of the results with respect to Kz, we have artificially shifted the time distribution of Kz forward by 1 hour. The growth of the mixed layer then starts 1 hour later than in the reference case.
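Such a shift is straightforward to apply to hourly fields; a minimal sketch, assuming the diurnal Kz profile is stored as an array indexed by hour (the profile values are illustrative):

```python
import numpy as np

def shift_kz_forward(kz_hourly, hours=1):
    """Shift the diurnal Kz profile forward in time by `hours`.

    kz_hourly : array of shape (24, ...) of hourly vertical diffusion
                coefficients; after the shift, the mixed layer starts
                growing `hours` later than in the reference case.
    """
    return np.roll(kz_hourly, shift=hours, axis=0)

kz = np.arange(24.0)              # illustrative diurnal profile
kz_shifted = shift_kz_forward(kz)
print(kz_shifted[:3])             # → [23.  0.  1.]
```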
 The optimized parameters are plotted in Figure 18 for 1 day (11 May). As expected, the main differences are obtained during the transition periods in the morning and in the afternoon. The time distribution is however not highly modified, which confirms the robustness of the result.
6. Conclusions and Future Work
 A variational approach has been used in order to perform inverse modeling of emissions at regional scale, over Lille in northern France. After a sensitivity analysis, we have chosen to optimize the time distribution of NOx on the basis of observations of O3, NO2 and NO.
 Twin experiments have proven the validity of the numerical models (especially the adjoint model of our chemistry-transport model, Polair3D, obtained by automatic differentiation). The impact of observational errors and of model errors has also been investigated through numerical tests.
 The application to one week of May 1998 has led to an optimized set of parameters. A verification test (by applying the optimized distribution to the next two weeks) has confirmed the improvement of the forecast skills.
 A brute force second-order sensitivity analysis has also been performed in order to check the robustness of the optimized parameters with respect to other uncertain parameters (first guess, meteorological conditions, Kz). The results indicate that the optimized time distribution is robust.
 Future work will be devoted to the application of such techniques at continental scale (with a focus on spatial distribution rather than time distribution). Another key point should also be to take into account model errors in the inverse modeling process. Many approaches are under investigation, ranging from combined inverse modeling of other parameters than emissions to weak formulations of the variational problem or Monte Carlo simulations of the inverse modeling procedure.
 The selection of the control parameters (the hourly coefficients for NO) is a potential limitation of the data assimilation procedure, and future work should focus on an augmented set of control parameters.
 We thank the PREDIT program and the CETE institute for providing us the emission inventory for road traffic used in this paper. We also thank Rémy Lagache for fruitful discussions about emission inventories.