Investigation of indiscriminate nudging and predictability in a nested quasi-geostrophic model

Authors

  • Hiba Omrani (corresponding author), Institut Pierre Simon Laplace, Laboratoire de Météorologie Dynamique, École Polytechnique/ENS/UPMC/CNRS, F-91128 Palaiseau CEDEX, France
  • Philippe Drobinski, Institut Pierre Simon Laplace, Laboratoire de Météorologie Dynamique, École Polytechnique/ENS/UPMC/CNRS, Palaiseau, France
  • Thomas Dubos, Institut Pierre Simon Laplace, Laboratoire de Météorologie Dynamique, École Polytechnique/ENS/UPMC/CNRS, Palaiseau, France

Abstract

In this work, we consider the effect of the indiscriminate nudging time on idealized high-resolution global model (GM) and limited-area model (LAM) simulations. The model used is a two-layer quasi-geostrophic model on the beta-plane.

The effect of nudging is studied as a function of the predictability time, following a ‘Big Brother’ experimental approach: a high-resolution ‘global’ model is used to generate a ‘reference run’. These fields are filtered afterwards to remove small scales and provide the coarse-resolution fields which are used to drive the high-resolution GM and the LAM. Comparison of the reference fields and the high-resolution runs over the same region allows the estimation of the ability of the high-resolution GM and LAM to regenerate the removed small scales. This fully nonlinear set-up mimics the configuration used for regional high-resolution atmospheric modelling.

For the high-resolution GM, the results show that the behaviour of the nudged model depends primarily on the ratio of the nudging time to the predictability time. When the nudging time is very small compared to the predictability time, the model reproduces the large scales used to drive it. On the other hand, if the nudging time is close to or larger than the predictability time, the nudging effect is weak and both large and small scales are poorly reproduced compared to the reference fields. The best result is obtained with a nudging time close to half the predictability time. This technique clearly improves the model's capacity to reproduce the reference fields.

For the high-resolution LAM, our results show that for a sufficiently small domain the simulation is largely controlled by the lateral boundary conditions (LBCs) and is quasi-insensitive to nudging. However, if the domain size exceeds a few Rossby radii, the high-resolution LAM becomes sensitive to initial conditions and the control exerted by LBCs becomes insufficient to prevent a divergence from the driving fields. Although the reconstructed fine scales are significantly damped, they are surprisingly well correlated to their reference values in a deterministic sense, not a statistical sense. Copyright © 2011 Royal Meteorological Society

1. Introduction

The atmosphere is one of the most challenging geophysical systems to simulate because of the number of interacting components, the complexity of the relevant processes, and their wide range of time-scales. Spatial scales also vary greatly, ranging from the micro-scale of cloud droplets to the planetary scale of the atmospheric circulation. Numerical modelling constitutes a powerful approach to further our understanding of the mechanisms responsible for the maintenance of the atmospheric system's dynamic equilibrium and variability, and to probe its response to changes in its external forcing. To date, general circulation models (GCMs), used to simulate climate or provide analyses or reanalyses of the atmosphere, resolve only the broader scales of atmospheric circulations (around 100 km grid resolution). Hence there is a need to develop tools for downscaling the large-scale fields to generate a finer-scale description of regional weather and climate. The starting point of dynamical downscaling is typically a set of coarse-resolution large-scale fields which are either used to drive a high-resolution GCM or to provide the initial, lateral and surface boundary conditions to a nested limited-area model (LAM).

Both GCMs and LAMs are sensitive to the resolution and to the content of physical parametrizations, and require a spin-up. LAMs also present specific issues such as sensitivity to the size of the simulation domain, to the boundary conditions, and to the frequency of update of the boundary conditions (Bhaskaran et al., 1996; Seth and Giorgi, 1998; Noguer et al., 1998; Denis et al., 2002a, 2003). For long-term simulations, Lo et al. (2008) showed that continuous runs can score very poorly when compared to observations, and that periodically reinitialized simulations give better results than continuous runs. The practical solution to this issue is to prevent large and unrealistic departures between the coarse-resolution driving fields and the high-resolution fields by nudging the model towards the coarse-resolution driving fields. Nudging consists in adding to the conservation equations a Newtonian relaxation towards the driving fields.

Hence experience shows that, in order to reproduce a given weather history over a long period of time, a circulation model needs not only to be properly initialized but also to be nudged. Our basic hypothesis to explain this fact is that the need to nudge a high-resolution model results from its limited predictability. Indeed, it is well known that atmospheric dynamics are sensitive to initial conditions (Thompson, 1957; Lorenz, 1963). Due to this inherent unpredictability of the atmosphere, small differences between a non-nudged model and the targeted state of the atmosphere amplify exponentially with time so that, after a finite time referred to as the predictability time, the model diverges from its target. In the specific case of a LAM, it would seem that nudging is not necessary since the model is controlled by its lateral boundary conditions, which coincide with the targeted state of the atmosphere. The need to nudge would therefore indicate that it is not fully controlled by its lateral boundaries. In fact, since for a given model a specific simulation is entirely determined by the initial and boundary conditions, the amount of control exerted by the boundaries can be estimated by analysing the sensitivity of the model to initial conditions. Hence it appears that, for both GCMs and LAMs, the lack of predictability due to sensitivity to initial conditions is intimately related to the practice of nudging towards driving fields.

Indiscriminate and spectral nudging are increasingly implemented in numerical models (GCM and LAM) (e.g. Kuo and Williams, 1992a, 1992b; Miguez-Macho et al., 1992; von Storch et al., 2000; Biner et al., 2002; Genthon et al., 2002; Lo et al., 2008; Salameh et al., 2010). Indiscriminate nudging was originally developed for assimilation purposes (Davies and Turner, 1977; Schraff, 1997; Li et al., 1998; Vidard et al., 2003) but has become increasingly popular to drive Regional Climate Models (RCMs). The term ‘indiscriminate nudging’ is thus not as commonly used as ‘spectral nudging’. Salameh et al. (2010) use the term ‘indiscriminate nudging’; Anthes (1974), Hoke and Anthes (1976) and Stauffer and Seaman (1990) refer to ‘data assimilation’; and other synonymous terminologies exist, such as ‘dynamical relaxation’ (Davies and Turner, 1977) or ‘grid’ or ‘analysis nudging’. In the following we will use ‘indiscriminate nudging’.

Both indiscriminate and spectral nudging require that some relaxation time constant be adjusted. Indiscriminate nudging has been widely used for testing purposes, sensitivity studies, assimilation, and mesoscale or boundary-layer studies (e.g. Vidard et al., 2003; Lo et al., 2008; Salameh et al., 2010), and also for investigations of regional climate variability (e.g. Genthon et al., 2002; Coindreau et al., 2007, with the global stretched-grid regional model LMDZ; and Zhang et al., 2009, with the limited-area model WRF). Nudging techniques have demonstrated their usefulness in simulating regional weather and climate, especially in regions where forcing due to complex orography or coastlines regulates the spatial distribution of atmospheric variables (Radu et al., 2008), in particular for orographic precipitation (Schraff, 1997; Tang et al., 2010) and regional-scale climate variability (Genthon et al., 2002; Coindreau et al., 2007).

Currently the relaxation time is chosen based on trial and error and a posteriori validation, rather than on an a priori understanding leading to suitable values. One key aim of this work is to explore the possibility of relating this tunable parameter to physical processes. The relation between the predictability time-scale τp and the relaxation time-scale τ is therefore investigated in this article. To test this hypothesis we adopt an approach similar to Salameh et al. (2010). In that study, the impact of nudging on a high-resolution model was investigated using a toy model consisting of a linear transport equation with a Newtonian relaxation term. The toy model suffers from the same drift phenomenon as a complex atmospheric model and needs to be nudged as well. Salameh et al. (2010) predict the existence of an optimal nudging time which depends on the time-scale over which numerical errors significantly affect the accuracy of the solution at large spatial scales, and on the typical time-scale of small-scale phenomena that are not resolved in the coarse-resolution driving fields. However, since the toy model is linear, its drift is solely due to accumulating numerical errors and not to a genuine unpredictability. To overcome this limitation we base our analysis on a two-layer quasi-geostrophic (QG) model which presents more similarities to atmospheric dynamics. We use a ‘Big Brother’ (BB) experimental approach, where the true atmospheric state is known, unlike when RCMs are used in practice (Denis et al., 2002b). We address the relationship between nudging and predictability in two steps:

  • 1. We first consider the relationship between nudging and predictability by using a refined ‘global’ QG model as a high-resolution GCM. We especially investigate the ability of the high-resolution GCM to produce the correct small scales depending on the nudging time and the predictability time.
  • 2. We investigate the additional effect of lateral boundaries on predictability and nudging, using a limited-area QG model as a LAM. This study complements the few studies investigating the predictability of LAMs (Anthes et al., 1985, 1989; Errico and Baumhefner, 1987; Van Tuyl and Errico, 1989; Vukicevic and Paegle, 1989; Vukicevic and Errico, 1990; De Elia et al., 2002). By using a strongly idealized model we are able to explore a wide parameter space in terms of regional domain size and nudging time.

Indiscriminate nudging is available in many up-to-date regional numerical models, such as the limited-area models MM5 (Grell et al., 1995), WRF (Skamarock et al., 2005), Méso-NH (Lafore et al., 1998) and RAMS (Pielke et al., 1992), and the global stretched-grid regional model LMDZ (Genthon et al., 2002; Coindreau et al., 2007). For these reasons, and also for simplicity, we focus here on indiscriminate nudging. Spectral nudging is the object of ongoing work.

This paper is organized as follows. A description of the ‘global’ and ‘limited-area’ two-layer QG models and predictability issues is given in section 2. Downscaling using a high-resolution GCM is investigated in section 3, and the results from downscaling using a high-resolution LAM version are presented and discussed in section 4. Section 5 summarizes the results and points out some open research questions needing further investigation.

2. The quasi-geostrophic model

2.1. Equations

We use the flat-bottom two-layer quasi-geostrophic (QG) model on a β-plane derived by Haidvogel and Held (1980), modifying it only to implement a limited-area version and to include nudging terms. For completeness we reproduce in this subsection the derivation by Haidvogel and Held (1980). The dimensional form of the equations of motion can be written:

∂Q1/∂t + J(Ψ1, Q1) = −ν∇⁴(∇²Ψ1)    (1)
∂Q2/∂t + J(Ψ2, Q2) = −ν∇⁴(∇²Ψ2) − κ∇²Ψ2    (2)

where the subscripts 1 and 2 refer to the upper and lower layers of the model, respectively. The quantities Ψi and Qi are the stream function and potential vorticity (PV) for layer i, J is the horizontal Jacobian operator, J(Ψi, Qi) = ∂xΨi ∂yQi − ∂yΨi ∂xQi, and ∇² = ∂²/∂x² + ∂²/∂y² is the horizontal Laplacian operator. The two layers have the same depth H at rest. The quantity ν is a numerical diffusion coefficient preventing the build-up of enstrophy at high wavenumbers and κ is a surface friction coefficient. The wind components (ui, vi) and the PV are related to the stream function through the diagnostic relations

(ui, vi) = (−∂Ψi/∂y, ∂Ψi/∂x)    (3)
Q1 = ∇²Ψ1 + βy + (Ψ2 − Ψ1)/(2Rd²)    (4)
Q2 = ∇²Ψ2 + βy + (Ψ1 − Ψ2)/(2Rd²)    (5)

In these equations, Rd = √(g′H)/(√2 f0) is the Rossby radius, g′ = gΔθ/θ0 is the reduced gravity, where Δθ is the potential temperature difference between the two layers, θ and θ0 being the potential temperature and the reference potential temperature, respectively, and f0 is the Coriolis parameter. The upward displacement η of the interface between the two layers is given by η = (f0/g′)(Ψ2 − Ψ1).

Equations (1) and (2) state that the upper- and lower-layer PV are conserved following the horizontal flow, except for the effects of dissipative processes. These latter processes are assumed to act on the relative vorticity (∇²Ψi, i = 1,2) through a biharmonic lateral diffusion in layers 1 and 2 and a linear surface drag in layer 2 only, with coefficients ν and κ, respectively. Following Haidvogel and Held (1980), we consider a horizontally uniform time-averaged temperature gradient (directed north–south) and the associated zonal vertical shear. The mean velocity is confined to the upper layer, so that Ū1 = U and Ū2 = 0, where Ūi and V̄i = 0 are the mean zonal and meridional wind components, respectively:

Ψ1 = −Uy + ψ1    (6)
Ψ2 = ψ2    (7)

where ψi (i = 1,2) is the deviation of the stream function from its time average, i.e. Ψi = Ψ̄i + ψi. Similar notation is used for the other variables (e.g. the potential vorticity).

Non-dimensionalizing (x, y, t, ψ) by (Rd, Rd, Rd/U, URd) (x and y are the zonal and meridional coordinates), the QG PV equations for the transient flow become

∂q1/∂t + J(ψ1, q1) + F1 = −ν̂∇⁴(∇²ψ1)    (8)
∂q2/∂t + J(ψ2, q2) + F2 = −ν̂∇⁴(∇²ψ2) − κ̂∇²ψ2    (9)

where the eddy potential vorticities are

q1 = ∇²ψ1 + (ψ2 − ψ1)/2    (10)
q2 = ∇²ψ2 + (ψ1 − ψ2)/2    (11)

The forcing terms

F1 = ∂q1/∂x + (β̂ + 1/2) ∂ψ1/∂x    (12)
F2 = (β̂ − 1/2) ∂ψ2/∂x    (13)

represent the effects of the mean temperature and planetary vorticity gradients on the transient flow. All variables in Eqs (8)–(13) are non-dimensional. The parameters which appear in these equations are β̂ = βRd²/U, κ̂ = κRd/U and ν̂ = ν/(URd³). In the following, for the sake of simplicity, the hats on non-dimensional variables will be omitted.
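These scalings can be checked by direct substitution; the following sketch (in LaTeX form) assumes, as in Eqs (1)–(2), that the dissipation terms act as ν∇⁴(∇²Ψ) and κ∇²Ψ:

```latex
% Substitute x = R_d \hat{x}, y = R_d \hat{y}, t = (R_d/U)\hat{t}, \Psi = U R_d \hat{\psi},
% so that Q scales as U/R_d and \partial Q/\partial t as U^2/R_d^2. Dividing the
% beta, drag and biharmonic terms by the tendency scale U^2/R_d^2 gives
\hat{\beta} = \frac{\beta R_d^2}{U}, \qquad
\hat{\kappa} = \frac{\kappa R_d}{U}, \qquad
\hat{\nu}   = \frac{\nu}{U R_d^3}.
```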

2.2. Numerical implementation of the global and limited area QG models

The numerical integration of the two-layer QG model is based on second-order spatial finite differences and an explicit third-order Runge–Kutta temporal scheme. With finite differences, it is easy to apply the same discretization in the global and limited-area models. The main tasks to be performed during a time step are the computation of the PV trends ∂tqi given ψi and the inversion of the PV, i.e. the computation of the stream functions ψi given the PV fields qi.
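The text does not state which third-order Runge–Kutta variant is used; purely as an illustration, here is a minimal Python sketch of one common choice, the three-stage Shu-Osher (SSP) scheme, where `trend` is a placeholder for the PV tendency of Eqs (8)-(9):

```python
def rk3_step(q, dt, trend):
    """One explicit third-order Runge-Kutta step (Shu-Osher SSP form).

    q     : array holding the PV of both layers
    trend : callable returning dq/dt, e.g. the right-hand side of Eqs (8)-(9)
    """
    q1 = q + dt * trend(q)                               # first Euler stage
    q2 = 0.75 * q + 0.25 * (q1 + dt * trend(q1))         # second stage
    return q / 3.0 + 2.0 * (q2 + dt * trend(q2)) / 3.0   # final combination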

Computation of the PV trend involves discrete approximations of the Jacobian and Laplacian operators. For the Jacobian we use Arakawa's Jacobian, which preserves energy and potential enstrophy at the discrete level, preventing spurious energy transfers to small scales (Arakawa, 1966). The Laplacian is approximated using the standard five-point stencil. The iterated Laplacian is approximated by iterating this approximate operator.
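As an illustration, a minimal NumPy sketch of both operators on the doubly periodic grid of the global model; the axis layout (x first, y second) and the grid spacing `d` are assumptions, and the Jacobian is the standard nine-point Arakawa (1966) average of its J++, J+x and Jx+ forms:

```python
import numpy as np

def shift(a, dx, dy):
    """a[i+dx, j+dy] with periodic wrap-around (global model)."""
    return np.roll(np.roll(a, -dx, axis=0), -dy, axis=1)

def arakawa_jacobian(p, q, d):
    """Nine-point Arakawa (1966) Jacobian J(p, q), conserving energy and
    enstrophy at the discrete level (average of J++, J+x and Jx+)."""
    j1 = ((shift(p, 1, 0) - shift(p, -1, 0)) * (shift(q, 0, 1) - shift(q, 0, -1))
          - (shift(p, 0, 1) - shift(p, 0, -1)) * (shift(q, 1, 0) - shift(q, -1, 0)))
    j2 = (shift(p, 1, 0) * (shift(q, 1, 1) - shift(q, 1, -1))
          - shift(p, -1, 0) * (shift(q, -1, 1) - shift(q, -1, -1))
          - shift(p, 0, 1) * (shift(q, 1, 1) - shift(q, -1, 1))
          + shift(p, 0, -1) * (shift(q, 1, -1) - shift(q, -1, -1)))
    j3 = (shift(p, 1, 1) * (shift(q, 0, 1) - shift(q, 1, 0))
          - shift(p, -1, -1) * (shift(q, -1, 0) - shift(q, 0, -1))
          - shift(p, -1, 1) * (shift(q, 0, 1) - shift(q, -1, 0))
          + shift(p, 1, -1) * (shift(q, 1, 0) - shift(q, 0, -1)))
    return (j1 + j2 + j3) / (12.0 * d * d)

def laplacian(p, d):
    """Standard five-point Laplacian; iterate it for the biharmonic term."""
    return (shift(p, 1, 0) + shift(p, -1, 0) + shift(p, 0, 1)
            + shift(p, 0, -1) - 4.0 * p) / (d * d)
```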

In this study, we adopt the BB experimental approach (Denis et al., 2002b). The first step consists of running a global high-resolution BB model to produce a high-resolution reference dataset (qi*, i = 1,2). Then, the small scales existing in that reference dataset are filtered out to generate a low-resolution dataset (q̃i, i = 1,2). The filtering technique consists in applying a two-dimensional Fourier filter to qi*, and the ratio between the horizontal resolutions of q̃i and qi* is hereafter referred to as α. The q̃i fields can be seen as analyses, reanalyses or coarse-resolution GCM outputs. They are used to initialize and drive another instance of the QG model (‘Little Brother’) running at the same resolution as the BB. This mimics the driving of a high-resolution GCM or LAM by coarse-resolution atmospheric fields. We will later refer to the high-resolution GCM or LAM as ‘Little Brother’ (LB). The BB reference dataset (before filtering) qi* contains the small scales against which the LB small scales are then validated. This experimental framework is set up to evaluate the ability of the LB to reproduce accurately the fine-scale features present in the BB reference simulation.
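The exact form of the two-dimensional Fourier filter is not specified in the text; a plausible minimal sketch in Python, in which the cut-off keeps only the wavenumbers representable on a grid α times coarser:

```python
import numpy as np

def lowpass(field, alpha):
    """Two-dimensional spectral low-pass filter emulating the coarse
    driving fields: zero all wavenumbers above alpha times Nyquist."""
    fk = np.fft.fft2(field)
    kx = np.abs(np.fft.fftfreq(field.shape[0]))   # in cycles per grid step
    ky = np.abs(np.fft.fftfreq(field.shape[1]))
    mask = (kx[:, None] <= 0.5 * alpha) & (ky[None, :] <= 0.5 * alpha)
    return np.real(np.fft.ifft2(fk * mask))
```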

Boundary conditions are periodic in the ‘global’ QG model. In the LAM, evaluating the Jacobian and iterated Laplacian requires values of ψi located in a so-called halo around the computational domain. These values are given by the ‘analyses’ ψ̃i. Inverting the PV means solving the linear system

Lψ1 + (ψ2 − ψ1)/2 = q1,  Lψ2 + (ψ1 − ψ2)/2 = q2    (14)

where L is the second-order finite-difference operator approximating the Laplacian. In the global model, system (14) is solved by performing a forward discrete Fourier transform, solving for each Fourier mode a 2 × 2 linear system, and performing a backward discrete Fourier transform.
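A minimal Python sketch of this inversion, written against the PV relations (10)-(11) as reconstructed above; splitting into barotropic and baroclinic modes reduces the 2 × 2 solve to two scalar divisions per mode (here done with the exact rather than the discrete Laplacian symbol, for brevity). The domain size argument, in Rossby radii, is an assumption matching section 3:

```python
import numpy as np

def invert_pv_periodic(q1, q2, L=24.0):
    """Invert q_i = lap(psi_i) + (psi_j - psi_i)/2 on a doubly periodic box."""
    n = q1.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    k2 = k[:, None] ** 2 + k[None, :] ** 2

    qbt = np.fft.fft2(0.5 * (q1 + q2))   # barotropic: qbt = lap(psibt)
    qbc = np.fft.fft2(0.5 * (q1 - q2))   # baroclinic: qbc = (lap - 1)(psibc)

    with np.errstate(divide="ignore", invalid="ignore"):
        psibt = np.where(k2 > 0, -qbt / k2, 0.0)   # zero-mean barotropic mode
    psibc = -qbc / (k2 + 1.0)

    psi1 = np.real(np.fft.ifft2(psibt + psibc))
    psi2 = np.real(np.fft.ifft2(psibt - psibc))
    return psi1, psi2
```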

In the limited-area model, system (14) is supplemented by the Dirichlet boundary conditions ψi = ψ̃i on the domain boundary. This enforces the continuity of the pressure field across the domain boundary, as physically required. We solve for the deviation δψi = ψi − ψ̃i. The right-hand side of system (14) then becomes qi − q̃i, where q̃i is the PV computed from ψ̃i. The deviation δψi satisfies the boundary condition δψi = 0. Therefore system (14) is solved by performing a forward discrete sine transform, solving for each Fourier mode a 2 × 2 linear problem, and performing a backward discrete sine transform. Finally, ψi = ψ̃i + δψi gives the desired ψi. Additionally, a relaxation over six points is applied at the boundaries, forming a Davies-type lateral sponge zone (we also performed the whole study without the sponge zone, without noticeable change to the results; not shown). Figure 1 shows an example of a horizontal cross-section of the PV field in layer 1 of the model at long time range (t > 20 in non-dimensional units). It evidences the presence of anticyclones and cyclones of typical size equal to a few non-dimensional units (i.e. a few Rossby radii in dimensional form).
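Correspondingly, a sketch of the limited-area inversion for the deviation δψi, again under the reconstructed relations (10)-(11); a type-I discrete sine transform diagonalizes the five-point Laplacian with homogeneous Dirichlet conditions. Interior-only arrays and a uniform grid spacing `dx` are assumptions:

```python
import numpy as np
from scipy.fft import dstn, idstn

def invert_pv_dirichlet(dq1, dq2, dx):
    """Invert the deviation PV with delta-psi = 0 on the boundary (LAM)."""
    n, m = dq1.shape
    # eigenvalues of the discrete 1-D five-point Laplacian for sine modes
    lx = -(2.0 - 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))) / dx**2
    ly = -(2.0 - 2.0 * np.cos(np.pi * np.arange(1, m + 1) / (m + 1))) / dx**2
    lap = lx[:, None] + ly[None, :]        # symbol of the discrete Laplacian

    qbt = dstn(0.5 * (dq1 + dq2), type=1)  # barotropic deviation
    qbc = dstn(0.5 * (dq1 - dq2), type=1)  # baroclinic deviation

    psibt = qbt / lap                      # lap < 0 for all modes: regular
    psibc = qbc / (lap - 1.0)

    dpsi1 = idstn(psibt + psibc, type=1)
    dpsi2 = idstn(psibt - psibc, type=1)
    return dpsi1, dpsi2
```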

Figure 1.

Horizontal cross-section of potential vorticity in layer 1 (q1).

2.3. Nudged version of the QG model

As we analyse later in more detail, the simulated fields ψi deviate rapidly from the reanalyses ψ̃i if the latter are used only to prescribe the boundary conditions. In order to prevent this drift, we use the nudging technique (or Newtonian relaxation) developed for assimilation purposes (Davies and Turner, 1977; Schraff, 1997; Yong et al., 1998; Vidard et al., 2003) and commonly available for dynamical downscaling purposes. The nudging technique consists of relaxing the model state towards the analyses by adding a non-physical term to the model equations. This nudging term is defined as the difference between the observations and the model solution, weighted by a nudging coefficient which is the inverse of the nudging time. After addition of the nudging term, Eqs (8) and (9) become

∂q1/∂t + J(ψ1, q1) + F1 = −ν∇⁴(∇²ψ1) + (q̃1 − q1)/τ    (15)
∂q2/∂t + J(ψ2, q2) + F2 = −ν∇⁴(∇²ψ2) − κ∇²ψ2 + (q̃2 − q2)/τ    (16)

where the nudging time τ is a freely tunable parameter. The shorter the time τ, the closer qi and ψi will be to q̃i and ψ̃i (i = 1,2), and hence the less accurate the small scales of qi will be.
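In code, the nudging amounts to a one-line modification of the PV tendency; a sketch, with `trend` the un-nudged right-hand side of Eqs (8)-(9) and `q_drive` the driving PV q̃i:

```python
def nudged_trend(q, q_drive, tau, trend):
    """PV tendency with the Newtonian relaxation of Eqs (15)-(16).

    The limit tau -> 0 pins the model to the driving fields, while
    tau -> infinity recovers the free-running model."""
    return trend(q) + (q_drive - q) / tau
```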

To quantify the predictability time-scale τp in the QG models, we compute the initial exponential error growth, yielding the first Lyapunov exponent. We consider a reference run ψi and a second run, called the perturbed simulation ψi′, which is almost identical except that its initial condition differs by a random and infinitesimal amount (Gaussian white noise of amplitude 10−3). We compute the total energy of the difference between the perturbed and reference stream functions:

E = ∬ {½(|∇δψ1|² + |∇δψ2|²) + ¼(δψ1 − δψ2)²} dx dy    (17)

where δψi = ψi′ − ψi.

The total energy E consists of the sum of the upper- and lower-layer kinetic energies (first term of Eq. (17)) and the potential energy (second term of Eq. (17)). Figure 2 displays a typical evolution of E as a function of time, in log scale along the ordinate axis. It clearly shows an exponential initial error growth (linear evolution in log scale) until t = 20 in non-dimensional units, followed by nonlinear saturation. The exponential phase of error growth, E(t) = E0 exp(2λt), defines the first Lyapunov exponent λ and the predictability time-scale τp = 1/λ.
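The exponent λ can be extracted by a least-squares fit of log E(t) over the initial growth phase; a sketch, with the saturation threshold t = 20 taken from Figure 2:

```python
import numpy as np

def predictability_time(t, E, t_sat=20.0):
    """Estimate tau_p = 1/lambda from E(t) = E0 * exp(2 * lambda * t)."""
    sel = t <= t_sat                                 # exponential phase only
    slope, _ = np.polyfit(t[sel], np.log(E[sel]), 1)
    return 1.0 / (0.5 * slope)                       # lambda = slope / 2
```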

Figure 2.

Evolution of total energy E of the difference between perturbed and reference stream functions as a function of normalized time t for β = 0.25 and κ = 0.5 (non-dimensional values).

Figure 3 shows the value of τp as a function of β and κ. As also observed by Vallis (1983), the predictability properties of the two-layer flow are rather subtly affected by β: Vallis (1983) argues that the energy cascade at low wavenumbers is slowed by a strong β, which increases predictability. The predictability time depends only weakly on κ. Similarly to Haidvogel and Held (1980), we set κ = 0.5 in the following and let β vary between 0.1 and 0.55.

Figure 3.

Contour plot of predictability time τp as a function of β and κ. The contours range between 0.14 and 0.34 with an increment of 0.02.

3. Downscaling using a high-resolution GCM

In this section, we use the same periodic domain for the BB and the LB (Figure 4). One must note that downscaling by nudging a high-resolution GCM towards a coarse-resolution GCM is of no practical use. The motivation here is to establish the relationship between the sensitivity to initial conditions and the nudging time in a context which is free of the technicalities associated with a LAM; these will be addressed in section 4.

Figure 4.

‘Big Brother’ experiment principle using a high-resolution GCM.

To quantify the ability of the downscaled LB field qi to reproduce the BB reference field qi* in layer i, we first evaluate the variance ratio of the LB to BB solutions, σi²/(σi*)² (see Eqs (21) and (22) below). This is a classical diagnostic for climate model evaluation.

A second approach, which corresponds to deterministic evaluation, consists of computing their normalized covariance ai given by

ai = ci/(σi*)²    (18)

which represents the slope of the linear regression between qi and qi* (i.e. qi ≈ ai qi*), and the correlation coefficient ri given by

ri = ci/(σi σi*)    (19)

with

ci = (1/(NxNy)) Σx,y (qi − ⟨qi⟩)(qi* − ⟨qi*⟩)    (20)
σi² = (1/(NxNy)) Σx,y (qi − ⟨qi⟩)²    (21)
(σi*)² = (1/(NxNy)) Σx,y (qi* − ⟨qi*⟩)²    (22)

where ⟨qi⟩ and ⟨qi*⟩ are the values of qi and qi* averaged over the whole model domain in layer i, and ci is their covariance. The quantities Nx, Ny are the numbers of grid points of the domain in the x (longitude) and y (latitude) directions. The quantities ai and ri represent the slope and the spread of the scatter-plot between qi* and qi. When ai and ri are close to 1, the LB accurately reproduces, at each time step and each grid point, the BB reference field in layer i. Conversely, a large departure from 1 indicates poor LB performance. Therefore, a crucial aspect of the estimation of the LB performance in simulating fine-scale features in the context of regional weather and climate modelling is the deterministic grid-point-to-grid-point comparison between the LB outputs qi and the BB reference field qi*. These skill scores are much more constraining than a comparison of climatological statistical diagnostics (Murphy and Epstein, 1989).
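These diagnostics reduce to a few array operations; a sketch for one layer and one snapshot, with `q_lb` standing for qi and `q_bb` for qi*:

```python
import numpy as np

def skill_scores(q_lb, q_bb):
    """Regression slope a (Eq. (18)) and correlation r (Eq. (19))
    between LB and BB fields, using the moments of Eqs (20)-(22)."""
    dlb = q_lb - q_lb.mean()
    dbb = q_bb - q_bb.mean()
    cov = (dlb * dbb).mean()                                  # Eq. (20)
    a = cov / (dbb ** 2).mean()                               # Eq. (18)
    r = cov / np.sqrt((dlb ** 2).mean() * (dbb ** 2).mean())  # Eq. (19)
    return a, r
```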

In the following, β = 0.25. The domain size is 24Rd × 24Rd and the number of grid points is 128 × 128. This implies that one Rossby radius spans 5.3 grid points. We run the LB model until t = 100, with nudging times τ ranging between 0.01τp and τp and resolution factors α between 1/2 and 1/8. Figure 5 shows horizontal cross-sections in layer 1 at the end of the simulation of the reference (BB) PV q1*, of the coarse-resolution driving PV q̃1 obtained by spatially filtering q1* with α = 1/3, and of the simulated (LB) PV q1 for τ = 0.01τp, τ = 0.4τp and τ = τp, respectively.

Figure 5.

Horizontal cross-sections of the reference (BB) PV in layer 1, q1* (a, b, c), of the coarse-resolution driving PV in layer 1 obtained by spatially filtering the reference PV field, q̃1 (d, e, f), and of the simulated (LB) PV in layer 1, q1 (g, h, i), for τ = 0.01τp (a, d, g), τ = 0.4τp (b, e, h) and τ = τp (c, f, i). The ratio α between the horizontal resolutions of q̃1 and q1* is 1/3.

A small value of the nudging time (τ = 0.01τp) forces the model to stick to the coarse-resolution driving fields: indeed, comparing Figure 5(g) (q1) with Figure 5(d) (q̃1) and with Figure 5(a) (q1*), we observe that the model reproduces perfectly the large-scale vortices but not the fine-scale structures. On the other hand, for τ = τp (Figure 5(c, f, i)), the model is able to reproduce neither the large-scale nor the fine-scale features. The nudging time τ = 0.4τp seems visually to be the optimum, since the model (Figure 5(h)) best agrees with the reference (Figure 5(b)).

In order to evaluate more quantitatively the quality of the simulation of the fine- and large-scale features, the LB and BB PV fields are decomposed into large-scale parts (qi,ls and qi,ls*) and small-scale parts (qi,ss and qi,ss*) by application of low-pass and high-pass Fourier filters, the cut-off wavelength being the resolution of the field q̃i driving the simulation.
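Using the spectral low-pass filter sketched in section 2.2 (the `lowpass` function above), the decomposition reads:

```python
def split_scales(q, alpha):
    """Split q into the scales resolved by the driving fields (low-pass
    at the coarse resolution) and the complementary small scales."""
    q_ls = lowpass(q, alpha)   # lowpass as sketched in section 2.2
    q_ss = q - q_ls            # high-pass part: the scales to regenerate
    return q_ls, q_ss
```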

Figure 6 displays the variance ratio of LB to BB solutions averaged over 80 ≤ t ≤ 100, as a function of the nudging time for the total field, large-scale field and small-scale field.

Figure 6.

LB to BB ratio of the time-averaged layer 1 PV variance, as a function of the nudging time τ normalized by τp (log scale), for the total field (a), the large-scale field (b) and the small-scale field (c). The different curves correspond to different resolution ratios α.

We note that for τ between 0 and 0.5τp, the amount of small scales increases until it reaches a maximum. It then decreases for τ between 0.5τp and 6τp, down to a value of 0.2, and finally increases again up to a value of about 1. For small nudging times, the production of small-scale features is inhibited because, by construction, the model is forced to stick to the driving fields. Intuitively, we would expect the production of fine-scale features to be associated with a variance ratio of LB to BB solutions increasing with τ until it reaches 1 for large nudging times. This is not the case for τ between 0.5τp and 6τp, where the variance ratio of LB to BB solutions decreases until it reaches a minimum. In fact, a similar behaviour is observed for the large-scale field; hence nudging in this range of τ hinders the production of large-scale features too. To check the robustness of this result, a similar test was performed with the Lorenz model, which is much simpler than the QG model but still chaotic. The calculations showed a strongly reduced variance of the nudged model as well (not shown). The small variance of the small-scale LB flow compared to BB is probably the result of the weak large-scale LB flow. Finally, as expected, for very large values of τ, nudging no longer has any effect, and both the LB small- and large-scale fields have the same variance as in the BB. One can note that the variance ratio of LB to BB solutions slightly exceeds 1 for large τ. This is due to the uncertainty associated with the variance estimates for the two different fields. In the following we restrict the discussion to values of τ ranging between 0 and τp.

Figure 7 displays scatter-plots between the simulated (LB) and reference (BB) PV fields for 80 ≤ t ≤ 100 in layer 1, for the large scales (q1,ls and q1,ls*) and for the small scales (q1,ss and q1,ss*), for τ = 0.01τp, τ = 0.4τp and τ = τp.

Figure 7.

Scatter-plots between the simulated (LB) and reference (BB) PV fields in layer 1 for the large scales (q1,ls versus q1,ls*) (a, b, c) and for the small scales (q1,ss versus q1,ss*) (d, e, f), for τ = 0.01τp (a, d), τ = 0.4τp (b, e) and τ = τp (c, f), respectively. The ratio α between the horizontal resolutions of q̃1 and q1* is 1/3.

For a small nudging time τ = 0.01τp (Figure 7(a)), the LB large scale is accurately reproduced compared to the reference (BB), and the covariance and correlation coefficients a1,ls and r1,ls are both very close to 1. However, the LB small-scale features are very poorly reproduced, as quantified by the covariance and correlation coefficients a1,ss = 0.13 and r1,ss = 0.56 (Figure 7(d)). When τ is large, i.e. τ = τp, the error on the large scale increases significantly, with a1,ls = 0.53 and r1,ls = 0.67 (Figure 7(c)), and induces large errors at the fine scale, with a1,ss = 0.01 and r1,ss = 0.02 (Figure 7(f)). The use of the intermediate value of the nudging time τ = 0.4τp minimizes the error at both the small and large scales (Figure 7(b, e)): the covariance and correlation coefficients for the large scale, a1,ls and r1,ls, are again close to 1, while for the small scale a1,ss = 0.76 and r1,ss = 0.96.

Figure 8 displays the covariance (a1) and correlation (r1) coefficients computed in layer 1 for the small (ss subscript) and the large scale (ls subscript) as a function of the nudging time normalized by the predictability time (τ/τp) using various resolution factors α.

Figure 8.

Covariance (a1) (a, c) and correlation (r1) (b, d) coefficients computed in layer 1 for the large (ls subscript, a, b) and small scales (ss subscript, c, d) as a function of the nudging time normalized by the predictability time (τ/τp) using various resolution factors α.

The figure shows that, for the large scale and for low to intermediate values of τ (from 0 to about 0.5τp), the covariance and correlation coefficients a1,ls and r1,ls remain very close to 1 because the relaxation is strong enough to prevent the LB simulations (qi) from departing significantly from the driving large-scale fields (q̃i) (Figure 8(a, b)). However, one can notice that for τ < 0.1τp the small-scale field is very poorly simulated by the LB, with a1,ss and r1,ss reaching maximum values between 0.1 and 0.5 (Figure 8(c, d)). Indeed, for such low values of τ the production of small-scale structures is inhibited because the nudging constrains the LB fields to match the coarse-resolution driving fields too tightly. As the nudging time increases (τ > 0.5τp), the model progressively deviates from the forcing large-scale fields. The LB does produce small-scale patterns, but they poorly match those present in the BB reference fields (Figure 5(i)): a1,ls and r1,ls decrease below 0.6 and 0.8, respectively, for the large scales, and a1,ss and r1,ss tend to zero for the small scales. The departure of the LB large-scale fields from the coarse-resolution driving fields induces small-scale patterns that may be statistically representative of the mean regional climate (for τ/τp > 10; see Figure 6) but differ from the reference on a grid-point-to-grid-point basis. An optimum is eventually reached for intermediate values of τ, ranging between 0.3τp and 0.6τp. Figure 8 also shows that the LB simulations deteriorate when the resolution factor α decreases, i.e. when the ratio between the resolution of the driving large-scale fields and the LB resolution increases. This deterioration occurs because the driving fields become too coarse to represent accurately even the large-scale atmospheric circulation. This is especially critical when the driving-field resolution becomes coarser than the Rossby deformation radius, corresponding roughly to α < 1/8. Quantitatively, for τ = 0.4τp, a1,ss and r1,ss decrease from 0.73 and 0.95 for α = 1/2 to 0.23 and 0.50 for α = 1/8.

4. Downscaling using a high-resolution LAM

In this section, the main difference from section 3 is that we consider a limited-area model nested within the global model, so that the size L of the LAM domain comes into play. The experimental framework of a perfect model used in this section is similar to that of the previous section, but with a small modification (Figure 9): the first step consists of running the global high-resolution QG model, referred to as ‘Big Brother’ (BB), to produce a high-resolution reference dataset (qi*, i = 1,2). Then, the small scales existing in that reference dataset are filtered out to generate the low-resolution dataset (q̃i, i = 1,2) needed to drive the nested LAM. The q̃i fields are used as initial and boundary conditions of the LAM, and to drive the LAM when nudging is used. The reference dataset (before filtering) qi* contains the small scales against which the LAM small scales in the nested domain (qi, i = 1,2) are then validated. The performance of the LAM is quantified by the same parameters (ai, ri) as in the previous section, but over the nested domain only. We will later refer to the LAM as the ‘Little Brother’ (LB).

Figure 9.

‘Big Brother’ experiment approach using a LAM.

The effect of the LB domain size on predictability is first investigated. For this, the LAM is driven at its boundaries by the perfect data ψi* and initialized with perturbed data as described in section 2.3. No nudging is used. In Table I, the predictability time τp in normalized coordinates is reported as a function of the normalized domain size L/Rd.

Table I. Predictability time τp as a function of the normalized domain size L/Rd.

Nx × Ny   64 × 64   80 × 80   96 × 96   112 × 112
L/Rd      12        15        18        21
τp        18        15        13        10

We first observe that a finite predictability time is found for all the domain sizes we consider. If the simulation were completely controlled by its boundaries, the initial discrepancy between the reference and the perturbed simulations would decay as time passes and eventually vanish. Conversely, the finite predictability time of the LB implies that the control exerted by the boundary conditions is incomplete, in the sense that the trajectory followed by the model depends significantly on its initial condition, and not only on the information provided at the boundaries. We must, however, make clear here that the slower growth of initial errors in a small-domain LAM results from the artificial constraints exerted by the lateral boundary conditions, and that it does not reflect a greater intrinsic predictability of the modelled atmosphere. Nevertheless, for consistency with the previous section, we keep referring to predictability when discussing the sensitivity to initial conditions.

The predictability decreases as the domain size of the LB increases. We interpret this dependence qualitatively in two ways. First, the characteristic size of a potential vorticity cyclone or anticyclone is comparable to the Rossby radius Rd. As the domain size decreases, fewer cyclones and anticyclones fit into it; therefore fewer interactions take place between the different cyclones and anticyclones, and the predictability of the model increases. Second, as the domain size L decreases, an individual air parcel swept by the mean wind U spends a decreasing amount of time τadv = L/U in the domain. Assuming that errors exit the domain as well, a small domain leaves them less time to develop than a large domain. Small domains can therefore be expected to lead to a more predictable system. Indeed, Vukicevic and Paegle (1989) show that if the domain is small enough the sensitivity of the forecast to small initial uncertainties is low. Similarly, Leduc and Laprise (2009) analysed the sensitivity of regional climate modelling to the domain size and showed that the small-scale stationary patterns improve in spatial correlation when the domain size is reduced. Conversely, Vannitsem and Chomé (2005) show that for a large domain a small error in the initial conditions leads to different simulations. The study of Alexandru et al. (2007) suggests that a reduction of the domain size generally results in a significant reduction of LAM internal variability. Finally, Lucas-Picher et al. (2008) have computed the time spent by air parcels in the domain of an RCM; they find a linear relation between the spatial distribution of the internal variability and the residency time. These previous results are fully consistent with a small domain having a larger predictability, and with our postulated relationship between predictability and internal variability. Nutter et al. (2004) also show that the loss of dispersion within an ensemble of simulations is greater on smaller domains because features are advected more quickly from one side to the other.

Moreover, we have observed a complete suppression of the sensitivity to initial conditions for domains of size L < 12Rd (i.e. smaller than 64 × 64 grid points). In an idealized set-up similar to ours, Nutter et al. (2004) address sensitivity to initial conditions by considering the internal variability within an ensemble of limited-area one-layer QG simulations on the β-plane. They also find that internal variability is smaller for small domains, with a significant reduction for a LAM domain of size 1500 km (i.e. one Rd). Note that, owing to their use of a barotropic model instead of a baroclinic model like ours, we do not expect the LAM domain size below which sensitivity to initial conditions disappears to be the same.

We now consider nudging in a small domain of size L = 12Rd for the LB, within a global domain of 24Rd. The value of the predictability time τp is then 18 (Table I). Figure 10 displays the covariance and correlation coefficients computed in layer 1. The use of only 64 × 64 grid points makes the estimates of the covariance and correlation coefficients much noisier, especially for α < 1/4. Nevertheless, one can see that, except for very low values of τ (< 0.1τp), the values of the covariance and correlation coefficients depend only weakly on the nudging time τ, so the quality and performance of the LB simulations do not depend on the strength of the relaxation towards the coarse-resolution driving fields.

Figure 10.

Covariance (a1) (a, c) and correlation (r1) (b, d) coefficients computed in layer 1 for the smallest LB domain (Nx × Ny = 64 × 64, i.e. L = 12Rd; Table I), for the large (ls subscript; a, b) and small scales (ss subscript; c, d), as a function of the nudging time normalized by the predictability time (τ/τp), using various resolution factors α.

We finally consider nudging in a larger domain of size L = 18Rd for the LB, within a global domain of 24Rd. The value of the predictability time τp is then 13 (Table I). Figure 11 displays the covariance and correlation coefficients computed in layer 1. The use of 96 × 96 grid points allows a more accurate estimate of the covariance and correlation coefficients, even though it is still noisier than with 128 × 128 grid points. The shape of the curves as a function of τ/τp is similar to that obtained with the high-resolution GCM, with a performance optimum for τ ≈ 0.4τp, but with slightly more spread between the various runs for different values of α (Figure 11). We see that, compared to the high-resolution GCM, the large scale is slightly less accurately reproduced. Also, the degradation of the performance with respect to α is larger, and for α < 1/4 even the modest degradation of the LAM performance at the large scale prevents the production of correct fine-scale features.

Figure 11.

Same as Figure 10, but for the LB domain Nx × Ny = 96 × 96, i.e. L = 18Rd (Table I).

5. Discussion

When performing a dynamical downscaling experiment, one expects to produce data with two distinct improvements over the large-scale information. The first expectation is that the downscaled data will benefit from better-resolved forcings like orography and surface fluxes, which depend on spatially variable soil properties. The second expectation is that, even with orography and other boundary conditions unchanged, a simulation with higher resolution will produce smaller-scale eddies explicitly and take into account more accurately their contribution to the regional-scale averages and variability. Whether these expectations are realized can be investigated following the BB experimental approach, which has been used mostly with complex, realistic models and rarely with idealized models. With our idealized methodology, we are able to address the second expectation independently from the first one. Furthermore, using an idealized model which nevertheless allows a good representation of the atmospheric dynamics driven by the baroclinic instability ensures that the results we obtain are a consequence of the resolved dynamics and not of some specific physical parameterization.

Since climate models rapidly forget their initial conditions, it would seem that the concept of predictability is relevant to short-term weather forecasting but not to long-term regional climate modelling. This would be the case if regional climate were studied using a standalone global model, but regional climate modelling is mostly done using LAMs. A LAM must maintain consistency between the atmosphere it models within its domain and the fields that drive it at the boundary. This will not happen if the LAM is sensitive to initial conditions, since this implies that it is insufficiently controlled by its lateral boundaries. Our results show that LAMs with a domain larger than a few Rossby radii are sensitive to initial conditions even for perfect lateral boundary conditions, and therefore must be nudged in order to maintain their consistency with the boundary data. Furthermore, the behaviour of the nudged model, at both large and small scales, depends primarily on the ratio of the nudging time to the predictability time.

When the nudging time is very small compared to the predictability time, the model reproduces the large scales used to force it. On the other hand, if the nudging time is close to or larger than the predictability time, both the large and the fine scales are poorly reproduced compared to the reference fields. As a result, the internal variability of the model strongly depends on nudging. This technique clearly improves the model's capacity to reproduce the reference fields, used here as a surrogate for reality. The best result is obtained with a nudging time close to half the predictability time (τ = 0.4τp). Although the reconstructed fine scales are significantly damped, they are surprisingly well correlated to their reference values in a deterministic sense, not a statistical sense.

For the LAM, our predictability study shows that the predictability of the system decreases as the domain size increases. We speculate that when the LAM domain is sufficiently large the atmospheric system has more active parts in interaction, which increases its chaotic character and limits its predictability. Conversely, when the domain is small, the short time spent by air parcels in the domain does not allow initially present errors to develop. With nudging, the LAM capacity to reconstruct the small scales is similar to that of a high-resolution GCM. In particular, a good correlation between the reconstructed small scales and the reference requires an increase in resolution α⁻¹ of less than 2, or possibly 3 (Figures 8 and 10). Even if deterministic scores are not deemed important, statistics such as the variance are strongly distorted in our experiments when α⁻¹ exceeds these values. Current nested regional climate models frequently employ grid meshes almost an order of magnitude finer than that of the GCM serving to drive them. In such a case, if the LAM domain is large and nudging cannot be avoided, indiscriminate nudging will have a strong detrimental effect on the modelled small scales. Scale-selective nudging, such as spectral nudging, might then be a requirement rather than an interesting option.

To summarize, we have evaluated the ability of the dynamical downscaling framework to reconstruct the small scales of the dynamics given its large scales. We find that, for a moderate ratio of resolved scales, a large to global model domain and an adequate nudging time, the small scales of the downscaled field achieve a substantial correlation with the small scales of the reference field. For a small LAM domain, the boundary conditions control the atmospheric dynamics sufficiently, and little sensitivity to the nudging time is found. In this work we focused on indiscriminate nudging; spectral nudging is the object of ongoing work. We must stress that we use both statistical and deterministic skill scores to evaluate the RCM. For climate modelling it is the statistics of the simulations that matter, not the deterministic skill. However, in the Coordinated Regional Climate Downscaling Experiment (CORDEX) programme of the World Climate Research Programme (WCRP) (Giorgi et al., 2009), a first phase is the downscaling of meteorological reanalyses over 20 years (1989–2008) using RCMs. One key aspect of this phase is the evaluation of the RCMs. In this context, verifying analyses on a time-by-time, point-by-point basis against gridded observations from satellites or reanalyses also makes sense, so the two ways of evaluating RCMs should be used in a complementary way.

Finally, it must be kept in mind that the simple nature of the QG model does not allow the results to be transposed directly to real regional modelling. More work needs to be conducted with RCMs integrating the full complexity of atmospheric processes, but this is left for the future.

Acknowledgements

We are thankful to F. Codron and R. Laprise for fruitful discussions, and to the two referees who helped to improve the manuscript significantly. This research received funding from the ANR-MEDUP project, the GIS ‘Climat-Environnement-Société’ MORCE-MED project, and ADEME (Agence de l'Environnement et de la Maîtrise de l'Energie) contract 0705C0038.
