In order to investigate questions concerning changes in the atmosphere and other components of the Earth system, it is desirable to have a seamless modelling system that operates across a wide range of time-scales and horizontal resolutions. A low-resolution configuration of the model would greatly assist in addressing scientific questions whose long time-scales or complexity make them computationally demanding, at a cost that cannot be sustained at higher resolutions. However, to be confident that results are robust to changes in resolution, it is essential that such models represent the same underlying fundamental processes as their higher-resolution counterparts.
Some resolution-dependent biases are found to grow (notably when horizontal resolution decreases below ∼150 km) as a result of a lack of transient eddy kinetic energy (TEKE). The advection scheme used by many current general circulation models follows the semi-Lagrangian (SL) approach. Because of the need for interpolation to the departure point, the scheme is highly diffusive when low-order interpolation is used. We analyse how the high diffusivity of SL schemes leads to a lack of TEKE, which inhibits the development of mid-latitude variability phenomena such as synoptic cyclones or blocking events. We also provide some examples of fields for which the performance of the low-resolution model remains acceptable.
General circulation models (GCMs) are being used to address a wide variety of scientific questions related to changes in the atmosphere and the wider Earth system. The desire to explicitly resolve as many processes as possible drives increases in model resolution for flagship simulations. However, for some applications, such as palaeoclimate studies or ensembles of GCM simulations coupled to other Earth subsystems such as the biosphere, chemistry or aerosols, the computational cost of running at these high resolutions is prohibitive.
There has been increasing interest in the development of a hierarchy of models under the same scientific framework (e.g. Held, 2005). Model versions within the hierarchy are developed with different aims: simple models such as energy-balance, aquaplanet or 'dynamical core only' configurations are developed to study particular climate or meteorological phenomena in a simplified environment; full GCMs with different resolutions and levels of complexity are used for climate prediction studies under different assumptions (e.g. HadGEM2 Development Team, 2011); and high-resolution numerical weather and climate prediction models are designed to produce accurate advice for policy makers and public bodies. For the advice from one model member of the hierarchy to be as robust as possible, it is important to minimize the impact that changes in resolution and complexity have on the results from that member. Hence a key goal in the development of such a model hierarchy within a single physical modelling framework is to build models that are 'traceable' to each other. We define this to mean that all models within the hierarchy can be shown to represent the same fundamental processes that control the climate and future climate changes. For lower-resolution models, biases may emerge from poorly resolved driving mechanisms in dynamics or physics (e.g. high diffusivity destroying small-scale eddies) which vanish once the resolution surpasses the 'physical traceability barrier', the threshold beyond which these processes are accurately represented. To retain traceability, targeted changes may then be required, such as those described in this paper (the use of a higher-order interpolation to the departure point, and vorticity confinement), which aim to improve the mid-latitude variability of coarser-resolution models relative to higher resolutions.
A guiding principle within our definition of the hierarchy is that changes are only made as a consequence of resolution/time-scale affecting the representation of fundamental processes, not on the basis of climate performance (see also Senior et al., 2011; Brown et al., pers. comm., 2012).
A new generation of model hierarchy is under development at the Met Office. The Met Office Unified Model (MetUM) using the GA3.0 configuration (Walters et al., 2011) has been developed as a predictive tool for weather to climate time-scales (Arribas et al., 2011; Hewitt et al., 2011; Walters et al., 2011). Three model versions with different horizontal resolutions have been built for climate applications; high-resolution N216 (∼60 km in the mid latitudes) addresses seasonal to decadal scientific problems; the middle-resolution N96 (∼135 km) is useful for ensemble and centennial studies; and the low-resolution N48 (∼270 km) is designed to investigate Earth system and palaeoclimate topics. MetUM is a grid-point model whose horizontal resolution is denoted by N, half the number of east–west grid points, which roughly determines the number of nodes of the shortest wave on a longitudinal circumference, thus allowing approximate comparison with the truncation scale. In order to make the grid box isotropic in the mid latitudes, the number of grid points North–South is 3N/2 + 1. Future development of the MetUM climate hierarchy is likely to include even higher resolutions.
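The N grid-naming convention described above can be made concrete with a short sketch (ours, not Met Office code). The 111 km-per-degree conversion evaluated at 45° latitude is an assumed approximation used only for illustration:

```python
import math

def metum_grid(n):
    """Grid dimensions and approximate mid-latitude spacing for a MetUM
    resolution label N = n (illustrative sketch of the convention above)."""
    nx = 2 * n                  # N is half the number of east-west grid points
    ny = 3 * n // 2 + 1         # makes grid boxes roughly isotropic in mid latitudes
    dlon = 360.0 / nx           # east-west spacing in degrees
    # a degree of longitude spans roughly 111*cos(lat) km; evaluate at 45 degrees
    dx_km = dlon * 111.0 * math.cos(math.radians(45.0))
    return nx, ny, dx_km
```

With this sketch, N48, N96 and N216 give mid-latitude spacings close to the ∼270, ∼135 and ∼60 km quoted above.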
Like many GCMs, MetUM uses a semi-Lagrangian (SL) and semi-implicit advection scheme (Davies et al., 2005) within its dynamical core. Despite the important benefits of SL dynamics in the representation of the advection plus the fact that the CFL criterion is no longer a constraint on stability, several studies have shown that low-resolution models with SL dynamics tend to have less transient eddy kinetic energy (TEKE) than ones with Eulerian advection, as a result of the high dissipation of kinetic energy due to the interpolation to the departure points (Chen et al., 1997; Stratton, 2004). Higher-order interpolation schemes increase the TEKE of the dynamical core. When resolution increases up to N96, TEKE is closer to convergence and diffusivity is much less strongly dependent on the horizontal resolution, although it is still quite sensitive to the interpolation scheme used (see Figure 2 in Stratton, 2004).
The lack of TEKE has a profound impact on the representation of mid-latitude variability, which encompasses events such as cyclones and blocks—vital components of the general circulation of the atmosphere and dominant in determining the local weather in the mid latitudes.
Mid-latitude cyclones transport heat, momentum and water vapour horizontally and vertically, thereby influencing the net poleward transport of energy. Also the mobile pressure systems which together make up the storm tracks are a major influence on the mid-latitude weather and climate, since these control short-term precipitation, cloudiness and radiation (Bengtsson et al., 2006; Greeves, 2006). With climate change possibly affecting the storm tracks (Meehl et al., 2007), climate models capable of accurately representing the storm tracks are needed.
In this study, we investigate the representation of some of the fundamental processes in the lowest-resolution (N48) model within the MetUM model hierarchy. In particular, we look into the major drawback of the low-resolution model, the high diffusivity caused by the interpolation to the departure point of the SL scheme, and determine how this problem affects the representation of mid-latitude phenomena. We detail the impact of horizontal resolution on the TEKE of the model across the hierarchy, together with its effect on mid-latitude variability. Despite the lack of TEKE and the consequent misrepresentation of dynamical features in the low-resolution simulation, some examples of processes, such as clouds and precipitation, that are less sensitive to resolution are provided.
In order to compensate for the high diffusivity of the low-resolution model, we test both a higher-order interpolation scheme and a vorticity confinement (VC) scheme. The latter is a parametrization designed to add momentum tangential to the relative vorticity contours, thus offsetting the diffusion of vorticity features such as fronts or cyclones. Following a process-based approach we make use of different metrics to describe and quantify how the extra input of TEKE added by these solutions triggers the development of mid-latitude systems such as blocks and cyclones, and ultimately improves the poleward transport.
A brief description of the SL advection scheme as well as the different interpolation methods used is detailed in section 2; the vorticity confinement scheme is explained in section 3; a brief explanation of the different techniques to quantify the mid-latitude variability and an explanation of the methodology employed are provided in section 4. A preliminary study of the effects of VC on an idealized case is described in section 5. The main results of the impacts of the low-resolution high diffusivity and the usefulness of the proposed solutions on energetics, synoptic systems, blocking and poleward heat transport are reported in section 6. The possible uses of a traceable low-resolution version are discussed in section 7. Finally, the concluding remarks of the study are presented in section 8.
2. Semi-Lagrangian advection method
Semi-Lagrangian advection is based on the interpolation of fields at a departure point, most often found using a backward Lagrangian trajectory. Consider the first-order prognostic equation of a scalar field F with a source term Ψ:

DF/Dt ≡ ∂F/∂t + u·∇F = Ψ.    (1)

This equation may be integrated between times tn = nΔt and tn+1 = tn + Δt following the parcel of air that arrives at grid point xa at time tn+1. The grid point xa is called the arrival point. The location of the parcel at time tn is represented by xd and called the departure point of the parcel, which is generally not a grid point (see Figure 1). The change in F between times tn and tn+1 is simply the integral of Ψ along the trajectory over the relevant time interval:

Fn+1 = Fdn + Δt Ψ̄,    (2)

where Fn+1 ≡ F(xa, tn+1), Fdn ≡ F(xd, tn), and Ψ̄ denotes the average of Ψ along the trajectory.
Equation (2) is an exact integral of Eq. (1); it involves no truncation error. In practice, errors are inevitably introduced via the estimation of the departure point xd, of the departure-point value Fdn and of the trajectory time average Ψ̄. These estimates require interpolation and representation (but no differentiation). To obtain accurate results from an SL integration scheme it is necessary to choose the order of interpolation carefully: interpolation using higher-degree polynomials is more accurate and gives much less damping; on the other hand, it carries additional computational cost.
2.1. Interpolation

Linear interpolation is adequate for the terms used in the evaluation of the trajectory, but more accurate interpolation is essential for the terms evaluated at the departure point (Staniforth and Côté, 1991). Cubic interpolation in three dimensions is expensive; fortunately, a quasi-cubic interpolation was found to give essentially equivalent results for the ECMWF weather forecast model at T213, equivalent to N140 (Ritchie et al., 1994). Quintic (fifth-order) interpolation is the highest order used in MetUM. It has a positive impact on accuracy but at additional cost: at N48 resolution it is about 8% more expensive than quasi-cubic interpolation.
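The dependence of accuracy on interpolation order can be illustrated with a small, self-contained sketch (ours, not the MetUM code): interpolating a marginally resolved wave to an off-grid departure point, the error shrinks rapidly as the Lagrange polynomial degree increases from linear to cubic to quintic.

```python
import numpy as np

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# A marginally resolved wave (six points per wavelength) on a unit grid.
wave = lambda x: np.sin(2 * np.pi * np.asarray(x) / 6.0)
x_dep = 0.3          # a departure point falling between grid points 0 and 1
stencils = {"linear": [0.0, 1.0],
            "cubic": [-1.0, 0.0, 1.0, 2.0],
            "quintic": [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]}
errors = {name: abs(lagrange_interp(np.array(pts), wave(pts), x_dep)
                    - wave(x_dep))
          for name, pts in stencils.items()}
```

For this wave the linear estimate is several times less accurate than the cubic one, and the quintic stencil is more accurate still, consistent with the damping behaviour described above.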
2.2. Stability and accuracy
Unlike Eulerian advection schemes, the maximum time step for SL advection schemes is not limited by the maximum wind speed. This makes it feasible to stably integrate with Courant numbers (C = |U|Δt/Δx) greater than unity (Staniforth and Côté, 1991).
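A minimal one-dimensional example (our sketch, not the MetUM code) shows both properties at once: the scheme remains stable for a Courant number above one, while the linear interpolation at the departure points damps the field rather than amplifying it.

```python
import numpy as np

def sl_step(f, courant):
    """One semi-Lagrangian step for a periodic 1-D field f advected by a
    constant wind with Courant number C = U*dt/dx, using linear
    interpolation at the departure points."""
    n = f.size
    dep = (np.arange(n) - courant) % n       # departure points in index space
    i0 = np.floor(dep).astype(int) % n
    w = dep - np.floor(dep)                  # linear interpolation weight
    return (1.0 - w) * f[i0] + w * f[(i0 + 1) % n]

f0 = np.sin(2 * np.pi * np.arange(64) / 64)
f = f0.copy()
for _ in range(100):
    f = sl_step(f, 1.5)                      # C = 1.5 > 1, yet the step is stable
```

After 100 steps the wave has been noticeably damped by the interpolation but its amplitude has not grown, unlike an explicit Eulerian scheme, which would be unstable at this Courant number.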
2.3. Semi-Lagrangian scheme in MetUM
An SL dynamical core was implemented in the MetUM in 2002 (Davies et al., 2005). Its main features are:
SL advection for all prognostic variables (except for density), with conservative and monotone treatments of tracers;
Eulerian treatment of the continuity equation for exact mass conservation;
optionally, different orders of interpolation up to quintic Lagrange. Interpolations to departure points are normally quasi-cubic, but for moisture variables quintic interpolation is used in the vertical to prevent the tropical lower stratosphere from becoming too dry.
3. Vorticity confinement
The vorticity confinement (VC) scheme is a simple parametrization developed by Steinhoff and Underhill (1994) to counteract the diffusion of relative vorticity (henceforth referred to simply as vorticity) (Shutts and Allen, 2007). In practical terms, VC adds momentum tangential to the relative vorticity contours on a model level surface.
Although the action of VC may distort the radial distribution of vorticity within a vortex, its role is to help maintain the integrity of vortex cores against the action of numerical diffusion.
As proposed by Steinhoff and Underhill (1994), VC acts through an extra term on the right-hand side of the momentum equation for the horizontal components, as illustrated here for the shallow-water equations:

∂v/∂t + (v·∇)v = −∇ϕ − D + ε |ζ| (n̂ × k),    (3)

where n̂ is the normalized horizontal gradient of the vorticity ζ; D is a diffusion term representing the effect of numerical approximation; ∇ϕ is the pressure gradient; ε is the tangential speed of the VC (see below); and k is the vertical unit vector.
The VC term creates a horizontal force that is parallel to horizontal contours of vorticity and has a magnitude proportional to the modulus of the vorticity. The force is rotated clockwise at a right angle from the direction of the vorticity gradient. Figure 2 illustrates how the VC velocity increments help to invigorate moving vorticity structures; the increments are plotted as arrows on top of vorticity contours. Southwest of Iceland there is a positive vorticity centre (a storm) affected by the VC velocity increment. Its lower side is accelerated eastwards and its upper side westwards; its east side is accelerated northwards and its west side southwards. Combined, these contributions increase the positive (counterclockwise) rotation of the eddy, enhancing its rotational strength.
The parameter ε controls the strength of the confinement term and acts as a type of anti-diffusive velocity directed up the vorticity gradient. In our experiments, we have set an empirical value of ε = 0.6 as a first estimate. Some early runs, in which ε was proportional to the amount of kinetic energy dissipated, proved to be very unstable for the model. Future versions will explore different formulations for ε whose value is dependent on the local flow, such as described by Hahn and Iaccarino (2009).
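Our reading of this increment can be sketched as follows (an illustrative implementation, not the Met Office code): a tendency of magnitude ε|ζ| directed along the normalized gradient of |ζ| rotated 90° clockwise, which for a positive vortex reproduces the accelerations described above for Figure 2.

```python
import numpy as np

def vc_increment(zeta, dx, eps=0.6):
    """Velocity tendencies (du, dv) from a vorticity-confinement term.
    zeta is a 2-D relative-vorticity field, zeta[j, i] with j the y-index."""
    gy, gx = np.gradient(np.abs(zeta), dx)   # gradient of |zeta|
    mag = np.hypot(gx, gy)
    mag = np.where(mag == 0.0, 1.0, mag)     # avoid division by zero
    nx, ny = gx / mag, gy / mag              # unit vector n-hat, up the gradient
    du = eps * np.abs(zeta) * ny             # (nx, ny) rotated 90 deg clockwise
    dv = -eps * np.abs(zeta) * nx            #   is (ny, -nx)
    return du, dv

# a compact positive (cyclonic) vortex
x = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, x)
zeta = np.exp(-(X**2 + Y**2) / 0.1)
du, dv = vc_increment(zeta, x[1] - x[0])
```

East of the vortex centre the increment points northwards and south of the centre it points eastwards, so the combined effect spins the cyclone up against diffusion, as described for Figure 2.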
To our knowledge, VC has never previously been applied in a GCM. However, this technique is widely used for several computational fluid dynamics applications such as aerodynamics engineering (Steinhoff et al., 2005) or computer graphics visualization (Selle et al., 2005).
4. Experimental set-up
A set of AMIP (Atmosphere Model Intercomparison Project; Gates and Boyle, 1999) climate simulations of the MetUM has been performed at N216, N96 and N48. This set was supplemented by two extra simulations at the low resolution, N48; in the first, quintic interpolation was used instead of the quasi-cubic interpolation of the control, and in the second, quintic interpolation was used together with the VC scheme (Eq. (3)). All these experiments ran for 20 years (1981–2001), forced by monthly means of sea surface temperature (SST) and sea ice. Because of the multiple requirements on the output from the computationally expensive N216 simulation, it uses a subtly different experimental design, with time-evolving greenhouse gas concentrations instead of the fixed 1980s values used in the N96 and N48 runs. The impact of this difference on the representation of TEKE is minimal and does not alter the results of the present study.
Model mid-latitude variability is evaluated through a set of metrics and compared with the Modern Era Retrospective analysis for Research and Applications (MERRA) (Bosilovich et al., 2008) reanalysis climatology. MERRA uses the Goddard Earth Observing System Data Assimilation System Version 5.2.0 (GEOS-5.2.0) run at 1/2° latitude by 2/3° longitude, and its time span is from 1979 to the present. We also use the observed radiative flux climatology provided by Clouds and the Earth's Radiant Energy System (CERES) (Wielicki et al., 1996), as well as the Global Precipitation Climatology Project (GPCP) (Adler et al., 2003) precipitation climatology. A set of metrics is used to look at specific features of the mid-latitude variability, such as the development of baroclinic instability, blocking frequency and storm track statistics. These techniques are now described in more detail.
4.1. Reading University Tracking Method (RUTRACK)
RUTRACK is a Lagrangian method to isolate and track individual cyclones and anticyclones and to combine the information from each one to provide statistics of the storm track (Hodges, 1994, 1995, 1996). For this study, relative vorticity at 850 hPa has been chosen for tracking because it is less influenced by large-scale phenomena and allows systems to be identified much earlier in their life cycle (Hoskins and Hodges, 2002). In order to remove noise and planetary-scale influence on synoptic-scale features, a filter which removes total wavenumbers lower than 5 and greater than 42 is applied to the field.
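The filtering step can be sketched in a simplified, zonal-only form (ours; the actual filter acts on total spherical-harmonic wavenumber): transform around each latitude circle, zero the wavenumbers outside the retained band, and transform back.

```python
import numpy as np

def zonal_bandpass(field, kmin=5, kmax=42):
    """Retain only zonal wavenumbers kmin..kmax of a field whose last axis
    runs around a complete latitude circle (a zonal-only simplification of
    the total-wavenumber filter described above)."""
    spec = np.fft.rfft(field, axis=-1)
    k = np.arange(spec.shape[-1])
    spec[..., (k < kmin) | (k > kmax)] = 0.0
    return np.fft.irfft(spec, n=field.shape[-1], axis=-1)

lon = 2 * np.pi * np.arange(256) / 256
sample = np.sin(2 * lon) + np.sin(10 * lon) + np.sin(60 * lon)
filtered = zonal_bandpass(sample)   # only the wavenumber-10 component survives
```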
Cyclones are identified as local maxima (or minima) of the 850 hPa relative vorticity field when compared with the surrounding 24 grid points. The tracking of individual systems is performed by minimizing a cost function for the ensemble track smoothness. Track ensembles are computed for each season, and systems lasting less than 2 days or travelling less than 1000 km are removed.
4.2. Blocking index
Blocking episodes give rise to some of the extremes of weather experienced at mid latitudes. They are associated with the interruption of the mid-latitude westerly jet and the blocking of mobile mid-latitude weather systems. These are diverted towards the polar latitudes and the predominant westerly winds are replaced by easterlies. Blocking is an example of an emergent phenomenon implicitly driven by the dynamical and physical processes in the model, and thus it is a useful test of the ability of the model to represent the atmosphere (Hinton et al., 2009).
In order to quantify the frequency of blocking events, the blocking index used follows Tibaldi and Molteni (1990). This technique has been widely used for blocking studies such as Hinton et al. (2009) or Scaife et al. (2010).
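The Tibaldi–Molteni test can be sketched as follows (our paraphrase of the published criterion): a longitude is instantaneously blocked when the 500 hPa meridional height gradient reverses south of 60°N (GHGS > 0) while a strong westerly gradient remains to the north (GHGN < −10 m per degree), for at least one latitude offset δ ∈ {−4°, 0°, +4°}.

```python
def tm_blocked(z500_at, lon):
    """z500_at(lat_deg, lon_deg) -> 500 hPa geopotential height (m).
    Returns True if longitude `lon` is instantaneously blocked, following
    our reading of the Tibaldi and Molteni (1990) criterion."""
    for delta in (-4.0, 0.0, 4.0):
        phi_n, phi_0, phi_s = 80.0 + delta, 60.0 + delta, 40.0 + delta
        ghgs = (z500_at(phi_0, lon) - z500_at(phi_s, lon)) / (phi_0 - phi_s)
        ghgn = (z500_at(phi_n, lon) - z500_at(phi_0, lon)) / (phi_n - phi_0)
        if ghgs > 0.0 and ghgn < -10.0:     # reversed gradient to the south,
            return True                     # strong westerlies to the north
    return False
```

A height field with a ridge centred near 60°N satisfies both conditions, whereas a monotonically decreasing (purely zonal) field does not.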
5. Analysis of VC impact on an idealized case
The impact of VC on the global circulation is unknown, since it has never been applied in a GCM context. In order to determine how VC may affect the zonal circulation, we have carried out the idealized Held–Suarez test (Held and Suarez, 1994) of the dynamical core for N96, N48 and N48 with VC, all with the same interpolation scheme to the departure points.
The impact of VC on the zonal circulation of the Held–Suarez idealized case is shown in Figure 3, where latitudinal profiles of zonal wind, TEKE and the meridional flux of zonal momentum U′V′ are shown. The Held–Suarez test produces a reasonably realistic zonal-mean circulation, as seen in the zonal winds (Figure 3(a)). All our experiments show the same pattern, but N48 VC produces a minor difference in the transition zone from the Tropics to the extratropics, at about 20–30° latitude, where winds are 2 m s−1 lower than in the N96 and N48 experiments. As discussed in the next section, TEKE decreases notably from N96 to N48 (Figure 3(b)), and VC performs the role it was designed for, halving the TEKE gap for the N48 version. The U′V′ profile (Figure 3(c)) shows a gap between N48 and N96 in the mid latitudes. This deficiency in poleward momentum transport is caused by weaker and less frequent synoptic cyclones. VC does not produce any noticeable change in the poleward transport of zonal momentum.
There is an undesired effect: VC accelerates stratospheric tropical easterly winds up to twice the speed simulated by N48 and N96 (not shown). This might be driven by the fact that, with ε fixed, VC forces large-scale tropical waves even though these structures are well resolved. The bias disappears when we use a preliminary version of VC in which vorticity is computed from a velocity field whose zonal mean has been subtracted. Future work will include more extensive testing of this modification.
Analysis shows that the divergence of the VC term does not vanish. As a result, it has the potential to affect the geostrophic balance by creating too much divergence. In order to study this problem we have made use of a similar implementation of vorticity confinement in the ECMWF IFS (Integrated Forecasting System), available since cycle 36r1 (introduced in 2010) but not activated for operational forecasting. A variant of this scheme has also been developed that projects the momentum increments from the scheme into vorticity and divergence increments and uses only the vorticity increments to force the flow. The implementation is made easy by the existence of powerful two-way spectral-to-grid-point transforms that, in a single subroutine call, transform grid-point momentum tendencies into spectral vorticity and divergence tendencies. This gives the flexibility to run forecasts both with and without the divergence forcing implied by VC. As well as assessing the relative magnitude of the contribution of the divergence forcing to VC, it is also interesting to examine the impact of VC on short-range forecast error at a resolution comparable with N48. To this end, 61 five-day T95 (91-level) forecasts were carried out with VC using both the total (vorticity + divergence) forcing and the vorticity forcing alone. These were compared with forecasts using the default 37r2 branch of the IFS, with forecast accuracy determined by the root-mean-square (RMS) difference between the analysed 500 hPa geopotential height (Z500) and that of the forecast. The dependence of the RMS Z500 error on forecast range is shown in Figure 4 for each day into the forecast. Both forms of VC forcing give a similar reduction of forecast error out to day 3, with the full VC forcing having a slight advantage. After day 3 the control forecast gives somewhat lower RMS error.
This could be the effect of having more energetic eddies in the VC forecasts, leading to an additional penalty when phase error becomes substantial.
In order to assess the statistical reliability of this result, Figure 5 shows how the Z500 RMS error at T + 48 h varies with the number of forecasts used to compute the mean. It can be seen that 30 days is essentially sufficient to have confidence in the above inference concerning forecast accuracy. It is therefore concluded that the divergence forcing contribution to the vorticity confinement scheme has minimal impact on forecast evolution.
6. Results

6.1. Energetics

When extremes in the vorticity field are advected, or created by vertical motion or baroclinic disturbances, temporal variability in the wind field is generated. Such variability is squared in the computation of the TEKE, and its latitudinal distribution peaks in the mid latitudes because of the increase of poleward eddy momentum transport in the Ferrel cell. N48 shows a strong deficiency of TEKE compared with the other two resolutions (Figure 6). Peak values in the N48 simulation are about three-quarters of those in the higher-resolution simulations. Using quintic interpolation reduces this deficiency in the Southern Hemisphere (SH), and VC in addition to the quintic interpolation pushes the TEKE peaks substantially towards the higher-resolution versions: differences between the N216 and N48 quasi-cubic peaks are more than halved by N48 quintic + VC in both hemispheres. Boreal winter has been chosen because it shows the largest differences in TEKE for N48 in both hemispheres. Since the simulations at N216 and N96 are fairly similar in terms of mid-latitude eddies, the remainder of the paper considers only differences between N96 and N48.
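The diagnostic itself can be sketched minimally (our version, without the band-pass filtering often applied first): TEKE is half the time-mean variance of the wind about its time mean at each grid point.

```python
import numpy as np

def teke(u, v):
    """Transient eddy kinetic energy, 0.5*(u'^2 + v'^2), where primes are
    deviations from the time mean; time is axis 0."""
    up = u - u.mean(axis=0)
    vp = v - v.mean(axis=0)
    return 0.5 * (up**2 + vp**2).mean(axis=0)

t = np.arange(100)
u = 10.0 + 3.0 * np.sin(2 * np.pi * t / 20)   # steady flow plus a transient eddy
v = np.zeros(100)
```

For the oscillation above, the steady part contributes nothing and the transient of amplitude A gives TEKE = A²/4, so only the eddies, not the mean flow, register in the diagnostic.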
6.2. Storm track features
In order to further understand the consequences for mid-latitude variability of the inability of low-resolution models to adequately reproduce eddy variability, several storm track metrics have been computed using RUTRACK for both hemispheres. Both positive and negative vorticity centres are tracked (cyclones and anticyclones, respectively, in the Northern Hemisphere). The reduction in strength of these centres at N48 compared with N96 is illustrated in Figure 7 for Northern Hemisphere anticyclones in the boreal winter. The strength is the magnitude of the spatially filtered relative vorticity at the centre of the system: the higher the strength, the quicker it spins and the more intense it is. Note that the strength is positive for both cyclones and anticyclones, as is the value of the spatially filtered vorticity used for tracking. There are two main regions where the anticyclone strength peaks, corresponding to the central locations of the Pacific and Atlantic storm tracks (Figure 7(a–d)). Compared with MERRA, MetUM N96 anticyclones are weaker at the northern edge of the Atlantic storm track, across the Atlantic side of the Arctic Ocean and in the Pacific storm track (Figure 7(e)).
The decrease of horizontal resolution from N96 to N48 results in weaker anticyclones everywhere, with a strong impact on the edges of both oceans' storm tracks, especially on the east side of the Atlantic storm track over Scandinavia and west of the Iberian peninsula (Figure 7(d)). Despite this general decline, the storm tracks are not displaced when resolution decreases (in contrast to some earlier studies with a previous version of the Hadley Centre climate model; Pope and Stratton, 2002).
When quintic interpolation is used, anticyclones are stronger, partially removing the most prominent biases at N48 (Figure 7(g)). VC strengthens them even further, especially over land in areas such as the east coast of North America, Scandinavia and the Iberian peninsula (Figure 7(b, f)).
Upper tercile means of various RUTRACK statistics for winter are shown in Tables 1 and 2. All metrics show a general decline when horizontal resolution decreases from N96 to N48. The high diffusivity of synoptic eddies at low resolution translates into fewer (∼20% fewer) and less powerful (∼15% weaker) storms. The lack of TEKE also inhibits storm formation, and so the genesis density decreases as well.
Table 1. Mean of upper tercile for storm track statistics, computed by RUTRACK. DJF averaged in the NH.
When quintic interpolation is used the density of storms and anticyclones increases, although for speed and strength this increment is marginal or negative. The addition of VC shows a clear benefit in the generation and maintenance of cyclones in the Northern Hemisphere. It also invigorates storms and anticyclones, making them faster (in the Northern Hemisphere) and stronger (both hemispheres)—a logical consequence of its anti-diffusive formulation. However, VC decreases the travelling speed of storms for reasons that are not yet clear.
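The 'upper tercile mean' statistic quoted in Tables 1 and 2 can be read (our interpretation) as the mean of the strongest third of the tracked systems for a given metric:

```python
import numpy as np

def upper_tercile_mean(values):
    """Mean of the top third of values, our reading of the 'upper tercile
    mean' used for the storm track statistics above."""
    v = np.sort(np.asarray(values, dtype=float))
    k = max(1, len(v) // 3)        # size of the upper tercile
    return float(v[-k:].mean())
```

Averaging only the upper tercile focuses the comparison on the strongest systems, which are the ones most damped at low resolution.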
6.3. Blocking

The winter statistics of large-scale blocking frequency for the Northern Hemisphere show two maxima, corresponding to the ends of the Atlantic and Pacific storm tracks (Figure 8). MetUM performs well for Atlantic blocking frequency at resolutions of N96 and higher; however, it underestimates the number of blocking episodes in the Pacific.
N48 shows a decay of blocking activity in both regions, with occurrence decreasing by approximately a third in the areas where blocking is most likely. All N48 experiments also show a decay of blocking activity in the West Pacific at about 135°E, and an unrealistic drop in the West Atlantic at about 20°W.
The low-resolution simulation with the quintic interpolation scheme produces a better blocking frequency, outperforming N96 in the Atlantic and reaching the same peak value for the Pacific. However, it shows an unrealistic peak in the West Atlantic and a decrease in blocking events in the West Pacific. VC increases the Atlantic blocking frequency up to levels that seem excessive. It also displaces the Pacific maximum frequency eastwards. This increase of blocking events could be associated with the decrease of the speed of cyclones; slower cyclones are more stationary and can be registered as short-lived blocking events by the Tibaldi and Molteni methodology.
6.4. Poleward heat transport
The poleward heat transport is intrinsically connected to the mid-latitude eddies, which carry heat polewards. The poleward transport of heat, calculated from the zonally averaged heat fluxes at the top of the atmosphere minus the heat fluxes at the surface (Zhang and Rossow, 1997), peaks in the mid latitudes. Table 3 shows the maximum (minimum) value of the transported heat flux, representing heat transported northwards (southwards). There is a strong decrease in transport for N48 compared with N96. However, the quintic + VC experiment substantially improves the poleward energy transport, surpassing N96 values.
Table 3. Maximum and minimum value of the poleward transport energy flux.
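The implied-transport calculation can be sketched as follows (our sketch after the Zhang and Rossow (1997) approach, not their code): the net column heating, TOA flux minus surface flux, is integrated over area from the southernmost latitude northward, giving the northward energy transport at each latitude.

```python
import numpy as np

def poleward_transport(net_flux, lat_deg, radius=6.371e6):
    """Implied northward energy transport (W) from the zonal-mean net
    column heating net_flux (W m-2) on ascending latitudes lat_deg."""
    phi = np.radians(lat_deg)
    dphi = np.gradient(phi)
    # power absorbed by each latitude band: flux * 2*pi*a^2*cos(phi)*dphi
    band_power = net_flux * 2.0 * np.pi * radius**2 * np.cos(phi) * dphi
    return np.cumsum(band_power)
```

For a globally balanced heating pattern the transport returns to zero at the north pole, and its extrema mark the latitudes of maximum northward and southward transport reported in Table 3.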
7. Robust features of the model hierarchy
It has been illustrated how weak mid-latitude eddy activity has a severe impact on the representation of major mid-latitude features such as cyclones. However, some fields that are not directly linked to dynamics may still be well simulated in the control (quasi-cubic) N48 version of MetUM. This is the case for cloud-related fields, for example short-wave (SW) radiation at the top of the atmosphere (TOA) (Figure 9). In general there is good agreement between N48 and N96. However, there are a small number of regions where N48 biases are worse. One example is the Peruvian coastline, where the low resolution poorly resolves land–sea interaction owing to the coarse representation of the steep gradients between the coastline and the Andes; another is the Himalayas. Outside these particular areas, radiative biases across much of the globe do not vary drastically with resolution. Similarly, the daily total precipitation rate at N48 and N96 is compared against GPCP (Figure 10); it shows similar biases across resolutions, with an excessively wet intertropical convergence zone (ITCZ).
8. Conclusions

There are growing and urgent societal demands to understand, predict and mitigate future changes in the Earth system and climate, as well as a need to better understand past climates and the transitions amongst them. These demands, which are among the scientific challenges of our time, cannot yet be addressed by the highest-resolution models because of their unaffordable computational cost. However, lower-resolution models, which require fewer computational resources, could be a useful tool for confronting these tasks if they can be shown to adequately represent the processes simulated by their higher-resolution counterparts.
A large proportion of the new generation of general circulation models, such as the MetUM, employ the semi-Lagrangian advection scheme because of its important benefits: it removes the CFL restriction and allows models to take longer time steps, and thus become computationally cheaper, while maintaining an accurate treatment of advection. Nevertheless, this option is less appropriate for low-resolution climate models, since its high diffusivity smoothes features comparable to the grid scale. At N48 these are typically synoptic features, and so there is a severe impact on mid-latitude variability.
In this paper, we have quantified the effect of this damping on synoptic variability. The MetUM climate hierarchy has been used, and N48, the lowest resolution, has been found to represent mid-latitude variability poorly. The TEKE peak over the mid latitudes produced by N48 is about three-quarters of that obtained with the higher-resolution versions. Storm track statistics show that the low-resolution cyclones are deficient in a variety of metrics: there are fewer storms (approx. 80% of the higher-resolution count in the Northern Hemisphere), and they are weaker (85%) and slower (95%). The frequency of blocking events is also reduced in the regions most favoured for blocking. The lack of mid-latitude variability also has a negative impact on the poleward energy transport, provoking the emergence of a strong surface pressure bias in the Arctic.
Two different solutions have been explored. A change in the advection scheme of the dynamical core of the N48 configuration, namely replacing quasi-cubic interpolation with quintic interpolation in the semi-Lagrangian scheme, has a positive effect on the representation of mid-latitude variability but makes the model about 8% slower. The other solution is the additional use of the Vorticity Confinement (VC) scheme, a parametrization designed to counteract the excessive diffusion of relative vorticity by the advection scheme. This combination is clearly beneficial for certain metrics, such as the strength of mid-latitude cyclones and the poleward energy transport, although it has an undesirable impact on the tropical stratospheric winds. The computational cost of VC is marginal: it is less than 1% more expensive than the quintic scheme alone.
A more accurate interpolation scheme at the lower resolution produces less diffusion of the vorticity field and therefore a higher TEKE (5% higher in the SH). This slightly improves storm-track metrics such as track density and genesis density. The improvement in blocking frequency, however, is notable, with the N48 quintic model able to reach or even surpass N96 values in the regions with the highest occurrence of blocking events. On this basis, the improvements from using the quintic scheme at N48 may be seen as justifying the additional cost.
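The benefit of higher-order interpolation can be quantified through the per-step amplification factor of a Fourier mode, |A| = |Σₘ wₘ e^{ikmΔx}|, where wₘ are the Lagrange weights at the departure point. The illustrative script below evaluates this at the worst-case fraction f = 0.5 for an 8-gridpoint wave, using full cubic for simplicity (the MetUM's quasi-cubic option is a cheaper variant of full cubic, so the exact numbers for the operational scheme will differ slightly):

```python
import numpy as np

def lagrange_weights(offsets, f):
    """Lagrange interpolation weights at fractional position f (0 <= f <= 1)
    for a stencil given by integer offsets around the departure point."""
    w = []
    for m in offsets:
        p = 1.0
        for j in offsets:
            if j != m:
                p *= (f - j) / (m - j)
        w.append(p)
    return np.array(w)

def amp_factor(offsets, f, theta):
    """|amplification factor| per SL step for a Fourier mode, theta = k*dx."""
    w = lagrange_weights(offsets, f)
    return abs(np.sum(w * np.exp(1j * np.array(offsets) * theta)))

f, theta = 0.5, np.pi / 4           # worst-case fraction, 8-gridpoint wave
for name, off in [("linear", [0, 1]),
                  ("cubic", [-1, 0, 1, 2]),
                  ("quintic", [-2, -1, 0, 1, 2, 3])]:
    print(f"{name:8s} |A| = {amp_factor(off, f, theta):.4f}")
```

Linear interpolation damps this mode by roughly 8% per step, cubic by under 1%, and quintic by about 0.1%: each step up in interpolation order retains substantially more near-grid-scale vorticity, consistent with the TEKE gains reported above.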
The VC scheme counteracts the numerical dissipation of sub-synoptic-scale eddies. An idealized Held–Suarez test reveals that VC alone can halve the TEKE bias between N96 and N48 with no major impact on the tropospheric zonal circulation. Working alongside the quintic interpolation scheme in a full GCM, it has a clear benefit for the model, halving the N48 quasi-cubic biases in TEKE when compared to the MERRA reanalysis. This improves all of the storm-track metrics evaluated except cyclone speed, halving the biases in many cases and remarkably improving the anticyclone track metrics. VC raises the frequency of blocking events in the NH above the high-resolution values, and does the same for the poleward heat transport.
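For context, in Steinhoff's original formulation of vorticity confinement (the implementation used here may differ in detail), the scheme adds a body force to the momentum equation that pushes vorticity back up the gradient of its own magnitude, governed by a single tunable coefficient ε with units of velocity:

```latex
\frac{\partial \mathbf{u}}{\partial t} = \cdots \;-\; \varepsilon\,
\left(\hat{\mathbf{n}} \times \boldsymbol{\omega}\right),
\qquad
\hat{\mathbf{n}} = \frac{\nabla\lvert\boldsymbol{\omega}\rvert}
                        {\bigl\lvert\nabla\lvert\boldsymbol{\omega}\rvert\bigr\rvert},
```

so the confinement term counteracts the diffusive spreading of vortices while leaving irrotational flow untouched, and the fixed-ε limitation discussed below enters through this single coefficient.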
Despite the benefits observed with VC, further development and evaluation are needed to reduce the negative impact on the tropical stratospheric horizontal winds, and to fully understand why it slows down storm propagation. We have used a fixed value for the parameter ε, whose value has not been carefully estimated. Additionally, this VC set-up perturbs well-resolved large-scale phenomena as well as small-scale ones. It is therefore desirable to construct a flow-dependent ε that implicitly targets particular poorly represented scales, such as synoptic cyclones at N48.
The traceable solutions described here have been shown to be very useful in increasing the mid-latitude variability of low-resolution climate models, making them potentially usable for computationally expensive simulations. However, the simulation remains slightly inferior to that at N96, so the use of higher resolutions for mid-latitude variability studies is still recommended when affordable. Studies of non-dynamical fields such as clouds or the water cycle still appear reliable in low-resolution models, since radiation and precipitation biases do not show such a strong dependence on resolution over the range explored here.
There is scope to further improve the performance of the low-resolution version of the MetUM through tools designed to prevent excessive dissipation (e.g. VC) or through the explicit forcing of subgrid variability, whereby unresolved or heavily diffused small-scale processes feed upscale energy cascades. Another innovative route to improvement at low resolution may be the use of stochastic physics schemes such as the stochastic kinetic energy backscatter scheme (Shutts, 2005), which stochastically injects rotational kinetic energy to offset the high numerical diffusivity of SL schemes, although tests to date show little impact of this particular aspect on climate simulations. Future work will aim to study how low-resolution climate models may benefit from these stochastic and anti-diffusive schemes.
Thanks to Kevin Hodges for the use of his TRACK code and Andy White for creating Figures 1 and 2. The authors are supported by the Joint DECC/Defra Met Office Hadley Centre Climate Programme (GA01101). Finally, the authors acknowledge the helpful and insightful comments of two anonymous referees.