We investigate changes in the probability density functions and the probability of moderate extremes for maximum and minimum daily temperature anomalies in Italy from 1951 up to 2008. Evaluation of trends in time-varying percentiles and higher-moment analysis of empirical density functions give no evidence of long-term changes in scale or shape of daily anomaly distributions, their temporal evolution being essentially driven by a forward, nonuniform shift in the mean. In this context, on the basis of an appropriate theoretical model for daily anomalies, we provide a realistic representation of the temporal evolution of moderate warm and cold extremes by explicitly considering the inherent nonlinearity between changes in the mean and those in exceedance probabilities. Consistency between expected and observed exceedance probabilities suggests that changes in moderate extremes can be well understood with a simple, rigid shift of the density functions alone, without invoking any change in shape.
 In practice, analysis of extreme events is hampered by several factors related to data availability, data quality, and methodological issues. Available long-term daily records largely vary from region to region, and their coverage is generally smaller than that of monthly resolved series, both spatially and temporally. Data homogeneity adds further limitations, since a well-established methodology for identifying and correcting breaks on a daily basis is still lacking [Brunetti et al., 2004; Kuglitsch et al., 2009].
 The methodological basis of extremal analysis depends strictly on the type of weather extremes being investigated. Parametric approaches based on extreme value theory are currently used in the analysis of multiyear return period events [see, e.g., Zhang et al., 2004; Nogaj et al., 2006; Della-Marta et al., 2007; Brown et al., 2008]. Nogaj et al. [2006], for instance, investigated the temporal evolution of the amplitude and frequency of extremes in high and low temperatures over the North Atlantic region through the parameters of a generalized Pareto distribution (GPD). Quite similarly, nonstationarity in daily temperature anomalies exceeding a high (or low) threshold was analyzed at a global scale by Brown et al. [2008] by fitting a GPD with time-varying parameters. Also, a generalized extreme value distribution was used by Della-Marta et al. [2007] to model extremely warm daily summer maximum temperatures and assess changes in their frequency over Europe since 1880.
 On the other hand, to overcome the severe statistical limitations inherent in extremal analysis (due to the obvious smallness of the samples to be analyzed), a number of climate studies have focused on moderate extremes only, defined as events with a return period on the order of weeks. In these cases, the annual number of events ensures a reliable trend analysis over 50 years [Klein Tank and Konnen, 2003], and empirical (model-free) techniques are adequate [Ferro et al., 2005]. The analysis of this broader class of extremes, for daily temperatures in particular, is in some cases tailored to estimating the relative rates of change in the outermost (10% or 5%) tails of the probability density function, to hint at potential variations in time of distributional characteristics of daily temperatures, such as scale and/or shape [e.g., Klein Tank and Konnen, 2003; Alexander et al., 2006; Toreti and Desiato, 2007].
 A common benchmark for climate change analyses focusing on moderate extremes is provided by the suite of indices for surface data delivered by the joint working group on climate change detection of the World Meteorological Organization-Commission for Climatology (WMO-CCL) and the Research Programme on Climate Variability and Predictability (CLIVAR) [Peterson et al., 2001]. On this basis, a number of analyses have been undertaken in recent years to provide global pictures of changes in daily temperature and precipitation extremes [Frich et al., 2002; Alexander et al., 2006; Trenberth et al., 2007].
 As far as temperature is concerned, the comprehensive work of Alexander et al. [2006], updating the results of Frich et al. [2002], highlighted widespread significant changes throughout the twentieth century, especially for those indices related to daily minimum temperature (TN) and, to a lesser extent, daily maximum temperature (TX). In particular, a significant increase in the annual occurrence of warm nights and a corresponding decrease in the annual occurrence of cold nights were found over much of the global land area sampled. Other analyses, since about 1950 (see, for instance, Robeson [2004], Griffiths et al. [2005], Brown et al. [2008], and Peterson et al.) or throughout the last century [e.g., Moberg et al., 2006], further support a general increase of warm extremes and a consistent decrease of cold extremes, on a global and regional scale, although warming has not been uniform across the globe, either spatially or temporally. Studies focusing on western and central Europe report an overall warming of both TX- and TN-related indices during the twentieth century, with stronger trends for winter than for summer [Heino et al., 1999; Klein Tank and Konnen, 2003; Moberg et al., 2006; Della-Marta et al., 2007; Kurbis et al., 2009]. Among these works, Klein Tank and Konnen [2003] highlighted discrepancies between changes in the upper and lower tails of daily temperature distributions over the last 25 years, with warm extremes increasing faster than expected from the corresponding decrease of cold extremes, thereby suggesting an increase in temperature variability. Few studies concern changes in climate extremes over the Mediterranean area [e.g., Kostopoulou and Jones, 2005; Brunet et al., 2007; Toreti and Desiato, 2007; Kuglitsch et al., 2010], and in some cases they document different tendencies compared to global trends, with generally stronger trends in daytime rather than nighttime temperatures.
Long-term changes over the Iberian Peninsula, for instance, have been found by Brunet et al. [2007] to be associated with higher increasing rates in TX than in TN, the most significant warming episode being recorded from the early 1970s onward. These findings agree with the results of Brunetti et al. concerning monthly maximum and minimum temperature trends in Italy, where the biggest changes over the second half of the twentieth century were detected for daytime rather than nighttime temperature. Similarly, Kuglitsch et al. [2010] detected stronger trends in the temperature of hot summer days than nights over the eastern Mediterranean area since 1960.
 As a whole, a pattern of changes consistent with a general warming has clearly emerged, even though TX and TN distributions are expected to vary in a nontrivial way. Observed variations in the probabilities of warm and cold extremes may indeed result from the interplay between a shift in the mean and changes in the scale (and/or shape) of the underlying distribution. Any of these changes can induce quite large effects in the probability of extremes (moderate ones included), because of the intrinsic nonlinearity of the relationships between variations in the first and higher moments of the distribution and those induced in the associated probabilities [e.g., Mearns et al., 1984]. The role of variability, in particular, is considered to be prominent in explaining observed changes in extremes, with a greater sensitivity the more extreme the event is [Katz and Brown, 1992]. Therefore, potential changes in the variance as well as in the distributional shape have often been invoked to explain the enhanced occurrence of extremely warm events [see, e.g., Schaer et al., 2004; Scherrer et al., 2005; Griffiths et al., 2005; Della-Marta et al., 2007; Yiou et al., 2009; Kurbis et al., 2009]. Clear evidence for such changes, however, is still sparse and strictly dependent on the adopted approach.
 In this work we analyze climate change–induced variations in TX and TN distributions over Italy, from 1951 to 2008, with the aim of identifying the nature of changes in the probability density functions and the probability of moderate extremes. This issue is sometimes addressed by comparing observed trends in the upper and lower tails of the distribution with those expected from a simple rigid shift. Here we take a different approach. Specifically, we first empirically investigate the time evolution of TX and TN probability density functions by analyzing the rates of change from all the portions of the distributions. Further, we outline the basic properties of the density functions and changes in the first and higher moments by using L-moment statistics [Hosking, 1990]. Second, we determine changes in the probability of moderate temperature extremes by studying the long-term behavior of a subset of percentile-based indices selected from the complete CLIVAR suite [Peterson et al., 2001; Alexander et al., 2006].
 Finally, on the basis of an appropriate model of daily temperature anomalies, we assess whether the time evolution of moderate warm and cold extremes can be explained by a simple, rigid shift of the distribution, or whether variations in scale and shape need to be invoked.
 The paper is organized as follows. Section 2 presents data and their processing. Section 3 contains basics of the analysis methods and outlines the results, i.e., long-term transformations of daily temperature distributions and changes in the index time series for moderate temperature extremes. Section 4 discusses these results, and section 5 concludes and summarizes. Some technical material is included in the appendices.
 The analysis presented in this paper is based on a set of 67 TX and TN station records (Figure 1) for a 58 year period (1951–2008). Data were selected from the Italian Air Force (Aeronautica Militare, hereinafter AM) climatic data set (SYREP data set), which includes 164 stations, according to the record length and completeness level (about 92% on average for both TX and TN) and a reasonable spatial coverage of Italy.
 Raw data were subjected to a preliminary quality check to eliminate single isolated errors (outliers) from the series. Specifically, when a TN, TX, or daily temperature range value turned out to be suspect when compared with its statistics (e.g., too high, too low, or with no variability in a set of consecutive days), its reliability was checked by comparison with neighboring stations and the synoptic AM data set (with eight SYNOP messages per day taken at 3 h regular intervals). When necessary, the SYREP data were simply replaced with the minimum/maximum of the synoptic observations (less than 0.01% of data were replaced with synoptic observations).
 Once all records had passed this preliminary quality check, both TN and TX records were independently analyzed for homogeneity. Homogeneity testing was performed with the same procedure as described in Brunetti et al. Here it was adapted to daily resolution data by fitting correcting factors with a trigonometric function, similar to Brunetti et al.
 To avoid biasing the computation of mean regional series by missing data, the data were converted, after the homogenization procedure, to anomalies with respect to the 1961–1990 normal values. This requires that at least the 1961–1990 period be unaffected by missing data. All the gaps in the 1961–1990 period (the least problematic period as far as missing data are concerned, as they amount on average to less than 3%) were filled in using the three closest stations as reference series. For each missing value in the target station, three candidate reconstructions were obtained by taking the corresponding value from each reference series, shifted to the mean of the target series under the assumption of constant differences between two neighboring series within a time interval. Mean values for the target and reference series were computed over the same time window centered on the day of interest (30 days × 3 years). Finally, to avoid potential biases from unidentified outliers, the median of the three estimates was used to fill in the gap.
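In code, the median-of-three reconstruction described above can be sketched as follows (a minimal illustration with hypothetical values; in practice the window means would be computed over the 30 day × 3 year window around the missing day):

```python
from statistics import median

def fill_gap(target_mean, ref_values, ref_means):
    """Reconstruct one missing value of the target station.

    Each reference value is shifted by the difference between the target
    and reference window means (constant-difference assumption); the
    median of the three candidate estimates guards against unidentified
    outliers in the reference records.
    """
    estimates = [v - m + target_mean for v, m in zip(ref_values, ref_means)]
    return median(estimates)

# Hypothetical window means and neighbour observations (degrees C); the
# third neighbour carries an outlier that the median discards.
filled = fill_gap(12.0, ref_values=[14.1, 13.8, 19.5], ref_means=[14.0, 13.5, 14.2])
```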
 After data completion, temperature normals were computed over the 1961–1990 standard period. For each record, the raw annual cycles were then smoothed with a trigonometric function acting as a low-pass filter (the first four harmonics were cautiously retained to avoid too coarse a smoothing).
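A common way to implement such a trigonometric low-pass filter, sketched here under the assumption that the raw climatology is stored as a 365-value array, is to zero every Fourier coefficient beyond the fourth harmonic:

```python
import numpy as np

def smooth_annual_cycle(raw_cycle, n_harmonics=4):
    """Retain only the mean and the first n_harmonics Fourier harmonics
    of a daily climatology, i.e., apply an ideal low-pass filter."""
    coeffs = np.fft.rfft(raw_cycle)
    coeffs[n_harmonics + 1:] = 0.0   # drop everything above the 4th harmonic
    return np.fft.irfft(coeffs, n=len(raw_cycle))

# Synthetic example: a smooth annual cycle plus day-to-day noise.
days = np.arange(365)
cycle = 10.0 * np.cos(2.0 * np.pi * (days - 200) / 365.0)
noisy = cycle + np.random.default_rng(0).normal(0.0, 1.5, 365)
smooth = smooth_annual_cycle(noisy)
```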
 The station anomaly records were then used to obtain average complete daily records for the two main climatically homogeneous Italian areas shown in Figure 1. These areas were identified on the basis of the results of principal component analysis applied to the correlation matrix of the monthly series. Results are very similar for TX and TN (see Figure 1) and show that only two components explain more than 5% of the variance of the station data set. After a varimax rotation, in both the TX and TN cases, the first component is associated with northern Italy (N), and the second one is associated with southern Italy (S), the only difference between TX and TN being that the first and second components together explain 79% and 77% of the total variance in the two cases, respectively. Thus, stations were assigned to the two climatically homogeneous regions on the basis of their loadings, as displayed in Figure 1 [see, e.g., von Storch and Zwiers, 2001]. Finally, all the information contained in the data was summarized into four daily records: two series (TX and TN) for the northern Italy region (N) and two corresponding series for the southern Italy region (S). Besides these series, two Italian (IT) national average series are also considered in the paper.
 As a final check, we compared the annual national TX and TN series with corresponding series extracted from the data set of the Italian secular records presented in Brunetti et al. This data set, which is based on 48 records, is completely independent of the records used in this paper, and the two data sets were subjected to independent homogenizations; i.e., none of the secular records was used for identifying or correcting breaks in the AM data set (see Brunetti et al. [2006, 2004] for further details about the homogenization procedure). Over the common period (1951–2003) the differences between the two annual series have a standard deviation of 0.1°C, and the two series correlate at 0.97, both for TN and for TX. Also, the trends evaluated from the different data sets over the same period are highly consistent.
3. Time Evolution of Probability Density Functions of Temperature Anomalies
 One of the primary objectives of this work, as already noted, is to elucidate the relationships between variations observed in the warm and cold moderate extremes and those observed in the first and higher moments of daily temperature distributions. To determine the nature of changes in daily anomalies, distribution characteristics beyond the central tendency need to be investigated. Hence, as will be discussed in section 3.1, a twofold, model-free analysis of TX and TN anomaly density functions was carried out, using empirical quantiles and sample moments. Then, section 3.2 reports the study of exceedance probabilities, i.e., warm and cold moderate extremes, based on conventional percentile indices. Linear trend analysis of index time series was performed to outline general increasing/decreasing tendencies only, thereby enabling comparisons with related works. On the other hand, understanding such changes on the basis of long-term transformations of daily temperature distributions necessarily requires explicit consideration of the nonlinear evolution of exceedance probabilities caused by a change in the first and/or higher moments, as will be discussed in detail in section 4.
3.1. Time-Varying Percentiles and Higher Moments
 Potential changes in scale and shape of the six anomaly density functions were empirically analyzed by using data order statistics (sample values sorted into ascending order). First, rates of change from all the portions of the TX and TN anomaly distributions were individually analyzed, to ascertain whether relative drifts of equally spaced quantiles (with respect to the median) can be singled out and, if so, where the major shape variations localize [see, e.g., Robeson, 2004; Caesar et al., 2006; Moberg et al., 2006]. Specifically, for each spatial average, 19 percentiles from the 5th up to the 95th, along with the most extreme 1st and 99th percentiles, were extracted from the daily anomaly distributions of each year, by linearly interpolating the two data values closest to the examined percentile [see, e.g., Zhang et al., 2005]. Linear fits to the time series of percentiles were subsequently performed by ordinary least squares regression over the period 1951–2008. Statistical significance was assessed via the nonparametric Mann-Kendall test. Relative trend values, i.e., deviations of the trend in the pth percentile from the trend of the median, are shown in Figure 3 for both TX and TN, where error bars include the 95% confidence interval. Trends are significant at least at the 5% level (p values below 0.05) for all the percentiles with few exceptions (open symbols in Figure 3).
 The relative trend slope of each percentile is clearly consistent with zero. As a result, no significant relative drift of different parts of the distributions, both TX and TN, can be traced, thereby suggesting a shape-preserving translation.
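The computation behind this analysis can be sketched as follows (a minimal illustration on synthetic data; np.percentile's default linear interpolation plays the role of the interpolation described above, an OLS fit stands in for the full analysis, and the Mann-Kendall significance test is omitted). By construction, a rigid shift leaves all relative trends at zero:

```python
import numpy as np

def percentile_trends(anomalies_by_year, percentiles):
    """OLS trend slopes of yearly percentile series.

    anomalies_by_year: one 1-D array of daily anomalies per year.
    percentiles: percentiles to track; must include 50, since relative
    trends are defined as deviations from the median's trend.
    Returns (slopes, relative slopes), in units per year.
    """
    years = np.arange(len(anomalies_by_year))
    series = np.array([np.percentile(a, percentiles) for a in anomalies_by_year])
    slopes = np.polyfit(years, series, 1)[0]      # one slope per percentile
    return slopes, slopes - slopes[list(percentiles).index(50)]

# Synthetic 58 year record undergoing a rigid shift of 0.02 K/yr.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, 2000)
shifted = [base + 0.02 * year for year in range(58)]
slopes, rel = percentile_trends(shifted, [5, 10, 50, 90, 95])
```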
 This picture is also reflected at a seasonal level; that is, a uniform warming is observed in all parts of the seasonal distributions, for both TX and TN. Indeed, results from the above analysis carried out for each season separately (not shown) reveal that trends in individual percentiles of TX and TN distributions are all consistent with trends in the median within error bars, with major contributions to this warming from the summer season (as expected on the basis of results summarized in Figure 2).
 These findings are corroborated by a moment-based analysis of daily temperature distributions. In this context we used the so-called L-moments (defined to be expectation values of certain linear combinations of order statistics; see Appendix A) to measure dispersion, skewness (departure from symmetry), and kurtosis (deviation from a normal curve) of anomaly distributions. Estimating shape parameters by means of conventional moments may fail to give accurate results, especially when small, highly noisy data sets are involved. In contrast, L-moments are less prone to the effects of sampling variability and more robust to the presence of outliers, providing a powerful tool for discriminating among sample distributions [Hosking, 1990; von Storch and Zwiers, 2001].
 The first four L-moments λr, r = 1, …, 4 (definitions and basic properties are given in Appendix A) summarize the main features of observed data samples. In particular, analogously to central moments, λ1 and λ2 may be regarded as measures of location and scale, respectively, and the moment ratios τr = λr/λ2, r = 3, 4, as measures of skewness and kurtosis, respectively. Their sample estimates λ̂1, λ̂2, τ̂3, and τ̂4 (see Appendix A) were thus computed for each year and for all the TX and TN anomaly series, and trend analysis of the related time series was subsequently performed.
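The sample estimates can be computed directly from the order statistics via probability-weighted moments; a minimal sketch (the unbiased estimators shown here are the standard ones of Hosking [1990], while the paper's exact estimators are specified in its Appendix A):

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments and the L-moment ratios tau3, tau4,
    from unbiased probability-weighted moments b0..b3 (Hosking, 1990)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                                  # location
    l2 = 2 * b1 - b0                         # scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2          # (lambda1, lambda2, tau3, tau4)

# For a Gaussian sample, tau3 ~ 0 and tau4 ~ 0.1226 (cf. Figure 5).
l1, l2, t3, t4 = sample_lmoments(np.random.default_rng(0).normal(0.0, 1.0, 200_000))
```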
 Results revealed neither upward nor downward tendencies in the temporal evolution of any of these coefficients other than the mean λ̂1; in particular, no significant increase of data dispersion could be distinguished from random fluctuations (p values of the significance tests are always greater than 0.4 for λ̂2 and greater than 0.2 for τ̂3 and τ̂4, thus largely outside the rejection region, for all the examined series).
 The strong rise in the first moment λ̂1 (see yearly results in Figure 2), which is more pronounced for TX than for TN, is better reproduced by a second-order polynomial curve than by a straight-line fit, as witnessed by the associated uncertainties (see Figure 4). The linear fit, in particular, cannot account for the moderate cooling observed in the first 3 decades, as is clear from Figure 4. The temperature decrease over this subperiod, however, is statistically significant in two cases only, i.e., for the IT and S series of TX, with rates of about 0.1 K/decade and p values just below 0.05 and 0.1, respectively.
 Hence, the temporal evolution in daily anomaly distributions seems to be essentially driven by nonuniform shifts in the mean, whereas the absence of long-term variations in scale and shape is clearly confirmed.
 As a by-product, the L-moment analysis reveals small but nonnegligible departures from normality, as a glance at Figure 5 shows. Here the L-moment ratios of the annual distributions of TX and TN anomalies are displayed in the (τ̂3, τ̂4) plane, together with their long-term averages (solid squares). Comparison with the Gaussian values of the L-moment ratios (τ3 = 0, τ4 = 0.1226) highlights a general tendency of daily anomalies to be left-skewed, more marked for TN than for TX. Further details on this issue are given in Appendix B.
3.2. Changes in Exceedance Probabilities
 The temporal evolution of moderate warm and cold extremes is usually investigated by evaluating exceedance probabilities, namely, the probability of events exceeding (not exceeding) given thresholds. As discussed by Klein Tank and Konnen [2003], percentiles should be preferred over absolute (fixed) thresholds, since, being site specific, percentiles allow for spatial comparisons. In this work, we selected a subset of percentile-based indices (TX10, TX90, TN10, and TN90) from the complete CLIVAR suite [e.g., Alexander et al., 2006], defined as the percentage of days per year with anomaly exceeding (not exceeding) the corresponding long-term ninetieth (tenth) percentile, for both TX and TN. The warm and cold spell duration indices (WSDI and CSDI) were also evaluated by identifying those TX90 and TN10 events occurring in sequences of 6 days or more (run events). Finally, the computation of these indices was also restricted to the upper and lower 5% tails of the distributions (i.e., TX5, TX95, TN5, and TN95) to highlight contributions from the largest anomalies.
 In principle, long-term thresholds may be estimated by modeling observations with an appropriate theoretical probability density [see, e.g., Jones et al., 1999], the main shortcoming being the additional uncertainties introduced by the parameter fit. Here, as in a number of related studies [see, e.g., Klein Tank and Konnen, 2003; Zhang et al., 2005], threshold values were derived for each day of the year from empirical probabilities. Specifically, the 31 consecutive daily observations falling in a moving window centered on the Julian day of interest were pooled over the period 1951–2008, and empirical quantiles were then extracted from the associated order statistics as discussed in section 3. Note that we used the entire record period for threshold evaluation to prevent the appearance of inhomogeneities at the edges of a baseline period [see Zhang et al., 2005]. Additionally, because of the large sample size (31 × 58 values), the threshold values were found to be rather robust with respect to sampling variations. Finally, since day counts on a yearly basis were obtained as the sum of day counts on a seasonal basis, i.e., over December-January-February (DJF), March-April-May (MAM), June-July-August (JJA), and September-October-November (SON), the first meteorological year was discarded from all the analyses. Changes in the exceedance probabilities were subsequently explored by means of ordinary least squares regressions on the index time series.
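The threshold and index computation can be sketched as follows (a simplified version that stores the record as a rectangular years × 365 array and wraps the 31 day window cyclically at the year edges; leap days and the seasonal splitting described above are ignored):

```python
import numpy as np

def daily_thresholds(anoms, pct, window=31):
    """Calendar-day percentile thresholds from a (years, 365) anomaly
    array, pooling a centred `window`-day sample over all years."""
    n_years, n_days = anoms.shape
    half = window // 2
    thresholds = np.empty(n_days)
    for day in range(n_days):
        idx = np.arange(day - half, day + half + 1) % n_days   # cyclic window
        thresholds[day] = np.percentile(anoms[:, idx], pct)
    return thresholds

def exceedance_count(year_values, thresholds):
    """Days in one year above the daily thresholds (e.g., a TX90 count)."""
    return int(np.sum(year_values > thresholds))

# Synthetic stationary record: a TX90-like count should then average
# about 36.5 days per year (10% of 365), with no trend.
rng = np.random.default_rng(2)
anoms = rng.normal(0.0, 1.0, (58, 365))
thr90 = daily_thresholds(anoms, 90)
counts = [exceedance_count(anoms[year], thr90) for year in range(58)]
```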
 Trend analysis of TX90, TN90, TX10, and TN10 shows an increase of above-threshold events and a corresponding decrease of below-threshold events, indicating an overall warming that is more pronounced for TX than for TN (see Table 1). Linear fits were performed over the full period (1952–2008) and over the 1980–2008 subperiod. Trend slopes are reported in Table 1 as number of days per decade. The results are significant at the 1% level (p values below 0.01), with few exceptions.
Table 1. Linear Trends for the TX90, TX10, TN90, and TN10 Annual Series Over 1952–2008 and Over 1980–2008 for Spatial Averages IT, N, and S^a

Index   Series   1952–2008      1980–2008
TX90    IT        9.3 ± 1.5     21.9 ± 4.2
TX90    N         8.7 ± 1.3     18.4 ± 3.4
TX90    S         7.5 ± 1.5     18.7 ± 4.6
TN90    IT        7.7 ± 1.4     21.3 ± 4.0
TN90    N         7.1 ± 1.3     19.1 ± 3.6
TN90    S         6.7 ± 1.3     16.5 ± 3.9
TX10    IT       −5.4 ± 1.1     −8.9 ± 2.8
TX10    N        −5.2 ± 1.1     −9.7 ± 2.8
TX10    S        −5.2 ± 1.1     −8.6 ± 2.9^b
TN10    IT       −5.2 ± 1.0     −7.0 ± 2.8^b
TN10    N        −4.9 ± 1.0     −8.0 ± 2.5^b
TN10    S        −4.5 ± 1.0     −6.5 ± 3.0^b

^a Values are expressed as number of events per decade. The p values are lower than 0.01, except when followed by "b", in which case p values are below 0.05. Statistical significance is assessed via the Mann-Kendall test. IT, Italy; N, northern Italy; S, southern Italy.
 As far as TX is concerned, linear trends over the full period show that the increasing rates in TX90 are much higher than the decreasing rates in TX10 for all the spatial averages. This discrepancy is markedly enhanced if trend analysis is restricted to 1980–2008, because the rate in the upper tail more than doubles over this subperiod. Similarly, in the TN case, the probability fluxes across the warm thresholds, estimated by linear trends over the full period, are higher than those across the cold thresholds, though to a slightly lesser extent than for TX. As discussed in detail in section 4, this is consistent with stronger warming trends for TX than for TN (Figure 3), since faster translations of the distribution tend to increase the discrepancy between the two probability fluxes. The asymmetry between TN90 and TN10 rates, however, rises strongly when trend analysis is restricted to 1980–2008.
 Linear trends in the time series of TX95, TX5, TN95, and TN5, summarized in Table 2, show that the asymmetries between rates in above- and below-threshold events are generally emphasized in the 5% tails, especially over the latest subperiod, even though the reduction in sample size may weaken trend significance down to the 10% level (p values lower than 0.1) and below (see values marked by "bb" and "n", respectively, in Table 2).
Table 2. Linear Trends in Time Series of TX95, TX5, TN95, and TN5^a

Index   Series   1952–2008      1980–2008
TX95    IT        5.8 ± 1.1     16.0 ± 3.1
TX95    N         5.5 ± 0.9     13.1 ± 2.6
TX95    S         4.3 ± 1.0     12.6 ± 3.0
TN95    IT        4.6 ± 1.0     13.5 ± 3.0
TN95    N         3.9 ± 0.9     11.9 ± 2.7
TN95    S         4.2 ± 1.0     11.8 ± 3.0
TX5     IT       −3.2 ± 0.7     −5.2 ± 1.6
TX5     N        −3.1 ± 0.7     −5.5 ± 1.6
TX5     S        −2.7 ± 0.6     −3.8 ± 1.5^bb
TN5     IT       −2.8 ± 0.8     −3.6 ± 2.0^n
TN5     N        −3.1 ± 0.7     −4.2 ± 1.6^b
TN5     S        −2.8 ± 0.8     −4.0 ± 2.1^n

^a Values are expressed as number of events per decade. The p values are lower than 0.01 for all the values except for those followed by a "b" (p values below 0.05) or "bb" (p values below 0.1). Values followed by an "n" are not significant (p value greater than 0.1).
 Note also that, restricting the analysis to the first 3 decades only, i.e., from 1952 to 1979, a weak, opposite tendency is observed, with a slight increase in TX10/TX5 (TN10/TN5) events and a corresponding decrease in TX90/TX95 (TN90/TN95) events, which is due to the moderate cooling during this subperiod (see section 3.1). Trends are significant, however, only in a few cases: the decrease of TX90 events in the IT series (p values below 0.01) and the S series (p values below 0.05), the decrease of TX95 events in the IT and S series and that of TN95 events in the IT series (p values below 0.05). In the remaining cases p values are always greater than 0.1.
 Further insights into the temporal evolution of the TX warm tail and the TN cold tail are provided by the analysis of the warm (WSDI) and cold (CSDI) spell duration indices, respectively. In addition, to highlight contributions to the yearly count arising from a few long spells, we readjusted the counting indices by summing only the excess of consecutive TX90 and TN10 events beyond the fifth day. This is shown in Figure 6, where both the conventional WSDI and the modified WSDIex are evaluated for the IT series. The record-breaking year 2003 clearly stands out when the modified index is used.
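The spell indices reduce to a run-length computation over the boolean daily exceedance series; a minimal sketch (the second return value corresponds to the readjusted index, counting only the excess beyond the fifth day of each qualifying run):

```python
def spell_stats(exceed, min_len=6):
    """Spell statistics from a boolean daily exceedance series.

    Returns (days in spells, excess days beyond the 5th of each spell,
    number of spells, longest spell), where a spell is a run of at least
    `min_len` consecutive exceedances; the first element corresponds to
    WSDI/CSDI and the second to the modified WSDIex/CSDIex.
    """
    runs, current = [], 0
    for flag in exceed:
        if flag:
            current += 1
        else:
            if current:
                runs.append(current)
            current = 0
    if current:
        runs.append(current)
    spells = [r for r in runs if r >= min_len]
    total = sum(spells)
    excess = sum(r - (min_len - 1) for r in spells)
    return total, excess, len(spells), max(spells, default=0)

# Example: runs of 7, 5, and 10 exceedance days; the 5 day run is too
# short to qualify as a spell.
flags = [True] * 7 + [False] * 3 + [True] * 5 + [False] * 2 + [True] * 10
stats = spell_stats(flags)
```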
 Trend analysis of the index time series addressed the rates of change in day counts (WSDI and WSDIex), the variation in the frequency of sequences (NWS), and the yearly maximum spell duration (WSmax). As expected, given the overall increase of the TX90 probability, run events show positive significant trends in all the warm-spell-related variables, as summarized in Table 3. The decrease of cold spells similarly concerns total yearly day counts and the excess beyond the fifth consecutive day (CSDI and CSDIex), the number of sequences per year (NCS), and their maximum duration (CSmax) (Table 3).
Table 3. Trend Values for All the Warm and Cold Spell Related Variables^a

Series   WSDI        WSDIex       NWS          WSmax        CSDI          CSDIex        NCS           CSmax
IT       5.8 ± 1.2   2.7 ± 0.6    0.6 ± 0.1    1.8 ± 0.5    −2.1 ± 0.8    −0.9 ± 0.4    −0.9 ± 0.4    −1.1 ± 0.4
N        4.3 ± 1.0   1.8 ± 0.5    0.5 ± 0.1    1.7 ± 0.4    −2.1 ± 0.7^b  −1.0 ± 0.4^b  −1.0 ± 0.4^b  −1.2 ± 0.4^b
S        5.0 ± 1.1   2.4 ± 0.6^b  0.5 ± 0.1^b  1.1 ± 0.4^b  −2.2 ± 0.8^b  −0.7 ± 0.3^b  −0.7 ± 0.3^b  −1.0 ± 0.3

^a Specifically, counting indices (warm spell duration index (WSDI) and cold spell duration index (CSDI)), excess in the counting indices (WSDIex and CSDIex), number of warm and cold spells per year (NWS and NCS), and yearly maximum spell duration (WSmax and CSmax), respectively. All values, expressed as number of events (number of spells in the case of NWS and NCS) per decade, are significant at the 1% level (p values below 0.01) except those followed by a "b" (p values below 0.05).
 Finally, a seasonal analysis (not shown) of moderate extremes (TX/TN90 and TX/TN10 only) revealed that marked and significant contributions to TX and TN warming originate mostly from the warm season (JJA and, to a lesser extent, MAM) for all the analyzed series, consistent with the seasonal trends depicted in Figure 2. Only a few moderately significant contributions (p values just below 0.05) emerge from DJF and SON for both TX and TN, and these mainly pertain to above-threshold events (TX90 and TN90).
 Note that the above results concerning the long-term behavior of exceedance probabilities agree rather well with those of a recent study of temperature extremes over Italy in the second half of the twentieth century [Toreti and Desiato, 2007]. Its authors highlighted a sharp increase in the warming trend over the last 25 years, with much faster rates of change for warm (TX90 and TN90) than for cold (TX10 and TN10) moderate extremes, consistent with our findings. In addition, larger and highly significant decreases (increases) in the number of cold (warm) days than in the number of cold (warm) nights (see Tables 1 and 2) were also reported for the Iberian Peninsula over the twentieth century by Brunet et al. [2007]; the strong rise in the long-term Spanish records since about 1970, in particular, was found to be associated with greater increases in spring and summer temperatures than in winter and autumn ones. Similarly, Kostopoulou and Jones [2005] pointed out that the most significant trends since the late 1950s across the eastern Mediterranean basin occurred during summer, with a more marked increase in warm events than decrease in cold nights and days. Furthermore, higher increasing rates in the temperature of hot summer days than nights since 1960 were detected over this region by Kuglitsch et al. [2010].
 As discussed in detail in section 4, the degree of discrepancies between rates of change in the upper and lower tails can be well explained in terms of a rigid, nonuniform shift of the anomaly distributions, if nonlinearity between evolution of the mean and of exceedance probabilities is properly accounted for.
 As already noted, trends detected in the exceedance probabilities (section 3.2) provide clear evidence of a pronounced warming, more marked for TX than TN, essentially driven by the latest decades. The differences outlined in the rates of change of the warm and cold moderate extremes reflect to some extent the shape of the underlying distribution. However, it is hard to draw definite conclusions about the nature of changes in the TX and TN anomaly distributions from a direct comparison between linear trends in the exceedance probabilities. As a matter of fact, even for a rigid, uniform translation of a density function, the total probability of events above (or below) a fixed threshold varies nonlinearly in time for any realistic distribution. The same holds for the rates of change of exceedance probabilities, which are strictly related to the functional form of the density function and its translational velocity. This is easily seen, for instance, for normally distributed data, where the rate of change in the probability of above/below-threshold events scales (i.e., increases/decreases) exponentially with a positive shift of the distribution (see Figure 7). Thus, a linear approximation to the exceedance probabilities is reliable only over short times or, equivalently, for very slow shifts.
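The asymmetry between the warm and cold tails under a uniform shift can be sketched numerically for normally distributed anomalies (a minimal illustration with scipy; the 10%/90% thresholds mirror the TX10/TX90 definitions, but the shift values are generic, not fitted to the Italian data):

```python
from scipy.stats import norm

# Fixed thresholds: the 90th percentile of the unshifted standard normal
# (by symmetry, -thr is the 10th percentile)
thr = norm.ppf(0.9)

# Shift the mean in equal steps and track both exceedance probabilities:
# the warm tail grows much faster than the cold tail shrinks
for shift in (0.0, 0.25, 0.5, 0.75, 1.0):
    p_warm = norm.sf(thr, loc=shift)    # P(X > thr)
    p_cold = norm.cdf(-thr, loc=shift)  # P(X < -thr)
    print(f"shift={shift:4.2f}  P_warm={p_warm:.3f}  P_cold={p_cold:.3f}")
```

Equal increments of the mean produce accelerating increments in the upper-tail probability, which is exactly why a straight-line fit to an exceedance-probability series is reliable only for slow shifts.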
 Bearing these considerations in mind, a realistic representation of the time evolution of moderate extremes may be attained on the basis of the results of section 3.1 by explicitly considering the nonlinearity between a change in the mean and changes in the exceedance probabilities. This is illustrated in Figure 8, where the TX10 (left) and TX90 (right) time series are displayed for the IT series. Also shown in Figure 8 are the expected exceedance probabilities as functions of time, obtained by assuming the skew-normal density function [see, e.g., Azzalini and Capitanio, 1999]

f_{SN}(x;\xi,\omega,\alpha) = \frac{2}{\omega}\,\phi\!\left(\frac{x-\xi}{\omega}\right)\Phi\!\left(\alpha\,\frac{x-\xi}{\omega}\right) \qquad (1)

as the underlying distribution of daily temperature anomalies. In equation (1), ξ, ω, and α are a location, a scale, and a shape parameter, respectively; the latter allows for nonzero skewness, thereby accounting for the observed small departures from symmetry (further details are provided in Appendix B). The density (1) was fitted to the daily temperature anomalies, separately for each year and each TX and TN series, and proved largely superior to the traditional normal density (see Appendix B). A rigid but nonuniform translation was assumed as the time evolution law of the density (1), consistent with the findings of section 3.1. The location parameter is left to vary with a velocity linearly dependent on time, while the scale and shape parameters are held equal to their long-term averages, i.e.,

\xi(t) = \xi_0 + \xi_1 (t - t_0) + \xi_2 (t - t_0)^2, \qquad \omega(t) = \bar{\omega}, \qquad \alpha(t) = \bar{\alpha}. \qquad (2)
 Here t and t0 lie in the record period, and the coefficients ξi, i = 0, 1, 2 are derived from a polynomial fit of the mean as a function of time (see section 3.1). Hence, the exceedance probabilities are given by the time-dependent cumulative distributions

F(y, t) = \int_{-\infty}^{y} f_{SN}(y', t)\, dy', \qquad P_{<}(t) = F(\tilde{y}_{p}, t), \qquad P_{>}(t) = 1 - F(\tilde{y}_{1-p}, t), \qquad (3)

with p = 0.1, where the density fSN(y, t) is obtained by combining equation (2) and equation (1). The average percentiles ỹp and ỹ1−p are defined by the conditions F(ỹp, t0) = 1 − F(ỹ1−p, t0) = p at a given time t0 to be determined by comparison with the data. Also shown in Figure 8 is the effect of varying the shape parameter α within the observed range, i.e., α± = ᾱ ± 2σα, where σα is the standard deviation of α computed over the set of annual distributions. The lower α is, the sharper (flatter) the change in the right-hand (left-hand) tail of the expected probability becomes. On the other hand, varying the scale parameter ω within the observed range (as before, ω± = ω̄ ± 2σω) modifies both tails similarly (not shown); e.g., a sharper increase (decrease) in the right-hand (left-hand) tail of the distribution is observed for ω− = ω̄ − 2σω.
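The construction of equations (1)–(3) can be sketched with scipy's skew-normal implementation. All parameter values below (ξ0, ξ1, ξ2, ω̄, ᾱ) are hypothetical placeholders chosen for illustration, not the values fitted to the Italian series:

```python
from scipy.stats import skewnorm

# Hypothetical long-term averages of scale and shape, and polynomial
# coefficients for the drifting location parameter (NOT the fitted values)
omega_bar, alpha_bar = 3.0, -1.0
xi0, xi1, xi2 = 0.0, 0.005, 0.0008

def xi(t, t0=0.0):
    """Rigid, nonuniform shift: location drifts with linearly growing velocity."""
    return xi0 + xi1 * (t - t0) + xi2 * (t - t0) ** 2

# Fixed thresholds: the 10th/90th percentiles of the distribution at t0 = 0
y_lo = skewnorm.ppf(0.10, alpha_bar, loc=xi(0.0), scale=omega_bar)
y_hi = skewnorm.ppf(0.90, alpha_bar, loc=xi(0.0), scale=omega_bar)

# Time-dependent exceedance probabilities, equation (3)
for t in (0.0, 20.0, 40.0, 57.0):  # years since the start of the record
    p_cold = skewnorm.cdf(y_lo, alpha_bar, loc=xi(t), scale=omega_bar)
    p_warm = skewnorm.sf(y_hi, alpha_bar, loc=xi(t), scale=omega_bar)
    print(f"t={t:4.0f}  P_cold={p_cold:.3f}  P_warm={p_warm:.3f}")
```

Both probabilities start at p = 0.1 at t0; as the location drifts forward with accelerating velocity, the warm tail inflates faster than the cold tail deflates, reproducing the qualitative pattern of Figure 8.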
 Straight-line fits, also displayed in Figure 8, clearly provide quite a poor representation of the evolution of moderate extremes. Indeed, the temporal evolution of exceedance probabilities basically reflects the nontrivial relationship between changes in the mean and probability of extremes, entailed by equation (3), which can hardly be fitted by a straight line. Besides, data skewness and nonuniformity in the shift of the mean provide further insights into observed changes and, in particular, the enhanced warming over the latest decades.
 Exceedance probabilities (equation (3)) for p = 0.05 also agree rather well with the observed TX5 and TX95 time series, as shown in Figure 9 again for the IT series. Similar results were found for TN and for all the examined series. For the TN series, in particular, a smoother pattern was observed in the temporal evolution of both upper and lower tails compared to TX, because of the slower shift of the TN location parameter (see Figure 4).
 Finally, the overall agreement between the expected and observed exceedance probabilities clearly shows that a rigid, nonuniform shift of the temperature density function alone provides an accurate interpretation of the evolution of moderate extremes in the analyzed period, without invoking any change in scale or shape.
 We analyzed TX and TN distributions over Italy from 1951 up to 2008 to identify the nature of changes in the probability density functions and probability of moderate extremes, induced by a warming climate.
 A difficult question in climate studies is explaining the relationships between changes in the moderate extremes and changes in the mean and higher moments of daily temperature distributions, either globally or locally. These relationships have long been known [Mearns et al., 1984] to be highly nonlinear, in that even a simple, linear increase in the mean may substantially affect probabilities of extremes.
 A common approach [e.g., Klein Tank and Konnen, 2003; Alexander et al., 2006; Toreti and Desiato, 2007] is to analyze linear trends in the time series of extreme temperature indices and infer, on the basis of the approximate normality of daily data, potential changes in the distribution characteristics, especially variability, from discrepancies between the observed rates in the warm and cold tails and those expected from a rigid, uniform shift. Here, reversing the point of view, we first determined the temporal evolution of daily temperature distributions using a twofold empirical approach, and we subsequently analyzed rates of change in the moderate warm and cold extremes on the basis of conventional extreme temperature indices. Results may be summarized as follows. (1) Uniform warming of all parts of the TX and TN daily anomaly distributions clearly emerged from the analyses based on data order statistics, i.e., analysis of time-varying percentiles as well as of the first and higher L-moments, with no evidence of long-term changes in shape or scale along with changes in the mean. (2) The temporal evolution of the anomaly density functions was found to be controlled by a nonuniform, essentially rigid translation of the mean, which is better reproduced by a second-order polynomial than by a straight-line fit. (3) Small but nonnegligible departures of daily anomalies from normality (i.e., nonzero skewness), more severe for TN than for TX data, were outlined using L-moment shape statistics. (4) Linear trends in the index time series for warm and cold moderate extremes provided clear evidence of a pronounced warming of the distribution tails in the latest decades, more marked for TX than TN, following a moderate (and generally not statistically significant) cooling or stationary phase.
 A theoretical scheme was finally suggested to understand observed changes in the warm and cold tails of daily anomaly distributions in the light of the above results. Specifically, exploiting the proper (fitted to the data) evolution law of the location parameter, we provided a realistic representation of the temporal evolution of exceedance probabilities. Expected probabilities were explicitly evaluated with the help of a parametric model for the underlying anomaly density functions, devised to account for the observed left skewness of daily data. Consistency between the expected and observed exceedance probabilities highlights the role of the inherent nonlinearity between variation in the mean and probability of extremes and strongly suggests that the observed temporal evolution of the warm and cold moderate extremes can indeed be explained by a rigid, nonuniform translation of the anomaly density functions.
 Following Hosking [1990], the rth L-moment of a random variable X is given by

\lambda_r = \frac{1}{r} \sum_{k=0}^{r-1} (-1)^{k} \binom{r-1}{k} \mathrm{E}\, X_{r-k:r}, \qquad r = 1, 2, \ldots \qquad (\mathrm{A1})

where \binom{r-1}{k} denotes the binomial coefficient, and E X_{r−k:r} is the expectation of the (r − k)th smallest element in a random sample of size r drawn from the distribution of X. The L-moments λr, r = 1, 2, … exist if (and only if) the random variable X has a finite mean [Hosking, 1990]. In that case, the main features of the density function are specified by the first four L-moments.
 The L-location λ1 is the usual mean, whereas the L-scale λ2 is a measure of dispersion of the random variable X, which replaces the ordinary standard deviation. Both quantities can be expressed in terms of the difference between two randomly selected values, but the standard deviation assigns relatively more weight to the extreme sample values than λ2 does. The scale-free analogues of λ3 and λ4, namely the L-moment ratios τr = λr/λ2, r = 3, 4, may be regarded as measures of skewness and kurtosis, respectively. Unlike the conventional moment-based coefficients, which can take arbitrarily large values, the L-moment ratios are bounded, namely ∣τr∣ ≤ 1, regardless of the distribution. Further details and applications can be found in Hosking [1990], Vogel and Fennessey, Hosking, and references therein.
 Unbiased estimators lr of equation (A1), based on an observed data set of size n, are given by the corresponding linear combinations of the ordered data averaged over all subsamples of size r that can be constructed from the observed sample (U statistics; see, e.g., Vogel and Fennessey). Hence, the rth sample L-moment explicitly reads [Hosking, 1990]

l_r = \sum_{k=0}^{r-1} (-1)^{r-1-k} \binom{r-1}{k} \binom{r-1+k}{k} b_k, \qquad (\mathrm{A2})

where

b_k = \frac{1}{n} \sum_{i=k+1}^{n} \frac{(i-1)(i-2)\cdots(i-k)}{(n-1)(n-2)\cdots(n-k)}\, x_{i:n},

and x1:n ≤ x2:n ≤ … ≤ xn:n denote the ordered data.
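The b_k statistics and the first four sample L-moments can be transcribed directly into code (an illustrative implementation of Hosking's estimators, using the standard combinations l2 = 2b1 − b0, l3 = 6b2 − 6b1 + b0, l4 = 20b3 − 30b2 + 12b1 − b0):

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments l1..l4 via the b_k statistics [Hosking, 1990]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b = [x.mean()]                      # b_0 is the sample mean
    for k in (1, 2, 3):
        w = np.ones(n)
        for j in range(1, k + 1):       # weight (i-1)...(i-k) / (n-1)...(n-k)
            w *= (i - j) / (n - j)
        b.append(np.mean(w * x))
    b0, b1, b2, b3 = b
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

# L-skewness and L-kurtosis ratios for a large normal sample: t3 should be
# near 0 and t4 near the Gaussian value of about 0.1226
rng = np.random.default_rng(0)
l1, l2, l3, l4 = sample_l_moments(rng.normal(size=5000))
t3, t4 = l3 / l2, l4 / l2
```

For a symmetric sample the L-skewness vanishes exactly, and l2 coincides with half the Gini mean difference of the data.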
 As discussed in section 3.1, departures of the analyzed daily temperature distributions from a normal curve mainly originate from a lack of symmetry. Thus, to avoid unrealistic assumptions, the left skewness of the data must be taken into account. This may be done by modeling observations with the univariate skew-normal density function, which is a simple three-parameter extension of the Gaussian density allowing for nonzero skewness [see Azzalini and Capitanio, 1999; Azzalini, 2005; Genton, 2004, and references therein]. This distribution has reasonable flexibility in data fitting, while preserving the mathematical tractability inherited from the normal density. The skew-normal density can be expressed in terms of the standard normal density ϕ and its cumulative distribution Φ:

f(z;\alpha) = 2\,\phi(z)\,\Phi(\alpha z), \qquad (\mathrm{B1})

where α is a real coefficient that determines the shape of the density function, and the factor 2 provides the proper normalization constant for any choice of α. The density function is left (right) skewed for α < 0 (α > 0), and the standard normal is recovered for α = 0. The usual transformation x = ξ + ωz generates a three-parameter class of distributions by adding a location (ξ) and a scale (ω) parameter (see equation (1)), related to the ordinary mean and standard deviation by

\mu = \xi + \omega\,\delta\sqrt{2/\pi}, \qquad \sigma^2 = \omega^2\left(1 - \frac{2\delta^2}{\pi}\right), \qquad \delta = \frac{\alpha}{\sqrt{1+\alpha^2}}. \qquad (\mathrm{B2})
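The relations between (ξ, ω, α) and the ordinary mean and variance, μ = ξ + ωδ√(2/π) and σ² = ω²(1 − 2δ²/π) with δ = α/√(1 + α²), can be checked numerically against scipy's skew-normal moments (parameter values below are arbitrary illustrations):

```python
import math
from scipy.stats import skewnorm

# Arbitrary illustrative parameters (location, scale, shape)
xi_p, omega_p, alpha_p = 1.5, 2.0, -1.0
delta = alpha_p / math.sqrt(1 + alpha_p**2)

# Closed-form mean and variance of the skew-normal distribution
mean_formula = xi_p + omega_p * delta * math.sqrt(2 / math.pi)
var_formula = omega_p**2 * (1 - 2 * delta**2 / math.pi)

# Same quantities from scipy's analytic moments
mean_scipy, var_scipy = skewnorm.stats(alpha_p, loc=xi_p, scale=omega_p,
                                       moments="mv")
```

Note that for α < 0 the mean lies below the location parameter ξ, consistent with the left skewness of the fitted anomaly distributions.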
Reliability of the density (B1) in reproducing daily temperature anomalies was tested against the normal density. Both densities were fitted to the same data samples, separately for each year of the record period and for each TX and TN series. The goodness of fit was assessed by the χ2 test with a rejection level of 0.10. We found that the skew-normal distribution was not rejected in more than 70% of the analyzed cases (78% for TX and 72% for TN), whereas this percentage was significantly lower for the pure normal distribution (57% for TX and 52% for TN); see Figure B1.
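A comparison of this kind can be emulated on synthetic data. The sketch below substitutes a maximum-likelihood comparison for the χ² test of the text, and uses a deliberately exaggerated skewness (α = −4, stronger than the α ≈ −1 found for the real anomalies) so that the outcome is unambiguous; it is an illustration, not a reproduction of the TX/TN analysis:

```python
import numpy as np
from scipy.stats import skewnorm, norm

rng = np.random.default_rng(42)
# Synthetic "one year" of daily anomalies with strong left skewness
data = skewnorm.rvs(-4.0, loc=0.5, scale=3.0, size=365, random_state=rng)

# Maximum-likelihood fits of both candidate densities
a_hat, xi_hat, om_hat = skewnorm.fit(data)
mu_hat, sd_hat = norm.fit(data)

# Log-likelihoods: the normal density is the alpha = 0 member of the
# skew-normal family, so the skew-normal fit can only do at least as well
ll_sn = np.sum(skewnorm.logpdf(data, a_hat, loc=xi_hat, scale=om_hat))
ll_n = np.sum(norm.logpdf(data, loc=mu_hat, scale=sd_hat))
```

Because the normal density is nested within the skew-normal family, a clearly better skew-normal likelihood on skewed samples mirrors the higher acceptance rates reported above for the χ² test.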
 Hence, explicit consideration of skewness provides a marked improvement in the quality of the fit despite the smallness of the asymmetries (a value α ≈ −1 is found on average for the fitted shape parameter). Note that in cases where the anomaly distributions exhibit closely Gaussian behavior, the shape parameter α approaches zero and the normal density is recovered. By way of illustration, we report in Figure B1 the fitted normal and skew-normal densities, compared to the corresponding empirical distribution, for a randomly selected data sample in the middle of the analyzed period (1985). Data were extracted from the TX-IT series, and a 3 year window was used to enlarge the sample size. Similar results are obtained for any of the analyzed cases.
 This study was carried out in the framework of the DTA-CMCC agreement “Sviluppo e verifica di tecniche di downscaling e calibrazione di modelli idrologici, sulla base di una griglia termopluviometrica ad altissima risoluzione per gli ultimi 150 anni, per la valutazione dell'impatto dei cambiamenti climatici” and the CMCC/ISAC-CNR agreement “Validazione e downscaling di scenari prodotti con modelli climatici attraverso l'utilizzo di una griglia di variabili meteorologiche ad altissima risoluzione.” The authors thank EU-COST-ACTION ES0601 “Advances in homogenization methods of climate series: an integrated approach (HOME).” Data were kindly provided by the Italian Air Force.