Keywords:

  • Mesonet;
  • quality control tests;
  • quality flag;
  • Turkey

ABSTRACT

  1. Top of page
  2. ABSTRACT
  3. 1. Introduction
  4. 2. Western Turkey Mesonet
  5. 3. Automated quality control tests
  6. 4. Case study
  7. 5. Manual QC checks
  8. 6. Summary and conclusions
  9. Acknowledgements
  10. References

The Western Turkey Mesonet was initiated in 2002 as part of the Turkey Emergency Flood and Earthquake Recovery project. Its main goal is to provide agricultural and meteorological data, not only to support flood forecasting/warning applications but also to serve meteorological uses such as nowcasting and short-range forecasting. Currently, it is operational in the western part of Turkey at 206 sites, divided into four groups with different parameter configurations and providing observations at either 1 or 10 min periods. The observations are transferred to the Turkish State Meteorological Service headquarters by one of the communication technologies available at each site: VSAT (Very Small Aperture Terminal), GPRS (General Packet Radio Service) or ADSL (Asymmetric Digital Subscriber Line). The quality control of the mesonet data is performed monthly in two ways: an automated quality control test run and manual quality control checks. The automated quality control tests (range, step, persistence, like-instrument and spatial) are run using thresholds that vary not only spatially (site to site) but also temporally (month to month). The confidence level of the observation quality is classified by one of the flag types ‘good’, ‘suspicious’ or ‘bad’ with respect to the test applied. Observations flagged ‘suspicious’ or ‘bad’ are cross-checked in the manual quality control stage and either confirmed as ‘bad’ or updated to ‘good’. The thresholds of the quality control tests for the corresponding sites are also updated where needed.


1. Introduction


The stochastic character of the atmosphere is the main limitation on examining and forecasting atmospheric phenomena accurately. This complexity is addressed by observing the atmosphere at a sufficient number of observation sites with different types of sensors and capabilities. Vast amounts of data are available from these networks for internal use and/or international exchange, and quality control (QC) procedures therefore become essential. The World Meteorological Organization (WMO) introduced the basic characteristics and general principles of data quality control (WMO, 2010a), and expectations of the member National Meteorological and Hydrometeorological Services are discussed in WMO (2010b). The members are invited to perform QC tests at observation sites and in data collection centres. Plausible value, plausible rate and internal consistency checks are among the QC tests recommended for application at a site. The extended QC tests to be performed in the data collection centres are listed as the plausible value, time consistency (e.g., step and persistence tests) and internal consistency checks (WMO, 2010b).

Various QC tests have been proposed in the literature to assess the validity of observations. Among these, the threshold method and step-change test are used to detect potential outliers in single-station datasets (Wade, 1987; Meek and Hatfield, 1994; Eischeid et al., 1995; You et al., 2007). The use of multiple stations in QC procedures is enabled through a spatial test that compares a station's data against data from neighbouring stations (Gandin, 1988; Reek et al., 1992; Eischeid et al., 2000; Feng et al., 2004; You and Hubbard, 2006). The measurement at the station of interest is estimated by various approaches such as multiple regression (Eischeid et al., 2000) and the bivariate linear regression test (Hubbard et al., 2005).

A complex QC system, with a richer set of consistency checks and flagging procedures referring to a variety of confidence levels, was first proposed by Gandin (1988) for radiosonde data. The same system was extended to daily (Reek et al., 1992; Kunkel et al., 1998) and hourly (Meek and Hatfield, 1994; DeGaetano, 1997) surface meteorological data. Shafer et al. (2000) introduced a detailed decision-making procedure for the real-time Oklahoma Mesonet dataset, in which predefined QC tests are applied as consistency checks in the temporal and spatial domains, and the confidence level with respect to each test applied is represented by an assigned flag type. The same QC system was adapted for the West Texas Mesonet (Sönmez and Doggett, 2003; Schroeder et al., 2005; Sönmez et al., 2005) with a slightly different flagging procedure. More recently, Graybeal et al. (2004) introduced a complex QC algorithm for historical hourly meteorological data, with controls for limits, internal consistency and temporal consistency as the three components of the QC tests performed. A decision tree was developed based upon the quality flag counts and a severity level, the latter dependent upon the flag type.

In the present study, the complex QC system developed for the Western Turkey Mesonet (WTM) is introduced. The consecutive steps of the automated QC test run and the manual QC checks are presented as parts of the operational QC system at the Turkish State Meteorological Service (TSMS) headquarters. The automated QC test applications of the range, step, persistence, like-instrument and spatial tests, with corresponding quality flag assignments, are described. The final quality flag update procedure, with site-to-site and monthly varying QC test threshold updates, is described for the manual QC check phase.

2. Western Turkey Mesonet


A mesoscale monitoring network establishment was initiated in 2002 as part of the Turkey Emergency Flood and Earthquake Recovery (TEFER) project, aiming to establish a reliable hydrological and meteorological network for flood forecasting and flood warning in the western part of Turkey (Hakyemez, 2007). The complete system of the TEFER project consists of 3 C-band Doppler radars, 206 automated weather observing sites (AWOS) and 129 hydrometric stations (Keskin, 2007). The AWOS network and radars are operated and maintained by TSMS, while the hydrometric stations are under the responsibility of the State Hydraulic Works. The main role of the AWOS network in the project is to provide atmospheric/agricultural observations on the mesoscale domain and support the flood forecasting/warning process for the area of interest, as well as to support meteorological applications (Keskin, 2008). During 2002–2004, the 129 hydrometric stations as well as the 3 weather radars and 206 AWOS sites (Western Turkey Mesonet, WTM, hereafter) were installed (Keskin and Einfalt, 2008). Since then, the WTM has been operational, providing real-time meteorological and agricultural data to support flood forecasting/warning and also providing data for research, education and the protection of life and property.

2.1. Site selection procedure for the WTM

There are seven climate regions in Turkey (e.g., Türkeş et al., 2002; Ünal et al., 2003). In addition, a varying number of micro-climate clusters exist within each climate region (Sönmez and Kömüşcü, 2011). Considering this fact, the site location selection criterion for the WTM is to have at least one site in each micro-climate region. The accessibility of a possible site location for maintenance purposes is also considered during the selection process. In total, 206 sites are installed in the western part of Turkey, corresponding to approximately one site in every 28 km2. The WTM network consists of four groups with different parameter configurations. The layout of the WTM is provided in Figure 1. Seven of the WTM sites are positioned in the eastern part of Turkey for agricultural purposes.


Figure 1. Geographical distribution of the 206 WTM sites with respect to group types.


2.2. Parameters observed

A total of 13 parameters (10 meteorological and 3 agricultural) are observed in the WTM. Some parameters are observed at several heights. The main distinction between the parameters is the observation period: seven of the parameters are observed and reported every minute, while the observation period is 10 min for the rest. The list of parameters with specific features is provided in Table 1, while sensor specifications are listed in Table 2.

Table 1. Parameters observed in the Western Turkey Mesonet (WTM) with respect to the observation period

| Period | Parameter | Height (m) | ID | Unit |
| 1 min | Air temperature | 2 | TAIR | °C |
|  | Relative humidity | 2 | RHUM | % |
|  | Wind speed | 10 | WSPD | m s−1 |
|  | Wind direction | 10 | WDIR | ° |
|  | Rainfall | 1 | RAIN | mm |
|  | Global solar radiation | 2 | SRGL | W m−2 |
|  | Direct solar radiation | 1.5 | SRDR | W m−2 |
| 10 min | Terrestrial temperature | 0.05 | TTER | °C |
|  | Local air pressure | 1.5 | PRES | mb |
|  | Soil temperature | −0.05, −0.1, −0.2, −0.5, −1 | ST05, ST10, ST20, ST50, ST100 | °C |
|  | Soil moisture | −0.2 | SMOI | VWC(a) |
|  | Open screen temperature | 1, 2 | TOS1, TOS2 | °C |
|  | Open screen humidity | 1, 2 | RHO1, RHO2 | % |

(a) Volumetric water content.
Table 2. Sensor specifications for the parameters observed in the Western Turkey Mesonet (WTM)

| Sensor | Model | Min/max | Sensor accuracy | Data resolution |
| TAIR/TOSs | Rotronic-MP 101A T5 W4W | −40/+60 °C | ±0.3 °C | 0.1 °C |
| RHUM/RHOs | Rotronic-Hygromer C94 | 0/100 | ±1% RH | 0.1% |
| WSPD | Lastem-DNA 002 | 0/60 m s−1 | ±0.1 | 0.1 m s−1 |
| WDIR | Lastem-DNA 001 | 0/360 | ±1% |  |
| RAIN | Lastem-DQA031 | 0/12 mm(a) | ±1% | 0.2 mm |
| SRGL | Kipp and Zonen-CM11 | 0/4000 W m−2 | ±10 W m−2 | 0.1 W m−2 |
| SRDR | Kipp and Zonen-CH1-NIP | 0/4000 W m−2 | ±0.2% | 0.1 W m−2 |
| PRES | Druck-RPT200 | 35/3500 mb | 0.02% | 0.1 mb |
| STs/TTER | Lastem-LSI DLA 400 PT100 | −200/+85 °C | ±0.3 °C | 0.1 °C |
| SMOI | Campbell-CS615 | VWC(b) | ±2% | 0.1 VWC |

(a) Per minute. (b) Varies depending on the soil type.

There are also groups of estimated parameters in the WTM that are generated from the original observations taken at either 1 or 10 min periods. For instance, the diffuse solar radiation parameter is estimated from global and direct solar radiation observations and reported every minute. Average wind and total solar radiation, derived from the values reported every minute, are also reported as estimated parameters with 10 min periods. Other estimated types are the daily parameters with their daily average, minimum and/or maximum values. Parameter configurations with respect to the group types are provided in Table 3.

Table 3. Parameter configuration in the Western Turkey Mesonet (WTM) with respect to the group type

| Parameter | Group 1 | Group 2 | Group 3 | Group 4 |
| Air temperature | X | X | X | X |
| Relative humidity | X | X | X | X |
| Wind speed/direction | X | X | X | X |
| Rainfall | X | X | X | X |
| Local air pressure | X | X | X |  |
| Soil temperatures | X | X | X |  |
| Soil moisture | X | X | X |  |
| Terrestrial temperature | X | X |  |  |
| Global/direct solar radiation | X |  |  |  |
| Open screen temperatures | X |  |  |  |
| Open screen humidities | X |  |  |  |
| Total site number | 20 | 156 | 26 | 4 |

2.3. Communication

The Telecommunication Division at the headquarters of TSMS in Ankara is in charge of data transfer from the 206 WTM sites. The data are backed up in the data logger and transmitted to the headquarters in real-time. This is performed by means of one of the following communication technologies: VSAT (Very Small Aperture Terminal) using TURKSAT 1C Satellite, GPRS (General Packet Radio Services) or ADSL (Asymmetric Digital Subscriber Line), whichever is available at the site. Data transfer is synchronized with the observation period of the parameter so that data from each observation are transferred to TSMS either every 1 or 10 min. Within TSMS, the data are then transferred to the Information Technology and Statistic Division for archival purposes.

3. Automated quality control tests


The high volume of WTM data requires a comprehensive QC system in order to provide the necessary information about data quality. The automated QC test run is the first step of the operational QC system at TSMS. A variety of QC tests are run to provide distinct information about the confidence level of the data quality, to be used in the second phase, the so-called manual QC checks. Owing to similarities in network configuration, the QC tests applied for the Oklahoma Mesonet (Shafer et al., 2000) and West Texas Mesonet (Schroeder et al., 2005) are taken into account, with specific modifications explained in this section. The range, step, persistence, like-instrument and spatial tests are implemented separately as part of the automated QC test run. Rather than altering the original data, one of the quality flag types provided in Table 4 is assigned to indicate the confidence level with respect to the test applied.

Table 4. Flag types assigned in quality control tests

| Flag code | Flag | Description |
| N | Not run | The test has not been run yet |
| G | Good | The observation is good enough to pass the test |
| S | Suspicious | There is concern about the accuracy of the observation |
| B | Bad | There is significant evidence that the observation is unstable |
| NA | Not applied | The test is not applied due to an unmet requirement |

Three main confidence levels are assigned to the data quality: ‘good’, ‘suspicious’ and ‘bad’. The flag ‘good’ refers to data that have successfully passed the predefined QC test. The ‘suspicious’ flag indicates questionable data, i.e. that there is concern about the data quality. The ‘bad’ flag points out data that are unstable with respect to the predefined QC test criteria. In the first step of the QC procedure, the QC flag of the observation is initialized for the considered test as ‘Not run’. Depending on the test result, the confidence level of the observation is then flagged as ‘good’, ‘suspicious’ or ‘bad’. If the test cannot be performed for some reason, such as system problems, the QC flag is set to ‘Not applied’. Each test is run separately for the same observation and a separate flag from each test is assigned accordingly. The group of quality flags for the same observation aims to provide as much independent information as possible in order to support the final decision about the data quality.
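As an illustration, the per-test flag bookkeeping described above can be read as one flag per test attached to each observation, initialized to ‘Not run’. The following minimal Python sketch shows that reading; the structure and names are hypothetical and do not reflect the operational TSMS implementation.

```python
# Minimal sketch of per-test flag bookkeeping (hypothetical names,
# not the operational TSMS code).
TESTS = ("range", "step", "persistence", "like_instrument", "spatial")

def new_flag_record():
    """One flag per QC test, initialised to 'N' (Not run); see Table 4."""
    return {test: "N" for test in TESTS}

record = new_flag_record()
record["range"] = "G"  # e.g. the observation passed the range test
```

Keeping an independent flag per test, rather than one combined flag, preserves the separate evidence each test contributes to the final manual decision.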

Another distinction between the QC tests defined for the WTM and those of the Oklahoma and West Texas Mesonets is the determination of the QC test thresholds. The QC tests for both of those mesonets use static thresholds for the entire network (Shafer et al., 2000; Schroeder et al., 2005), and possible temporal variations of the thresholds are not taken into account. The static threshold approach is very coarse for the western Turkey territory, where a varying number of microclimate regions exist (Sönmez and Kömüşcü, 2011). For instance, the minimum temperature in the province of Konya (in the central inner part of Turkey) for January is − 28.2 °C, while it is − 1.6 °C for Finike (in the southwest part of Turkey) for the same month. In addition, the monthly variation of the same parameter at each site is considerably high. QC tests with static thresholds would therefore be satisfactory for some sites but would perform poorly for most. As such, each of the automated QC tests applied for the WTM uses thresholds that vary not only monthly but also from site to site. The details of the QC tests are described as follows.

3.1. Range test

The range test is used to check whether the observation falls within an acceptable range, where the upper and lower limits correspond to the climatological records and/or instrumentation limitations. The test control for parameter k and the corresponding flag information at time step t are provided in Table 5. Any observation greater/smaller than the upper/lower limit is questionable, but does not necessarily have to be flagged as ‘bad’, since the same observation might represent a new climatological record. For this reason, a Delta variable is introduced in the controls (Table 5) in order to define a ‘suspicious’ flag zone between the ‘good’ and ‘bad’ flag types. The − Delta and + Delta amounts used in the test are provided for each parameter in Table 6. The same Delta amounts are used for every month and every station. Different − Delta and + Delta amounts are assigned for some parameters because of specific constraints. For instance, the minimum wind speed at a site is zero, so the − Delta amount for this parameter is set to zero while the + Delta amount is 4.0 m s−1.

Table 5. Control statements for the range test and corresponding flag types

| Control | Flag |
| If LimitLower ≤ Obs(k,t) ≤ LimitUpper then | Obs(k,t): Good |
| If LimitUpper < Obs(k,t) ≤ LimitUpper + Delta or LimitLower − Delta ≤ Obs(k,t) < LimitLower then | Obs(k,t): Suspicious |
| If Obs(k,t) > LimitUpper + Delta or Obs(k,t) < LimitLower − Delta then | Obs(k,t): Bad |
Table 6. The − Delta and + Delta amounts used in the range test. See Table 1 for the list of acronyms

| Parameter | −Delta | +Delta |
| TAIR | −3.5 | 3.5 |
| RHUM | −10.0 | 10.0 |
| WSPD | 0.0 | 4.0 |
| WDIR | 0.0 | 0.0 |
| RAIN | 0.0 | 1.0 |
| SRGL | 0.0 | 200 |
| SRDR | 0.0 | 200 |
| TTER | −5.0 | 5.0 |
| PRES | −5.0 | 5.0 |
| ST05 | −5.0 | 5.0 |
| ST10 | −3.5 | 3.5 |
| ST20 | −3.0 | 3.0 |
| ST50 | −2.0 | 2.0 |
| ST100 | −1.0 | 1.0 |
| SMOI | −5.0 | 20.0 |
| TOSs | −3.5 | 3.5 |
| RHOs | −8.0 | 8.0 |

The upper and lower limits for a site are mostly obtained by using observations from the climate station located at the same place where the WTM site is deployed. If no climate station existed beforehand, then any climate station nearby (<5 km) is taken into account and limits from that station are assigned, with elevation corrections applied whenever needed. In the absence of a reasonable nearby station, 6–8 years of WTM site data are used to assign the upper and lower limits. The determined monthly upper and lower limits of the air temperature parameter for the Finike and Konya provinces are provided in Table 7. These illustrate the necessity of using thresholds varying not only from site to site but also from month to month for the range and other QC tests.

Table 7. Monthly variation of the upper and lower limits of the air temperature (°C) parameter for the Finike and Konya provinces used in the range test

| Month | Finike LimitLower | Finike LimitUpper | Konya LimitLower | Konya LimitUpper |
| 1 | −1.6 | 25.3 | −28.2 | 17.6 |
| 2 | −2.2 | 24.3 | −26.5 | 23.8 |
| 3 | 1.0 | 28.0 | −16.4 | 28.9 |
| 4 | 3.6 | 34.6 | −8.6 | 34.6 |
| 5 | 6.9 | 38.1 | −1.2 | 34.6 |
| 6 | 10.6 | 41.0 | 1.8 | 36.7 |
| 7 | 13.8 | 43.9 | 6.0 | 40.6 |
| 8 | 14.1 | 42.4 | 5.3 | 37.8 |
| 9 | 11.3 | 40.8 | −3.0 | 37.2 |
| 10 | 6.0 | 38.6 | −11.0 | 31.6 |
| 11 | 2.8 | 31.6 | −20.0 | 27.0 |
| 12 | −0.2 | 24.4 | −26.0 | 21.8 |
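The range-test controls can be sketched in a few lines of code. The following Python fragment is an illustrative reading of the logic in Table 5, not the operational TSMS code; the numeric values in the example are the January Finike air-temperature limits (Table 7) with the TAIR Delta amounts (Table 6).

```python
def range_test(obs, limit_lower, limit_upper, delta_neg, delta_pos):
    """Flag one observation against monthly, site-specific limits (Table 5)."""
    if limit_lower <= obs <= limit_upper:
        return "G"  # good: within the climatological/instrumental range
    if limit_upper < obs <= limit_upper + delta_pos:
        return "S"  # suspicious: above the limit, possibly a new record
    if limit_lower - abs(delta_neg) <= obs < limit_lower:
        return "S"  # suspicious: below the limit, possibly a new record
    return "B"      # bad: outside the limits by more than Delta

# January Finike air temperature: limits -1.6/25.3 degC, Delta -3.5/+3.5
flag = range_test(26.0, -1.6, 25.3, -3.5, 3.5)  # -> "S"
```

An observation of 26.0 °C exceeds the January record of 25.3 °C but lies within the +Delta zone, so it is flagged ‘suspicious’ rather than ‘bad’ and left for the manual check.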

3.2. Step test

The step test controls the change over consecutive observations considering that the data are taken at either 1 or 10 min periods, depending on the parameter type. The control procedure for the parameter k in time step t is provided in Table 8 with the corresponding flag information. The Delta parameter is also included in the control of this test due to the possibility of a new record case. The Delta amounts are provided in Table 9 for the parameters used in the step test.

Table 8. Control statements for the step test and corresponding flag types

| Control | Flag |
| If \|Obs(k,t) − Obs(k,t−1)\| ≥ Difmax + Delta(k) then | Obs(k,t) and Obs(k,t−1): Bad |
| If Difmax < \|Obs(k,t) − Obs(k,t−1)\| < Difmax + Delta(k) then | Obs(k,t) and Obs(k,t−1): Suspicious |
| If \|Obs(k,t) − Obs(k,t−1)\| ≤ Difmax then | Obs(k,t) and Obs(k,t−1): Good |
Table 9. The Delta amounts used in the step test. See Table 1 for the list of acronyms

| Parameter | Delta |
| TAIR | 0.2 |
| RHUM | 5.0 |
| WSPD | 2.0 |
| WDIR |  |
| RAIN | 1.0 |
| SRGL | 40.0 |
| SRDR | 30.0 |
| TTER | 1.5 |
| PRES | 0.5 |
| ST05 | 1.0 |
| ST10 | 0.3 |
| ST20 | 0.2 |
| ST50/ST100 | 0.1 |
| SMOI | 5.0 |
| TOSs | 1.0 |
| RHOs | 10.0 |
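The step-test controls of Table 8 can be sketched as follows. This Python fragment is illustrative only; the Difmax value of 0.5 °C in the example is a hypothetical stand-in for the monthly, site-specific amounts, while the Delta of 0.2 °C is the TAIR amount from Table 9. Both members of the consecutive pair receive the same flag.

```python
def step_test(obs_t, obs_prev, dif_max, delta):
    """Flag a consecutive observation pair (Table 8); both get the same flag."""
    step = abs(obs_t - obs_prev)
    if step <= dif_max:
        return "G", "G"  # good: within the usual step size
    if step < dif_max + delta:
        return "S", "S"  # suspicious: between Difmax and Difmax + Delta
    return "B", "B"      # bad: a step of at least Difmax + Delta

# Hypothetical Difmax of 0.5 degC with the TAIR Delta of 0.2 degC (Table 9)
flags = step_test(10.0, 10.6, 0.5, 0.2)  # step of 0.6 -> ('S', 'S')
```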

The Difmax parameter in the control statement (Table 8) is estimated for each parameter, using 7–9 years of archived data for the corresponding site on a monthly basis. For the site considered, the cumulated histogram of the differences (CHoD) methodology is employed, following the steps described below:

  1. the dataset of the month of interest is extracted from the archive (7–9 years of data for the same month);

  2. the range test is run for the extracted dataset;

  3. considering only the observations with quality flag of ‘good’ from the range test, the absolute differences of the consecutive observation pairs are determined;

  4. the histogram of counts versus the bin amounts is obtained using the absolute differences;

  5. the cumulative histogram is estimated with respect to the absolute difference;

  6. the absolute difference corresponding to the 99.9% level in the cumulative histogram is assigned as the Difmax for the considered month at the corresponding site.

In order to avoid any misrepresentation, only site observations with a quality flag of ‘good’ from the range test are used in the Difmax determination. The range test alone is, however, not sufficient to exclude all the data with poor quality. For this reason, rather than using the maximum difference amount (100%), a 99.9% threshold for the cumulative histogram is used to prevent such a possibility.
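The CHoD steps above amount to taking a high quantile of the absolute consecutive differences among range-test-passed observations. The following Python sketch collapses the histogram and cumulative-histogram steps into a direct 99.9% quantile, which is equivalent up to binning; it is an illustrative reading, not the operational code.

```python
import numpy as np

def difmax_from_chod(monthly_obs, good_mask, level=0.999):
    """Estimate Difmax from 7-9 years of one month's data at one site.

    monthly_obs : sequence of observations for the month of interest
    good_mask   : True where the range test flagged the observation 'good'
    """
    obs = np.asarray(monthly_obs, dtype=float)
    good = np.asarray(good_mask, dtype=bool)
    # keep only consecutive pairs in which both observations are 'good'
    pair_ok = good[1:] & good[:-1]
    abs_dif = np.abs(np.diff(obs))[pair_ok]
    # the 99.9% point of the cumulative histogram of absolute differences
    return float(np.quantile(abs_dif, level))
```

Using the 99.9% level rather than the maximum, as the text notes, keeps a residual bad pair that slipped past the range test from inflating Difmax.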

3.3. Persistence test

The repetition of consecutive observations over a particular period is common. However, the reasonable repetition period is sometimes exceeded because of a damaged instrument or observations stuck at a particular reading. For this reason, the persistence test controls groups of repeated observations in the temporal domain, without considering the repeated value itself. The test control procedure for parameter k in the time domain is provided in Table 10, together with the corresponding flag information. Depending on the test control results, every observation in the repeating window length (the number of consecutive repeating observations) is assigned the corresponding flag type.

Table 10. Control statements for the persistence test and corresponding flag types

| Control | Flag |
| If Δ(k,t) ≤ Δmax then | Obs(k,i): Good (i = 1 to Δ) |
| If Δ(k,t) > Δmax then | Obs(k,i): Bad (i = 1 to Δ) |

Δk,t is the window length of the repeating observation backward and/or forward in the time domain, while Δmax is the maximum reasonable repeating window length for the same parameter. The Δmax estimation is performed for each site on a monthly basis, using the CHoD methodology in the same manner as above, but with the repeating window length variable instead. The results show that the Δmax parameter variation from site to site is not significant, while the monthly variation differs from parameter to parameter. For this reason, the monthly Δmax amounts provided in Table 11 are used for all sites. It has been noted that the soil moisture parameter has a tendency to stay constant for a long time (sometimes for days), so the persistence test is not applied to this parameter.

Table 11. Monthly variation of the Δmax parameter used in the persistence test. See Table 1 for the list of acronyms

| Parameter | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| TAIR | 300 | 300 | 300 | 300 | 300 | 180 | 180 | 180 | 300 | 300 | 300 | 300 |
| RHUM | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 | 1440 |
| WDIR | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 | 120 |
| WSPD | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| RAIN | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| SRGL | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| SRDR | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
| TTER | 100 | 100 | 30 | 30 | 30 | 15 | 15 | 15 | 30 | 30 | 30 | 100 |
| PRES | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24 |
| ST05 | 100 | 100 | 100 | 36 | 36 | 36 | 36 | 36 | 36 | 100 | 100 | 100 |
| ST10 | 200 | 200 | 200 | 50 | 50 | 50 | 50 | 50 | 50 | 200 | 200 | 200 |
| ST20 | 250 | 250 | 250 | 80 | 80 | 80 | 80 | 80 | 80 | 250 | 250 | 250 |
| ST50 | 300 | 300 | 300 | 150 | 150 | 150 | 150 | 150 | 150 | 300 | 300 | 300 |
| ST100 | 400 | 400 | 400 | 270 | 270 | 270 | 270 | 270 | 270 | 400 | 400 | 400 |
| TOSs | 36 | 36 | 36 | 36 | 36 | 36 | 36 | 36 | 36 | 36 | 36 | 36 |
| RHOs | 144 | 144 | 144 | 144 | 144 | 144 | 144 | 144 | 144 | 144 | 144 | 144 |
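The persistence control of Table 10 can be sketched as a run-length scan: every observation inside a run of identical values longer than Δmax is flagged ‘bad’, while observations in shorter runs stay ‘good’. This Python fragment is an illustrative reading, not the operational code.

```python
def persistence_flags(values, delta_max):
    """Flag runs of identical consecutive observations (Table 10)."""
    flags = []
    i, n = 0, len(values)
    while i < n:
        j = i
        while j + 1 < n and values[j + 1] == values[i]:
            j += 1                      # extend the run of repeated values
        run_len = j - i + 1
        flag = "B" if run_len > delta_max else "G"
        flags.extend([flag] * run_len)  # every member of the run gets the flag
        i = j + 1
    return flags

# A run of three identical readings with delta_max = 2 is flagged bad
persistence_flags([5.0, 5.0, 5.0, 7.0], 2)  # -> ['B', 'B', 'B', 'G']
```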

3.4. Like-instrument test

Some of the observed parameters are alike in the sense that they refer to similar observations at varying heights, such as the soil temperature observations at 5 and 10 cm. The like-instrument test is used to compare a pair of similar parameter observations at the same site. This test is applied for the parameter pairs TAIR-TOS1, TAIR-TOS2, TOS2-TOS1, RHUM-RHO1, RHUM-RHO2, RHO2-RHO1, ST05-ST10, ST10-ST20, ST20-ST50 and ST50-ST100. The control for this test at time step t is provided in Table 12 for the parameter pair u and v. In the case of a ‘bad’ flag, parameter u, v, or both might be the reason; however, a direct attribution is not possible, so both parameters are flagged as ‘bad’. The Difmax parameter in the control statement is estimated by using the CHoD methodology for each parameter pair on a monthly basis for each site.

Table 12. Control statements for the like-instrument test and corresponding flag types

| Control | Flag |
| If \|Obs(u,t) − Obs(v,t)\| ≤ Difmax then | Obs(u,t) and Obs(v,t): Good |
| If \|Obs(u,t) − Obs(v,t)\| > Difmax then | Obs(u,t) and Obs(v,t): Bad |
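A minimal Python reading of the Table 12 control follows; it is illustrative only, and the Difmax of 1.0 °C in the example is a hypothetical stand-in for the CHoD-derived monthly pair threshold.

```python
def like_instrument_test(obs_u, obs_v, dif_max):
    """Compare a pair of like parameters at the same site (Table 12).

    Both members receive the same flag, since a 'bad' difference cannot
    be attributed to one parameter alone.
    """
    flag = "G" if abs(obs_u - obs_v) <= dif_max else "B"
    return flag, flag

# e.g. TAIR vs TOS1 with a hypothetical Difmax of 1.0 degC
like_instrument_test(12.0, 12.4, 1.0)  # -> ('G', 'G')
```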

3.5. Spatial test

The spatial test controls the validity of an observation by comparing it with observations from neighbouring sites. Barnes' (1964) objective analysis is used for the QC spatial test applications in the Oklahoma and West Texas Mesonets (Shafer et al., 2000; Schroeder et al., 2005). This approach is, however, not useful for the western Turkey area because of the complex topography and the resultant strong anisotropy. Instead, the spatial test for the WTM is performed by comparing the observation at a site with the observations from the three nearest stations located in the same microclimate environment. The control procedure performed for parameter k at time step t for the central station is defined in Table 13, where x, y and z refer to the locations of the three nearby stations assigned. The variables DifXmin and DifXmax refer to the possible minimum and maximum difference between the site considered and the neighbouring site X, obtained by using the CHoD methodology. For each site, the DifXmin and DifXmax amounts are determined on a monthly basis. Currently, the spatial test for the WTM is performed only for the air temperature and pressure parameters.

Table 13. Control statements for the spatial test and corresponding flag types

| Control | Flag |
| If Dif1min ≤ \|Obs(k,t) − Obs(k,t,x)\| ≤ Dif1max or Dif2min ≤ \|Obs(k,t) − Obs(k,t,y)\| ≤ Dif2max or Dif3min ≤ \|Obs(k,t) − Obs(k,t,z)\| ≤ Dif3max then | Obs(k,t): Good |
| else | Obs(k,t): Bad |
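The Table 13 control reduces to checking whether the absolute difference from at least one of the three assigned neighbours falls within its [DifXmin, DifXmax] band. A Python sketch under that reading (illustrative; the bounds of (0.0, 2.0) °C are hypothetical stand-ins for the CHoD-derived monthly amounts):

```python
def spatial_test(obs, neighbour_obs, bounds):
    """Spatial check against the three nearest same-microclimate sites (Table 13).

    neighbour_obs : observations at sites x, y, z for the same time step
    bounds        : (DifXmin, DifXmax) pair per neighbour
    """
    for nb, (dif_min, dif_max) in zip(neighbour_obs, bounds):
        if dif_min <= abs(obs - nb) <= dif_max:
            return "G"  # consistent with at least one neighbour
    return "B"          # inconsistent with all three neighbours

# Hypothetical bounds of (0.0, 2.0) degC for each neighbour
spatial_test(10.0, [10.5, 20.0, 20.0], [(0.0, 2.0)] * 3)  # -> 'G'
```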

4. Case study


Automated QC test results are provided for December 2010 in this part of the study. The range, step, persistence, like-instrument and spatial tests are performed separately for the whole WTM dataset in this period and the corresponding flag types are assigned for each test depending on the controls described. The flag statistics with respect to the tests applied are introduced in Table 14 for some of the selected parameters.

Table 14. Quality control test results covering the December 2010 period for selected parameters (N/A indicates that either the corresponding flag type or the test itself is not valid for the parameter). See Table 1 for the list of acronyms

| Param. | # of obs. | Flag | Range test | Step test | Persistence test | Like-instrument test | Spatial test |
| TAIR | 9157629 | G | 9091491 | 9157548 | 9157306 | N/A | 7442752 |
|  |  | S | 63463 | 11 | N/A | N/A | N/A |
|  |  | B | 2675 | 70 | 323 | N/A | 1052 |
|  |  | NA | 0 | 0 | 0 | N/A | 1713825 |
| RHUM | 9157629 | G | 9141610 | 9157576 | 9157629 | N/A | N/A |
|  |  | S | 15948 | 35 | N/A | N/A | N/A |
|  |  | B | 71 | 18 | 0 | N/A | N/A |
|  |  | NA | 0 | 0 | 0 | N/A | N/A |
| WSPD | 9157629 | G | 9153729 | 9156192 | 9152749 | N/A | N/A |
|  |  | S | 3062 | 651 | N/A | N/A | N/A |
|  |  | B | 838 | 786 | 4880 | N/A | N/A |
|  |  | NA | 0 | 0 | 0 | N/A | N/A |
| RAIN | 9157629 | G | 9157507 | 9157624 | 9108360 | N/A | N/A |
|  |  | S | 107 | 1 | N/A | N/A | N/A |
|  |  | B | 15 | 4 | 49269 | N/A | N/A |
|  |  | NA | 0 | 0 | 0 | N/A | N/A |
| SRDR | 842210 | G | 841625 | 842187 | 838966 | N/A | N/A |
|  |  | S | 218 | 1 | N/A | N/A | N/A |
|  |  | B | 367 | 22 | 3244 | N/A | N/A |
|  |  | NA | 0 | 0 | 0 | N/A | N/A |
| TTER | 750713 | G | 749487 | 747920 | 748908 | N/A | N/A |
|  |  | S | 1190 | 2325 | N/A | N/A | N/A |
|  |  | B | 36 | 468 | 1805 | N/A | N/A |
|  |  | NA | 0 | 0 | 0 | N/A | N/A |
| PRES | 751058 | G | 748048 | 749508 | 751058 | N/A | 697217 |
|  |  | S | 2813 | 220 | N/A | N/A | N/A |
|  |  | B | 197 | 1330 | 0 | N/A | 503 |
|  |  | NA | 0 | 0 | 0 | N/A | 53338 |
| ST05 | 746271 | G | 737878 | 746270 | 745677 | 746271 | N/A |
|  |  | S | 8370 | 1 | N/A | N/A | N/A |
|  |  | B | 23 | 0 | 594 | 0 | N/A |
|  |  | NA | 0 | 0 | 0 | 0 | N/A |
| TOS1 | 84211 | G | 83765 | 84040 | 84211 | 82469 | N/A |
|  |  | S | 153 | 33 | N/A | N/A | N/A |
|  |  | B | 293 | 138 | 0 | 1742 | N/A |
|  |  | NA | 0 | 0 | 0 | 0 | N/A |
| RHO1 | 84211 | G | 84005 | 84211 | 83598 | 84211 | N/A |
|  |  | S | 206 | 0 | N/A | N/A | N/A |
|  |  | B | 0 | 0 | 613 | 0 | N/A |
|  |  | NA | 0 | 0 | 0 | 0 | N/A |

The total number of observations for the parameters with a 1 min observation period for December 2010 exceeds 9 million (Table 14). The observation totals are lower for the other parameters, depending on the number of sites (see Table 3). Each test was successfully performed for each parameter and observations were flagged as ‘good’, ‘suspicious’ or ‘bad’ accordingly. Only some of the observations for the TAIR and PRES parameters received the ‘Not applied’ flag from the spatial test, owing to the absence of observations from one of the three neighbouring sites. Otherwise, varying numbers of ‘good’, ‘suspicious’ and ‘bad’ flags are observed with respect to the QC tests applied for each parameter, with the ‘good’ flag type being dominant.

5. Manual QC checks


The QC system for the WTM is operated at TSMS headquarters on a monthly basis. The QC procedure starts in the first week of each month with an investigation of the data transfer statistics for the previous month. In the case of a detected gap in the dataset, which may be caused by communication problems, site visits are performed to obtain the missing observations by downloading the backup data from the data logger at the site. The automated QC tests are applied in the next step for the whole WTM dataset and the corresponding QC flags are assigned with respect to each test control.

The last phase of the QC system at TSMS consists of manual QC checks. A control team (comprising meteorologists, climatologists and communication engineers) monitors every WTM observation assigned a quality flag of ‘suspicious’ or ‘bad’. Whether the observation is reasonable or unrealistic is investigated in detail by performing cross-checks and/or manual controls containing spatial and temporal consistency tests. If the final decision is that the observation is unrealistic, then a QC flag of ‘suspicious’ is changed to ‘bad’. Otherwise, the observation is concluded to be reasonable, the QC flag is changed to ‘good’, and the QC test threshold for the corresponding site is updated for future use. The manual QC checks thus ensure that observations with ‘suspicious’ or ‘bad’ quality flags are examined and the flag types updated, as well as the QC test thresholds wherever needed.

6. Summary and conclusions


The Western Turkey Mesonet (WTM) is now in operational use in western Turkey, and it provides agricultural and meteorological observations. A high volume dataset is available with 1 and 10 min observation periods for a variety of end users. The WTM dataset is not only used to support flood forecasting/warning and weather forecasting in the western part of the country, but also to provide valuable opportunities in education and research for the community.

The observations from 206 sites of the WTM are transferred to the Turkish State Meteorological Service (TSMS) data centre by means of VSAT (Very Small Aperture Terminal), GPRS (General Packet Radio Service) or ADSL (Asymmetric Digital Subscriber Line) communication technologies. The quality control (QC) system at TSMS is performed on a monthly basis by employing automated QC tests and manual QC checks. The automated QC tests (range, step, persistence, like-instrument and spatial) are applied to rate the confidence level of the observation by using ‘good’, ‘suspicious’ or ‘bad’ flag types. The observations with ‘suspicious’ or ‘bad’ flag types are taken into account and cross controls are carried out to see if there is significant evidence that the observation is indeed ‘suspicious’ or ‘bad’. These flag types are updated accordingly with ‘good’, or ‘bad’, depending on the control results. The QC test thresholds for the sites are also updated if any ‘suspicious’ or ‘bad’ flag type is updated as ‘good’.

WTM observations are available nationwide to a variety of users through an interface at TSMS (http://www.tumas.dmi.gov.tr). WTM data requests for educational and research purposes are fulfilled free of charge, while charges apply for commercial requests. The interface allows extensive queries using variables such as site, parameter and period. The QC flags are not accessible through the interface at this point; instead, they are used internally when delivering the requested data. Since January 2011, only observations with the ‘good’ quality flag from all QC tests have been filtered and released to end users.
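The release filter described above reduces to requiring a ‘good’ flag from every test. A minimal sketch, assuming a hypothetical per-test flag layout that is not the WTM schema:

```python
# Hypothetical sketch of the release filter: deliver an observation only
# if every automated QC test flagged it 'good'.
observations = [
    {"value": 21.4, "flags": {"range": "good", "step": "good", "spatial": "good"}},
    {"value": 55.0, "flags": {"range": "bad", "step": "good", "spatial": "good"}},
]

def releasable(obs):
    # all() over the per-test flags enforces the 'good from all QC tests' rule
    return all(flag == "good" for flag in obs["flags"].values())

released = [o for o in observations if releasable(o)]  # keeps only the first record
```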

Acknowledgements


The author would like to thank Savaş Köksal, Osman Eskioǧlu and Yusuf Çalık for their kind and valuable support during the establishment of the operational QC system at the Turkish State Meteorological Service.
