Submitted to: Agricultural and Forest Meteorology Conference Proceedings
Publication Type: Proceedings
Publication Acceptance Date: 8/18/2000
Publication Date: N/A
Technical Abstract: Data quality control for observations from automated weather stations (AWS) is a critical part of routine operation. This paper reviews the experience of screening nearly a decade of record from several stations using automated and graphical procedures. The routines can be based on a nearby long-term climatic record, physical theory, a second nearby station, and/or instrument specifications. The automated procedures are mostly dynamic adaptations of standard quality control methods for hourly and daily data. At a minimum, range limits and rate-of-change limits, both of which can be presented graphically, are possible for most meteorological data. Recording the maintenance history and the values from regular independent sensor checks are highly recommended practices. Questionable vapor pressure values at or near saturation are the most common problem. An hourly solar radiation rule applied to the record at one location revealed a siting problem. Climate-based rules are best developed from quality databases such as SAMSON, and are preferable to sensor-based rules. Bias problems were found with solar radiation data from two experiment stations. Combining automated data processing rules with exploratory graphics works better than using either procedure alone, and paneling graphs of related data is generally very insightful. Sensor drift detection requires an independent and regular record, from either a nearby station or independent instrument observations recorded at the site in question. Data processing rules are only one part of assuring and assessing the quality of AWS data, but their routine use will result in a more reliable record and will help with data exploration, further analysis, and modeling.
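The range-limit, rate-of-change, and vapor-pressure checks mentioned in the abstract can be sketched as below. This is a minimal illustration, not the paper's actual procedures: the thresholds, function names, and the use of the Tetens approximation for saturation vapor pressure are all assumptions for the example.

```python
import math

# Hypothetical limits for illustration only; real limits would come from
# a climatic record (e.g., SAMSON) or instrument specifications.
TEMP_RANGE_C = (-40.0, 50.0)   # plausible air-temperature bounds
TEMP_MAX_STEP_C = 10.0         # assumed maximum hourly change

def range_check(value, lo, hi):
    """Pass if the value lies within fixed physical/climatic limits."""
    return lo <= value <= hi

def rate_of_change_check(prev, curr, max_step):
    """Pass if the hourly step does not exceed the allowed rate of change."""
    return abs(curr - prev) <= max_step

def saturation_vapor_pressure_kpa(temp_c):
    """Tetens approximation for saturation vapor pressure over water (kPa)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vapor_pressure_check(e_kpa, temp_c, tol=0.01):
    """Pass if vapor pressure does not exceed saturation (small tolerance)."""
    return e_kpa <= saturation_vapor_pressure_kpa(temp_c) + tol

# Screen a synthetic hourly temperature series with both checks.
temps = [12.0, 12.5, 13.1, 25.0, 13.4]
flags = []
for i, t in enumerate(temps):
    ok = range_check(t, *TEMP_RANGE_C)
    if i > 0:
        ok = ok and rate_of_change_check(temps[i - 1], t, TEMP_MAX_STEP_C)
    flags.append(ok)
print(flags)  # the spike to 25.0 fails the rate-of-change test on both sides
```

Note that a single spike flags two consecutive hours (the jump up and the jump back down), which is one reason the paper pairs automated rules with exploratory graphics: a human reviewer can quickly see which of the flagged values is the actual outlier.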