
Title: Inferring random component distributions from environmental measurements for quality assurance

Authors:
Sadler, Edward
Sudduth, Kenneth (Ken)
Drummond, Scott
Thompson, Allen - University of Missouri
Chen, Jiaxun - University of Missouri
Nash, Patrick

Submitted to: Agricultural and Forest Meteorology
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 2/18/2017
Publication Date: 3/6/2017
Citation: Sadler, E.J., Sudduth, K.A., Drummond, S.T., Thompson, A.L., Chen, J., Nash, P.R. 2017. Inferring random component distributions from environmental measurements for quality assurance. Agricultural and Forest Meteorology. 237:362-370. doi: 10.1016/j.agrformet.2017.02.021.

Interpretive Summary: Growing awareness of and appreciation for data of all kinds have brought recognition that data need to be accompanied by descriptions of methods, context, and quality, collectively called metadata. The last of these, data quality, has evolved from simple recitation of manufacturer’s specifications to flag indicators or qualitative ratings for each data point, such as good, questionable, or bad. Perhaps more informative would be a confidence interval: instead of a single number, data could be reported as that number plus a statement of accuracy, such as a range within which the measurement is expected to fall, much as polls report a percentage plus or minus a margin of error. However, determining the accuracy or confidence for environmental measurements is not simple. Two new test statistics were developed to help place bounds on random error. One requires data from a single sensor at two consecutive times; the other leverages a redundant sensor at the same time. Both provide an estimate of the random variation that could exist around the measurement. Performance of the tools was demonstrated for representative air temperature data, and a case study of behavior during a measurement failure illustrated how the tools could be built into near-real-time quality assurance programs. Scientists collecting data could benefit from these tools through improved detection of measurement errors, and users of the data could benefit from realistic indicators of data accuracy, even when the specific conditions surrounding the measurement are not known.

Technical Abstract: Environmental measurement programs can add value by providing not just accurate data, but also a measure of that accuracy. While quality assurance (QA) has been recognized as necessary almost since the beginning of automated weather measurement, it has received less attention than the data proper. Most QA systems check data limits and rates of change for gross errors and screen for unchanging values. Others compare data against other locations using spatial tools or examine temporal consistency. Analytical tools are needed that can increase the likelihood of detecting small errors, such as calibration drift or increased variation in a sensor reading. Two such empirical tools that can inform a first-level QA process are described herein. One operates on data from a single sensor, comparing a current and a prior datum; the other leverages additional information from a duplicate sensor and operates only on the current datum. The objectives of this paper are to describe the computational methods, illustrate results with multiple-month datasets representing both nominal and failing sensors, provide some indication of the validity of assumptions made in the derivation, and suggest where in a quality assurance program these methods could be applied. Both tools exploit an instantaneous measurement and the average for the period ending with that measurement to obtain a current estimate of the difference between errors. In the single-sensor tool, the difference is between the current and prior errors; in the dual-sensor tool, the difference is between the errors of the two sensors. Distributions of these values for two months of air temperature data had sample means near zero and sample standard deviations between 0.16 °C and 0.20 °C for the single-sensor tool and between 0.071 °C and 0.075 °C for the dual-sensor tool. The mean of the sample distribution was independent of solar radiation, wind speed, and the range of temperatures encountered during the period. The variation of the test statistic with those drivers suggested that models of variation might be developed to refine the tools. The narrow range of the test statistics, especially for the dual-sensor tool, offers utility in a QA program, and the inter-sensor difference in the dual-sensor tool adds value beyond the single-sensor test. Behavior of the test statistics during a period of sensor failure illustrates how they can be added to common QA tests to refine the process. The test statistics are easily programmed in a first-level QA process; with little additional datalogger programming to obtain both the period average and the ending value, these tools could be added to QA toolkits in many automated weather stations.
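
The paper derives the test statistics formally; the minimal Python sketch below only illustrates the mechanics the abstract describes, assuming each statistic is built from the residual between a period's ending instantaneous reading and that period's average (function and variable names are hypothetical). Differencing consecutive residuals from one sensor, or simultaneous residuals from two collocated sensors, cancels the shared signal component and leaves an estimate of the error difference; the exact formulation in the paper may differ.

    import numpy as np

    def single_sensor_stat(x_end, x_avg):
        # Residual of each period's ending reading against that period's
        # average; differencing consecutive residuals cancels a shared trend
        # and leaves roughly (current error - prior error). Illustrative form,
        # not necessarily the paper's exact statistic.
        r = x_end - x_avg
        return r[1:] - r[:-1]

    def dual_sensor_stat(xa_end, xa_avg, xb_end, xb_avg):
        # The true signal is common to collocated sensors, so differencing
        # their residuals at the same time leaves roughly (error A - error B).
        return (xa_end - xa_avg) - (xb_end - xb_avg)

    def qa_flags(stat, sigma, k=3.0):
        # First-level QA check: flag statistics beyond k standard deviations.
        return np.abs(stat) > k * sigma

    # Synthetic demonstration (all numbers hypothetical).
    rng = np.random.default_rng(1)
    n = 96                                  # 96 quarter-hour periods
    t = np.arange(n) * 0.25                 # time of day, hours
    truth = 15.0 + 8.0 * np.sin(2.0 * np.pi * (t - 9.0) / 24.0)  # diurnal temperature, deg C
    truth_avg = np.concatenate(([truth[0]], 0.5 * (truth[:-1] + truth[1:])))

    def simulate():
        end = truth + rng.normal(0.0, 0.05, n)      # instantaneous readings
        avg = truth_avg + rng.normal(0.0, 0.02, n)  # period averages (averaging shrinks noise)
        return end, avg

    xa_end, xa_avg = simulate()
    xb_end, xb_avg = simulate()
    xb_end[n // 2:] += 0.5                  # sensor B develops a 0.5 deg C offset mid-record

    s = single_sensor_stat(xa_end, xa_avg)                # single-sensor statistic
    d = dual_sensor_stat(xa_end, xa_avg, xb_end, xb_avg)  # dual-sensor statistic
    print(np.flatnonzero(qa_flags(d, sigma=0.075)))       # flags cluster after the offset begins

The last lines mirror the near-real-time use suggested in the abstract: each new period contributes one statistic, which a first-level QA routine compares against a threshold such as three sample standard deviations (here the 0.071-0.075 °C dual-sensor value reported above is used as an assumed sigma).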