
Title: Modifying goodness-of-fit indicators to incorporate both measurement and model uncertainty in model calibration and validation

Authors:
- Harmel, Daren
- Smith, Patricia - Texas A&M University
- Migliaccio, Kati - University of Florida

Submitted to: Transactions of the ASABE
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 2/15/2010
Publication Date: 2/15/2010
Citation: Harmel, R.D., Smith, P.K., Migliaccio, K.W. 2010. Modifying goodness-of-fit indicators to incorporate both measurement and model uncertainty in model calibration and validation. Transactions of the ASABE. 53(1):55-63.

Interpretive Summary: Because uncertainty in measured data and model predictions has numerous practical implications, improved techniques are needed to analyze and understand that uncertainty and to incorporate it into hydrologic and water quality evaluations. In the present study, correction factors were developed to incorporate measurement and model uncertainty into evaluation of model goodness-of-fit (predictive ability). The correction factors, which were developed for pairwise comparisons of measured and predicted values, modify the typical error term calculation to account for both sources of uncertainty. The correction factors were applied with common types and levels of uncertainty (±10% to ±100%) for measured and predicted values from five case studies. The modifications produced minimal changes in goodness-of-fit for case studies with very good and very poor model performance, which is both logical and appropriate: very good model performance should not improve further, and poor model performance should not be judged satisfactory, simply because uncertainty is considered. In contrast, incorporating uncertainty in case studies with fair (moderate) performance produced important changes in goodness-of-fit conclusions. Because uncertainty affects the appropriateness of model performance conclusions, a model evaluation aid based on the uncertainties in measured data and model predictions was developed. This aid suggests that if measured data are highly uncertain, model performance cannot be appropriately judged because the standard of comparison is itself uncertain. In cases with high prediction uncertainty but low measurement uncertainty, model accuracy can still be appropriately evaluated from indicator values. If both data and model uncertainty are low, models can be appropriately evaluated with common goodness-of-fit indicators.

Technical Abstract: Because uncertainty in measured data and model predictions has numerous practical implications, improved techniques are needed to analyze and understand that uncertainty and to incorporate it into hydrologic and water quality evaluations. In the present study, correction factors were developed to incorporate measurement and model uncertainty into evaluation of model goodness-of-fit (predictive ability). The correction factors, which were developed for pairwise comparisons of measured and predicted values, modify the typical error term calculation to account for both sources of uncertainty. The correction factors were applied with common distributions and levels of uncertainty (±10% to ±100%) for measured and predicted values from five case studies. The modifications produced inconsequential changes in goodness-of-fit for case studies with very good and very poor model performance, which is both logical and appropriate: very good model performance should not improve further, and poor model performance should not be judged satisfactory, simply because uncertainty is considered. In contrast, incorporating uncertainty in case studies with fair (moderate) performance produced important changes in goodness-of-fit conclusions. Because uncertainty affects the appropriateness of model performance conclusions, a qualitative model evaluation matrix based on the uncertainties in measured data and model predictions was developed. This matrix suggests that if measured data are highly uncertain, model performance cannot be appropriately judged because the standard of comparison is itself uncertain. In cases with high prediction uncertainty but low measurement uncertainty, model accuracy can still be appropriately evaluated from indicator values. If both data and model uncertainty are low, models can be appropriately evaluated with quantitative goodness-of-fit indicators.
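To make the correction-factor idea concrete, the sketch below illustrates one way a pairwise error term could be shrunk when measurement and prediction uncertainty overlap. This is a minimal illustration, not the paper's exact formulation: the function `corrected_nse`, its overlap-fraction correction, and the symmetric ±fractional uncertainty bounds are all assumptions made for this example. It reduces each error (observed minus predicted) in proportion to the overlap of the two uncertainty intervals, then computes Nash-Sutcliffe efficiency from the corrected errors.

```python
import numpy as np

def corrected_nse(obs, pred, obs_unc, pred_unc):
    """Nash-Sutcliffe efficiency with a hypothetical uncertainty correction.

    obs_unc, pred_unc: fractional uncertainties (e.g. 0.10 for +/-10%)
    applied symmetrically around each measured and predicted value.
    Each pairwise error is shrunk in proportion to the overlap of the
    two uncertainty intervals (a sketch of the general idea only).
    """
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)

    # Symmetric uncertainty bounds around each value.
    obs_lo, obs_hi = obs * (1 - obs_unc), obs * (1 + obs_unc)
    pred_lo, pred_hi = pred * (1 - pred_unc), pred * (1 + pred_unc)

    # Overlap of the two intervals, as a fraction of the narrower interval.
    overlap = np.maximum(0.0, np.minimum(obs_hi, pred_hi) - np.maximum(obs_lo, pred_lo))
    span = np.minimum(obs_hi - obs_lo, pred_hi - pred_lo)
    frac = np.divide(overlap, span, out=np.zeros_like(span), where=span > 0)
    frac = np.clip(frac, 0.0, 1.0)

    # Correction factor: full overlap counts the error as zero;
    # no overlap leaves the error unchanged.
    err = (1.0 - frac) * (obs - pred)

    denom = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - np.sum(err ** 2) / denom
```

With zero uncertainty this reduces to the ordinary NSE; with nonzero uncertainty the corrected errors can only shrink, so the corrected NSE is never lower than the standard one, consistent with the abstract's observation that fair-performing models are affected most while very good or very poor fits change little.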