Submitted to: Catena
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 10/31/1997
Publication Date: N/A
Citation:
Interpretive Summary: Soil erosion models are used by conservation planners, engineers, and land users to estimate soil erosion rates for different combinations of soil type, climate, slope steepness, slope length, and land use. These models have nearly always shown a bias when tested against measured erosion data: they tend to overestimate soil loss when measured erosion rates are low and to underestimate it when measured rates are high. This trend has long been recognized but never explained. Various complex explanations have been suggested, often entailing theoretical discussions of model structure, linearity, and other factors, yet the trend is observed regardless of the type of model or data set used. This paper introduces the hypothesis that the apparent bias in soil erosion models, and in hydrologic models in general, arises simply because the models are deterministic. That is, they produce a single value of predicted erosion for a specific combination of erosion factors, whereas in nature erosion is highly variable: a given set of measurable environmental factors produces a range of erosion rates, not a single value. This study shows that the perceived bias in erosion prediction is a non-intuitive mathematical artifact rather than an inherent problem with soil erosion or hydrologic models per se. This has significant implications for conservation planning and for the use of erosion models.
Technical Abstract: Evaluations of various soil erosion models against large data sets have consistently shown that these models tend to over-predict soil erosion for small measured values and under-predict soil erosion for larger measured values. This trend appears to be consistent regardless of whether the soil erosion value of interest is for individual storms, annual totals, or average annual soil losses, and regardless of whether the model is empirical or physically based. The hypothesis presented herein is that this phenomenon is not necessarily associated with bias in model predictions as a function of treatment, but rather with limitations in representing the random component of the measured data within treatments (i.e., between replicates) with a deterministic model. A simple example is presented which shows how even a "perfect" deterministic soil erosion model exhibits bias relative to small and large measured erosion rates. The concept is further tested and verified on a set of 3007 measured soil erosion data pairs from storms on natural rainfall and runoff plots using the best possible, unbiased, real-world model, i.e., the physical model represented by replicated plots. The results of this study indicate that the commonly observed bias in erosion prediction models, namely over-prediction of small and under-prediction of large measured erosion rates on individual data points, is normal and expected if the model is accurately predicting erosion rates as a function of environmental conditions, i.e., treatments.
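The statistical artifact described above can be illustrated with a small simulation. The sketch below is hypothetical and not drawn from the paper's data set: it assumes a set of "treatments" with known true mean erosion rates, replicated plots whose measurements scatter randomly around those means, and a "perfect" deterministic model that always predicts the true treatment mean. Even this ideal model appears to over-predict the smallest measured values and under-predict the largest ones.

```python
import random

random.seed(42)

# Hypothetical setup (not the paper's data): 200 treatments, each with a
# true mean erosion rate, and 5 replicate plots per treatment whose
# measured values vary randomly around that mean.
treatment_means = [random.uniform(1.0, 50.0) for _ in range(200)]

pairs = []  # (measured, predicted) data pairs
for mean in treatment_means:
    for _ in range(5):
        # Natural within-treatment variability between replicate plots.
        measured = max(0.0, random.gauss(mean, 0.5 * mean))
        # A "perfect" deterministic model returns the true treatment mean.
        predicted = mean
        pairs.append((measured, predicted))

# Sort all pairs by measured value and compare average prediction error
# (predicted - measured) for the smaller half versus the larger half.
pairs.sort(key=lambda p: p[0])
half = len(pairs) // 2
low_bias = sum(pred - meas for meas, pred in pairs[:half]) / half
high_bias = sum(pred - meas for meas, pred in pairs[half:]) / (len(pairs) - half)

print(f"mean error, small measured values: {low_bias:+.2f}")   # positive: over-prediction
print(f"mean error, large measured values: {high_bias:+.2f}")  # negative: under-prediction
```

The sign pattern is the regression-toward-the-mean effect the abstract describes: a measurement that happens to fall below its treatment mean is, by construction, exceeded by the model's prediction of that mean, and vice versa, so the apparent bias appears even though the model is unbiased with respect to treatments.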