Publication #339288

Research Project: Leveraging Remote Sensing, Land Surface Modeling and Ground-based Observations ... Variables within Heterogeneous Agricultural Landscapes

Location: Hydrology and Remote Sensing Laboratory

Title: Improving spatial-temporal data fusion by choosing optimal input image pairs

Authors
item Xie, Donghui - Beijing Normal University
item Gao, Feng
item Sun, L. - US Department Of Agriculture (USDA)
item Anderson, Martha

Submitted to: Remote Sensing
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 7/18/2018
Publication Date: 7/19/2018
Citation: Xie, D., Gao, F.N., Sun, L., Anderson, M.C. 2018. Improving spatial-temporal data fusion by choosing optimal input image pairs. Remote Sensing. 10:1142. https://doi.org/10.3390/rs10071142.
DOI: https://doi.org/10.3390/rs10071142

Interpretive Summary: Remote sensing data have been widely used to monitor vegetation and crop conditions. Mapping crop progress and condition at field scales requires remote sensing data with both high spatial and high temporal resolution; however, no single sensor currently satisfies this requirement. Data fusion approaches have been developed to combine remote sensing imagery from the Landsat and MODIS instruments. This paper assesses different data sources used in data fusion. The errors and uncertainties of data fusion were analyzed over an agricultural area and a naturally vegetated area in the central U.S. Recommendations are provided for further improving data fusion by choosing optimal sensor combinations. Data fusion provides an effective way to generate high spatial and temporal resolution remote sensing data for monitoring crop growth and water use, which the National Agricultural Statistics Service and the Foreign Agricultural Service require for crop yield estimation.

Technical Abstract: Spatial and temporal data fusion approaches have been developed to fuse imagery from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS), which have complementary spatial and temporal sampling characteristics. These approaches rely on Landsat and MODIS image pairs acquired on the same day to estimate Landsat-scale reflectances on other MODIS dates. Previous studies have shown that the accuracy of data fusion results depends in part on the input image pair used, but the selection of the optimal image pair has not been fully evaluated. This paper assesses the impacts of Landsat-MODIS image pair selection on data fusion using the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) over different landscapes. MODIS images from the Aqua and Terra platforms were paired with images from the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI) for the evaluation. Results show that the MODIS pair image with the smaller view zenith angle produced better predictions. As expected, the image pair closest to the prediction date produced the best results. For prediction dates distant from the pair date, predictability depends on the temporal and spatial variability of land cover type and phenology. Prediction accuracy was higher for forests than for crops in our study areas. The Normalized Difference Vegetation Index (NDVI) for crops was overestimated during the non-growing season when using an input image pair from the growing season, and slightly underestimated during the growing season when using an image pair from the non-growing season. Two automatic pair selection strategies were evaluated. Results show that selecting the pair date whose MODIS image correlates most highly with the MODIS image on the prediction date produced more accurate predictions than the near-date strategy.
This study demonstrates that data fusion results can be improved if appropriate image pairs are used.
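The "highest correlation" pair selection strategy described in the abstract can be sketched in a few lines. The function and data names below are hypothetical illustrations, not the authors' implementation: the idea is simply to compare each candidate pair-date MODIS image against the MODIS image on the prediction date and keep the candidate with the highest correlation coefficient.

```python
import numpy as np

def select_optimal_pair(pair_images, target_image):
    """Pick the candidate pair date whose MODIS image correlates most
    highly with the MODIS image on the prediction date (the
    'highest correlation' strategy; names here are illustrative).

    pair_images: dict mapping pair date -> 2-D MODIS reflectance array
    target_image: 2-D MODIS reflectance array on the prediction date
    """
    best_date, best_r = None, -np.inf
    target = target_image.ravel()
    for date, img in pair_images.items():
        # Pearson correlation between the candidate and target scenes
        r = np.corrcoef(img.ravel(), target)[0, 1]
        if r > best_r:
            best_date, best_r = date, r
    return best_date, best_r

# Toy example with synthetic reflectance grids (not real MODIS data):
# the second candidate differs from the target only by small noise,
# so it should be selected over the noisier first candidate.
rng = np.random.default_rng(0)
target = rng.random((10, 10))
candidates = {
    "2015-05-01": target + rng.normal(0, 0.30, (10, 10)),  # poor match
    "2015-07-04": target + rng.normal(0, 0.05, (10, 10)),  # close match
}
date, r = select_optimal_pair(candidates, target)
```

In a real STARFM workflow the selected date's Landsat-MODIS pair would then be passed to the fusion model; the near-date strategy, by contrast, would simply pick the pair acquired closest in time to the prediction date.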