
Research Project: Improving Agroecosystem Services by Measuring, Modeling, and Assessing Conservation Practices

Location: Hydrology and Remote Sensing Laboratory

Title: A hybrid color mapping approach to fusing MODIS and Landsat images for forward prediction

Authors
item Kwan, C. - Collaborator
item Budavari, B. - Collaborator
item Gao, Feng

Submitted to: Remote Sensing
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: 3/24/2018
Publication Date: 3/26/2018
Citation: Kwan, C., Budavari, B., Gao, F.N. 2018. A hybrid color mapping approach to fusing MODIS and Landsat images for forward prediction. Remote Sensing. 10:520. https://doi.org/10.3390/rs10040520.
DOI: https://doi.org/10.3390/rs10040520

Interpretive Summary: Daily monitoring of crop conditions requires remote sensing imagery with high resolution in both time and space. This requirement cannot currently be satisfied by any single Earth-observing sensor. Data fusion approaches have been developed to combine remote sensing images from multiple sensors. This paper presents a hybrid color mapping method to fuse Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The new data fusion model requires only one pair of Landsat and MODIS images. Compared with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Flexible Spatiotemporal DAta Fusion (FSDAF) approaches, this approach achieves comparable or better fusion results with higher computing efficiency. The new data fusion approach provides an alternative solution for integrating remote sensing data from different satellite sources to monitor crop conditions at the field scale, as required by the National Agricultural Statistics Service and the Foreign Agricultural Service for more accurate yield assessments and predictions.

Technical Abstract: We present a new, simple, and efficient approach to fusing MODIS and Landsat images. It is well known that MODIS images have high temporal resolution and low spatial resolution, whereas Landsat images are just the opposite. Similar to earlier approaches, our goal is to fuse MODIS and Landsat images to yield images with both high spatial and high temporal resolution. Our approach consists of two steps. First, a mapping is established between two MODIS images, one acquired at an earlier time, t1, and the other at the time of prediction, tp. Second, this mapping is applied to a known Landsat image at t1 to generate a predicted Landsat image at tp. As with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Flexible Spatiotemporal DAta Fusion (FSDAF) approaches, only one pair of MODIS and Landsat images is needed for prediction. Experiments using actual Landsat and MODIS images demonstrated that, across six performance metrics, the new approach achieves fusion performance comparable to or better than that of STARFM and FSDAF.
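The two-step procedure in the abstract can be illustrated with a minimal sketch. This is not the paper's actual hybrid color mapping implementation (which works on local patches and combines mappings); it assumes, for illustration only, a single global affine band-to-band mapping learned by least squares from the MODIS pair at t1 and tp, which is then applied pixelwise to the Landsat image at t1. All function names here are hypothetical.

```python
import numpy as np

def learn_color_mapping(modis_t1, modis_tp):
    """Step 1: fit an affine mapping T such that [bands, 1] @ T
    approximates the MODIS bands at tp from those at t1.
    Inputs are (H, W, B) reflectance arrays; simplified global
    version of the paper's local hybrid color mapping."""
    n_bands = modis_t1.shape[-1]
    X = modis_t1.reshape(-1, n_bands)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # affine offset column
    Y = modis_tp.reshape(-1, n_bands)
    T, *_ = np.linalg.lstsq(X, Y, rcond=None)     # (B+1, B) mapping
    return T

def predict_landsat(landsat_t1, T):
    """Step 2: apply the temporal mapping learned from MODIS to the
    known Landsat image at t1 to predict the Landsat image at tp."""
    h, w, n_bands = landsat_t1.shape
    X = landsat_t1.reshape(-1, n_bands)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ T).reshape(h, w, n_bands)
```

Because only the MODIS pair at t1/tp and the Landsat image at t1 appear, the sketch mirrors the single image-pair requirement shared with STARFM and FSDAF.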