
Wetland Change Detection Using Cross-Fused-Based and Normalized Difference Index Analysis on Multitemporal Landsat 8 OLI.

1. Introduction

Wetlands are a unique ecosystem formed by the interaction between water and land, and they cover 6% of the Earth's surface [1]. Due to seasonal changes, the characteristics of wetlands vary among water, soil, and vegetation. This makes the wetland landscape more complex, and it becomes more difficult to extract information about changes in these regions. In addition, the combination of reflectance spectra of the underlying soil, the hydrologic regime, and atmospheric vapor makes optical classification more difficult, and these factors could introduce a reduction in spectral reflectance. Therefore, it is often difficult to achieve the expected results using a single method to extract information about wetland change [2].

Postclassification comparison (PCC), in which two multitemporal images are independently classified and then compared [3], is one of the methods used for wetland change detection. Its main advantage is that it yields the change trajectories of the corresponding wetland cover types. Many classification methods can be used within this framework, such as the regression tree algorithm or maximum likelihood classification [4, 5]. However, these methods require high-accuracy classification results and ground truth information [3-6].

The image-to-image (or direct) comparison method, another approach to wetland change detection, computes a difference image of spectral changes from the spectral characteristics of the multitemporal images and then generates a binary image that separates changed from unchanged areas [7]. Its advantages are that it compares images quickly and demands no ground truth information; however, it cannot reveal the change trajectories of wetland cover types [8]. Change detection methods such as change vector analysis (CVA) [9], principal component analysis (PCA) [10], Erreur Relative Globale Adimensionnelle de Synthese (ERGAS) [11], and multivariate alteration detection (MAD) [12] operate directly on the multitemporal images.

However, the results of change detection methods based on the difference image depend largely on the spectral characteristics and may include false positives. For this reason, a change detection method based on cross-fusion and spectral distortion was proposed to improve the accuracy of change detection in flood zones [13]. Successful change detection results have also been achieved in coal-mining subsidence areas; nevertheless, some false detections remain in wetland areas [14].

To mitigate these false positives, we employ an image-to-image method that extracts the NDVI and NDWI from a cross-fused image to detect wetland change information. In wetlands, vegetation, soil, and water coexist, and the cross-fusion method is beneficial for improving spatial resolution and enhancing wetland change information. Based on the cross-fused image, the NDVI and NDWI are extracted to enhance the vegetation and water information. We then derive the change information using a modified version of the IR-MAD algorithm, a well-established change detection method for multitemporal multispectral images [12]. Finally, the changed wetland area is obtained using an automated thresholding method.

2. Study Area and Dataset

In this study, we collected three Landsat 8 OLI multitemporal images covering the Shengjin Lake Nature Reserve area. These images represent the seasonal (25 July 2016 and 16 December 2016) and interannual (6 November 2013 and 16 December 2016) changes in land cover types, as shown in Figures 1(a)-1(c). In the preprocessing, the relevant bands were selected, including the 30 m resolution multispectral (MS) bands 2-7 and the 15 m resolution panchromatic (PAN) band 8. Then, the Shengjin Lake protected area vector data were used to clip the images, with the specific parameters shown in Table 1.

The Shengjin Lake National Nature Reserve (30°15′-30°30′N, 116°55′-117°15′E) is located in Chizhou City, Anhui Province. The protected region, with a total area of 333.40 km2, consists of Shengjin Lake, cultivated areas, urban areas, forest, and bare land. The Shengjin Lake wetland ecological environment is well preserved, with rich natural and cultural landscapes, and it is one of the most intact inland freshwater lake wetland ecosystems in the lower reaches of the Yangtze River. The lake connects with the Yangtze River, and its water level is regulated by the Huangpen sluice. The location of the study area is shown in Figure 1(d). Because of the sluice, the water level of Shengjin Lake varies between 3.4 and 7.4 m; these water level changes make the lake area largest in the summer wet season and smaller in the winter dry season. During the dry season, the two largest Carex meadows ("upper lake meadow" and "lower lake meadow") provide suitable living environments and food sources for Greater White-fronted Geese and Bean Geese, and the reserve has become a critical winter habitat for rare birds [15, 16].

3. Methodology

In this section, we detail the process of extracting wetland change information from bitemporal images using a modified IR-MAD. We consider two datasets, {PAN^H_1, MS^L_1} and {PAN^H_2, MS^L_2}, each consisting of a high-resolution (H) PAN image and a low-resolution (L) MS image acquired over the same geographical area at times t_1 and t_2, respectively. The process flow is shown in Figure 2, and the details of each step are described below.

3.1. Cross-Fused Image Generation. Generally, image fusion combines a high-resolution PAN image and a low-resolution MS image into a high-resolution MS image. In this paper, the PAN and MS images of each date are first fused using the Gram-Schmidt adaptive (GSA) algorithm to produce the high-resolution multispectral images F^H_1 and F^H_2. GSA is a representative component substitution- (CS-) based fusion algorithm, and it places no limit on the number of bands of the fused image [17]. The major drawback of CS-based fusion is spectral distortion, also called color (or radiometric) distortion, which is caused by the mismatch between the spectral responses of the MS and PAN bands owing to their different bandwidths [18]. In this study, this spectral distortion is regarded as a candidate detection feature for wetland cover change [17]. To this end, we use the high-resolution NIR band instead of the high-resolution PAN band. The NIR band has a narrower bandwidth, so the mismatch of spectral responses outside the NIR spectral range increases and the spectral distortion becomes more pronounced [13]. At the same time, the NIR band is a very useful information source for detecting water and vegetation areas: water appears dark because of its strong absorption, whereas vegetation has the opposite characteristic and appears light. Thus, the NIR band is useful for extracting change information. The cross-fused image CF^H_1 is then generated by fusing the MS^L_1 image with the high-resolution NIR band NIR^H_2 of the other date using the GSA fusion algorithm; CF^H_2 is obtained in the same way, as follows:

CF^H_1 = GSA(MS^L_1, NIR^H_2), (1)

CF^H_2 = GSA(MS^L_2, NIR^H_1). (2)
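GSA is not exposed by common open-source libraries, so the following is a minimal, hypothetical NumPy sketch of GSA-style fusion: the adaptive weights are obtained by regressing the high-resolution band onto the MS bands, and the resulting spatial detail is injected with Gram-Schmidt gains. The function name `gsa_fuse` and the exact gain computation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gsa_fuse(ms_low, pan_high):
    """Minimal Gram-Schmidt adaptive (GSA) pansharpening sketch.

    ms_low: (bands, H, W) multispectral image resampled to the high-res grid.
    pan_high: (H, W) high-resolution band (here, the other date's NIR).
    """
    bands, h, w = ms_low.shape
    # Regress the high-res band onto the MS bands to get the adaptive
    # weights of the synthetic intensity -- the "adaptive" part of GSA.
    X = ms_low.reshape(bands, -1).T                # (H*W, bands)
    X1 = np.column_stack([X, np.ones(h * w)])      # add an intercept term
    coef, *_ = np.linalg.lstsq(X1, pan_high.ravel(), rcond=None)
    intensity = (X1 @ coef).reshape(h, w)          # synthetic intensity
    # Inject the spatial detail of the high-res band into each MS band,
    # scaled by the band's covariance with the intensity (GS gain).
    detail = pan_high - intensity
    var_i = intensity.var()
    fused = np.empty_like(ms_low)
    for k in range(bands):
        gain = np.cov(ms_low[k].ravel(), intensity.ravel())[0, 1] / var_i
        fused[k] = ms_low[k] + gain * detail
    return fused

# Cross-fusion per (1)-(2): fuse each date's MS with the *other* date's NIR:
# cf1 = gsa_fuse(ms_t1, nir_t2);  cf2 = gsa_fuse(ms_t2, nir_t1)
```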

3.2. NDVI and NDWI Extraction. The NDVI, which is a classic index used to monitor vegetation changes, is calculated from a normalized transform of the NIR and red reflectance ratio. Application of the NDVI strives to minimize the solar irradiance and soil background effects and improve the vegetation signal. Similarly, the NDWI is a commonly used remote sensing water monitoring index [19, 20].

The cross-fused image has high spatial and temporal resolution, and the vegetation and water details of the bitemporal images are preserved. By using the NDVI and the NDWI, the change information from water and vegetation is further optimized, and the distinction among water bodies, wet soil, and vegetation is improved. The spectral difference is enhanced twofold, and the detection sensitivity of the vegetation and water increases. The equation is as follows:

NDVI_F = (NIR_F - R_F) / (NIR_F + R_F), (3)

NDWI_F = (Green_F - NIR_F) / (Green_F + NIR_F), (4)

where NIR_F, R_F, and Green_F are the NIR, red, and green bands of the cross-fused image, respectively. According to (3) and (4), the normalized difference index images ND^H_1 and ND^H_2, each consisting of an NDVI band and an NDWI band, are generated from the cross-fused images CF^H_1 and CF^H_2.
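Equations (3) and (4) are straightforward to compute with NumPy. The sketch below uses toy reflectance values (not data from the paper) and a small epsilon to guard against division by zero:

```python
import numpy as np

def normalized_difference(a, b, eps=1e-12):
    """Generic normalized difference (a - b) / (a + b), guarded against /0."""
    return (a - b) / (a + b + eps)

# Toy reflectance values for the cross-fused image's NIR, red, green bands.
nir = np.array([[0.40, 0.05], [0.35, 0.30]])
red = np.array([[0.10, 0.04], [0.08, 0.25]])
green = np.array([[0.12, 0.10], [0.11, 0.28]])

ndvi = normalized_difference(nir, red)     # per (3): high over vegetation
ndwi = normalized_difference(green, nir)   # per (4): high over open water
```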

3.3. Wetland Change Area Extraction. The IR-MAD algorithm, which is based on canonical correlation analysis, considers two K-band multispectral images R and T of the same area acquired at two different times, and it is widely applied to multitemporal multispectral image change detection. The random variables U and V are generated by linear combinations of the spectral band intensities using coefficient vectors a and b, as follows [21]:

U = a^T R,

V = b^T T, (5)

where the superscript T denotes the transpose. The task is to find suitable vectors a and b by maximizing the variance of U - V, which leads to solving two generalized eigenvalue problems for a and b via canonical correlation analysis.

The MAD variate M_k, generated by taking the paired difference between U and V, represents the change information [22], as shown in (6):

M_k = U_k - V_k = a_k^T R - b_k^T T, k = 1, ..., K. (6)
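A compact NumPy sketch of (5) and (6): the coefficient vectors a and b are estimated from the band covariance matrices via the generalized eigenproblem of canonical correlation analysis, and the MAD variates are the differences of the canonical variates. This is a bare-bones illustration, without the iterative reweighting or regularization of the full IR-MAD algorithm:

```python
import numpy as np

def mad_variates(R, T):
    """MAD variates per (5)-(6): canonical variates U = a^T R, V = b^T T,
    and M_k = U_k - V_k.  R, T are (K, N) band-by-pixel matrices."""
    N = R.shape[1]
    Rc = R - R.mean(axis=1, keepdims=True)
    Tc = T - T.mean(axis=1, keepdims=True)
    S_rr = Rc @ Rc.T / N          # auto-covariance of image R
    S_tt = Tc @ Tc.T / N          # auto-covariance of image T
    S_rt = Rc @ Tc.T / N          # cross-covariance
    # Eigenproblem for a:  S_rr^-1 S_rt S_tt^-1 S_tr a = rho^2 a
    A = np.linalg.solve(S_rr, S_rt) @ np.linalg.solve(S_tt, S_rt.T)
    rho2, a = np.linalg.eig(A)
    order = np.argsort(np.real(rho2))[::-1]        # by canonical correlation
    a = np.real(a[:, order])
    b = np.linalg.solve(S_tt, S_rt.T) @ a          # matching b vectors
    # Scale so that U and V have unit variance.
    a = a / np.sqrt(np.sum(a * (S_rr @ a), axis=0))
    b = b / np.sqrt(np.sum(b * (S_tt @ b), axis=0))
    U = a.T @ Rc
    V = b.T @ Tc
    return U - V                                    # (K, N) MAD variates
```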

In this study, corresponding to the two multitemporal normalized difference index images (ND^H_1 and ND^H_2) and the two general fused images (F^H_1 and F^H_2), the MAD variates M_p are generated using the optimal coefficients a and b through (7) and (8):

M_1 = a_1^T F^H_1 - b_1^T F^H_2, (7)

M_2 = a_2^T ND^H_1 - b_2^T ND^H_2, (8)

where M_1 is the MAD variate of the pair of general fused images from the two dates, and M_2 is the MAD variate of the two normalized difference index images (ND^H_1 and ND^H_2) extracted from the cross-fused images.

The probability of change for pixel j, calculated as the sum of squares of the standardized MAD variates, is defined in (9):

Z_j = Σ_{k=1}^{K} (M_kj)^2 / σ^2_{M_k}, (9)

where Z_j represents a weight for the probability of change in each pixel (a greater chi-square value indicates a higher probability of change), M_kj is the MAD variate of the kth band for pixel j, and σ^2_{M_k} is the variance of the no-change distribution. These values can be regarded as the weights of the observations. The iteration process continues either for a fixed number of iterations or until there is no significant change in the canonical correlations; the latter criterion is used in this study [22]. The optimal vectors a and b are then recalculated using the weight factor.
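Because Z_j is chi-square distributed with K degrees of freedom under no change, the reweighting step assigns each pixel the no-change probability P(chi2_K > Z_j). The sketch below stays dependency-free by using the closed-form chi-square survival function, which holds for an even number of bands (e.g., the two-band NDVI/NDWI pair); this form is an assumption of the sketch, not a detail given in the paper:

```python
import math
import numpy as np

def no_change_prob(M, sigma2, k):
    """IR-MAD weights per (9): Z_j = sum_k M_kj^2 / sigma_k^2 is
    chi-square(k) under no change; weight = P(chi2_k > Z_j).

    M: (k, N) MAD variates; sigma2: (k,) no-change variances.
    Uses the closed-form survival function, valid for even k:
    exp(-z/2) * sum_{i<k/2} (z/2)^i / i!.
    """
    assert k % 2 == 0, "closed form used here assumes an even band count"
    Z = np.sum(M**2 / sigma2[:, None], axis=0)   # chi-square statistic
    half = Z / 2.0
    s = sum(half**i / math.factorial(i) for i in range(k // 2))
    return np.exp(-half) * s

# Pixels with small Z_j (likely unchanged) get weight near 1 and dominate
# the covariance estimates in the next IR-MAD iteration.
```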

In this study, based on the combination of [M.sub.1] and [M.sub.2], the final change detection index [Z'.sub.j] is calculated.

Z'_j = Σ_{p=1}^{2} Σ_{k=1}^{K} (M_(p)kj)^2 / σ^2_{(p)k}, (10)

where M_(p)kj is the kth band MAD variate of the pth pair of fused images for pixel j, and σ^2_{(p)k} is the corresponding no-change variance. This method effectively reduces falsely detected changes because a pixel must exhibit change in both MAD variate pairs in (10) to obtain a large Z'_j.

This modified IR-MAD algorithm not only alleviates the spectral distortion that causes massive false change alarms when the bitemporal images are used to generate the cross-fused images but also reduces the interaction between the bands of the multispectral images [14, 21]. Therefore, it yields better change detection results for multitemporal images. Finally, the Otsu thresholding algorithm, a histogram-based image segmentation method that is effective and easy to apply [23], was applied to the modified IR-MAD image to obtain a binary map of the changed and unchanged areas.
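Otsu's threshold [23] can be implemented in a few lines of NumPy by maximizing the between-class variance over the histogram of the change index; this is a generic sketch, not tied to the authors' specific implementation:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the threshold that maximizes the between-class
    variance of the histogram of the change index."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()            # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                              # class-0 probability
    w1 = 1.0 - w0                                  # class-1 probability
    mu = np.cumsum(p * centers)                    # cumulative mean
    mu_t = mu[-1]                                  # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0           # empty-class bins
    return centers[np.argmax(between)]

# Binary change map from the combined change index Z'_j:
# change_mask = z_prime > otsu_threshold(z_prime.ravel())
```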

4. Experimental Result and Discussion

To evaluate the effectiveness of this method, we analyze and discuss the seasonal and interannual variations in the study area, comparing our result with the cross-fused and PCC change detection methods. In the cross-fused change detection method, the original IR-MAD is applied between the CF^H_1 and CF^H_2 images to extract the changed area. In the PCC process, the three general fused images are classified into seven classes (water, bare land, meadow, cultivated area, city, forest, and mudflats) through maximum likelihood classification.

To quantitatively compare the performance of these methods, ground truths for the seasonal and interannual variation images were generated from the GSA-fused images by manually digitizing the changed areas of the Shengjin Lake Nature Reserve, as shown in red in Figures 3(a) and 3(e); these are overlain on the multispectral images of 6 November 2013 and 25 July 2016, respectively. In the quantitative analysis, the confusion matrix method was applied to evaluate the statistical accuracy of the tested methodologies, and indices such as overall accuracy (OA), kappa coefficient (KC), commission error (CE), omission error (OE), and false alarm rate (FAR) were calculated [24]. The detailed quantitative accuracy assessment results for each method are shown in Figure 3 and Table 2. The red color indicates the change pixels extracted by the different methods, and the results are overlain on the multispectral image of 16 December 2016.
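The accuracy indices in Table 2 follow directly from the binary confusion matrix. The sketch below assumes the usual definitions of CE, OE, and FAR, since the paper does not spell out its formulas:

```python
import numpy as np

def change_metrics(pred, truth):
    """Confusion-matrix accuracy indices for binary change maps:
    OA, kappa coefficient, commission error, omission error, FAR."""
    pred = pred.astype(bool).ravel()
    truth = truth.astype(bool).ravel()
    tp = np.sum(pred & truth)      # correctly detected change
    fp = np.sum(pred & ~truth)     # false alarm
    fn = np.sum(~pred & truth)     # missed change
    tn = np.sum(~pred & ~truth)    # correctly detected no-change
    n = tp + fp + fn + tn
    oa = (tp + tn) / n
    # Chance agreement for the kappa coefficient.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1.0 - pe)
    ce = fp / (tp + fp)            # commission error
    oe = fn / (tp + fn)            # omission error
    far = fp / (fp + tn)           # false alarm rate
    return oa, kappa, ce, oe, far
```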

The results in Figure 3 show that, in areas where different ground types coexist (water, bare land, meadow, cultivated area, city, forest, and mudflats), the proposed method detects the wetland change information more accurately than PCC based on the general fused image and the cross-fused method, and it effectively reduces the change detection errors for interannual and seasonal wetland cover change. In the results of the PCC and cross-fused methods, some parts of the unchanged area are labeled as changed; as shown in Table 2, the CE of PCC reaches 70%. In addition, the results of our method are more accurate than those of PCC: the OA exceeds 90%, and the FAR is as low as 0.02.

Figure 3 includes the whole study area, allowing for an initial visual assessment of the results of wetland change extent extraction. Figures 4 and 5 show the subimages extracted from "upper lake meadow" and "lower lake meadow" regions of Figure 3.

Figure 4 mainly covers a complex area composed of water, meadow, cultivated area, city, and mudflats, whereas the area shown in Figure 5 mainly consists of water, meadow, forest, and mudflats. The blue and yellow colors indicate the commission and omission errors, respectively, and the results are overlain on the multispectral image of 16 December 2016.

As shown in Figures 4 and 5, the PCC and cross-fused methods are not sensitive to artificially cultivated areas and forest areas affected by seasonal changes, and they produce many false positives caused by similar spectral characteristics.

The proposed method efficiently detects the changed areas in complex regions with similar spectral characteristics, improves the accuracy of wetland change detection, and minimizes the impacts of seasonality and human activity. It also performs well in meadow areas. However, it produces errors in some mudflat edge regions, such as the yellow areas in Figures 4(b) and 5(f). On the one hand, spatial inconsistency arises from the different look angles of the bitemporal imagery. On the other hand, the water component of the proposed method is based only on the NDWI, which is sensitive to water; as a result, omission errors can occur where the water level falls and the mudflats are exposed, because the mudflats still contain a certain amount of water.

5. Conclusions

In this paper, we proposed an image-to-image change detection method using multitemporal images to quantify wetland cover changes; the method is based on a combination of a cross-fusion image and a normalized difference index image. For multitemporal Landsat 8 OLI images, the GSA fusion method is used to generate cross-fusion images, from which the NDVI and NDWI are extracted. The optimal change information is then calculated through the modified IR-MAD, which uses pairs of normalized difference index images and general fused images. The experimental results showed that the proposed method increases the accuracy of change detection and minimizes false detections in complex areas with different ground types. Especially in cultivated areas affected by human activity, change information can be identified more accurately, and a lower FAR can be achieved. These results can help wetland managers implement effective management plans, and our method provides guidance for monitoring wetland health and for wetland conservation.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments


This research was supported in part by the Natural Science Foundation of Anhui Province under Grant no. 1608085MD83, the Science and Technology project of the Department of Land and Resources of Anhui Province: Study on the Data Integration Technology for the Real Estate Registration (no. 2016-K-12), the Educational Commission of Anhui Province of China (no. KJ2018A0007), and the Department of Human Resources and Social Security of Anhui: Innovation Project Foundation for Selected Overseas Chinese Scholar.

References


[1] G. Liu, L. Zhang, Q. Zhang, Z. Musyimi, and Q. Jiang, "Spatiotemporal dynamics of wetland landscape patterns based on remote sensing in Yellow River Delta, China," Wetlands, vol. 34, no. 4, pp. 787-801, 2014.

[2] E. Adam, O. Mutanga, and D. Rugege, "Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: a review," Wetlands Ecology and Management, vol. 18, no. 3, pp. 281-296, 2010.

[3] A. Singh, "Review article digital change detection techniques using remotely-sensed data," International Journal of Remote Sensing, vol. 10, no. 6, pp. 989-1003, 1989.

[4] L. Yang, C. Homer, J. Brock, and J. Fry, "An efficient method for change detection of soil, vegetation and water in the Northern Gulf of Mexico wetland ecosystem," International Journal of Remote Sensing, vol. 34, no. 18, pp. 6321-6336, 2013.

[5] C. Munyati, "Wetland change detection on the Kafue Flats, Zambia, by classification of a multitemporal remote sensing image dataset," International Journal of Remote Sensing, vol. 21, no. 9, pp. 1787-1806, 2000.

[6] F. Bovolo, S. Marchesi, and L. Bruzzone, "A framework for automatic and unsupervised detection of multiple changes in multitemporal images," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 6, pp. 2196-2212, 2012.

[7] D. Lu, P. Mausel, E. Brondizio, and E. Moran, "Change detection techniques," International Journal of Remote Sensing, vol. 25, no. 12, pp. 2365-2401, 2004.

[8] D. Renza, E. Martinez, I. Molina, and D. M. Ballesteros L., "Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper," Advances in Space Research, vol. 59, no. 8, pp. 2019-2031, 2017.

[9] J. Chen, X. Chen, X. Cui, and J. Chen, "Change vector analysis in posterior probability space: a new method for land cover change detection," IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 2, pp. 317-321, 2011.

[10] M. Hussain, D. Chen, A. Cheng, H. Wei, and D. Stanley, "Change detection from remotely sensed images: from pixel-based to object-based approaches," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 80, pp. 91-106, 2013.

[11] D. Renza, E. Martinez, and A. Arquero, "A new approach to change detection in multispectral images by means of ERGAS index," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 1, pp. 76-80, 2013.

[12] A. A. Nielsen, "The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data," IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 463-478, 2007.

[13] Y. Byun, Y. Han, and T. Chae, "Image fusion-based change detection for flood extent extraction using bi-temporal very high-resolution satellite images," Remote Sensing, vol. 7, no. 8, pp. 10347-10363, 2015.

[14] B. Wang, J. Choi, S. Choi, S. Lee, P. Wu, and Y. Gao, "Image fusion-based land cover change detection using multitemporal high-resolution satellite images," Remote Sensing, vol. 9, no. 8, p. 804, 2017.

[15] C. Li, G. Beauchamp, Z. Wang, and P. Cui, "Collective vigilance in the wintering hooded crane: the role of flock size and anthropogenic disturbances in a human-dominated landscape," Ethology, vol. 122, no. 12, pp. 999-1008, 2016.

[16] M. Barter, L. Cao, L. Chen, and G. Lei, "Results of a survey for waterbirds in the lower Yangtze floodplain, China, in January-February 2004," Forktail, vol. 21, pp. 1-7, 2005.

[17] B. Aiazzi, S. Baronti, and M. Selva, "Improving component substitution pansharpening through multivariate regression of MS +Pan data," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3230-3239, 2007.

[18] C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, "Synthesis of multispectral images to high spatial resolution: a critical review of fusion methods based on remote sensing physics," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301-1312, 2008.

[19] R. B. Myneni, F. G. Hall, P. J. Sellers, and A. L. Marshak, "The interpretation of spectral vegetation indexes," IEEE Transactions on Geoscience and Remote Sensing, vol. 33, no. 2, pp. 481-486, 1995.

[20] A. K. Bhandari, A. Kumar, and G. K. Singh, "Improved feature extraction scheme for satellite images using NDVI and NDWI technique based on DWT and SVD," Arabian Journal of Geosciences, vol. 8, no. 9, pp. 6949-6966, 2015.

[21] B. Wang, S.-K. Choi, Y.-K. Han, S.-K. Lee, and J.-W. Choi, "Application of IR-MAD using synthetically fused images for change detection in hyperspectral data," Remote Sensing Letters, vol. 6, no. 8, pp. 578-586, 2015.

[22] P. R. Marpu, P. Gamba, and M. J. Canty, "Improving change detection results of IR-MAD by eliminating strong changes," IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 4, pp. 799-803, 2011.

[23] A. S. Abutaleb, "Automatic thresholding of gray-level pictures using two-dimensional entropy," Computer Vision Graphics and Image Processing, vol. 47, no. 1, pp. 22-32, 1989.

[24] R. G. Congalton, "A review of assessing the accuracy of classification of remotely sensed data," Remote Sensing of Environment, vol. 37, no. 1, pp. 35-46, 1991.

Yan Gao, Zeyu Liang, Biao Wang, Yanlan Wu, and Penghai Wu

School of Resources and Environmental Engineering, Anhui University, Hefei, Anhui 230601, China

Correspondence should be addressed to Biao Wang;

Received 30 November 2017; Revised 8 April 2018; Accepted 30 April 2018; Published 21 May 2018

Academic Editor: Biswajeet Pradhan

Caption: Figure 1: Multitemporal images of Shengjin Lake used in the experiment.

Caption: Figure 2: Workflow of the proposed methodology for wetland change extraction using multitemporal satellite images. The superscripts H and L represent high resolution and low resolution, respectively.

Caption: Figure 3: Results of wetland change area by using the tested methods: (a)-(d) interannual change detection result; (e)-(h) seasonal change detection result.

Caption: Figure 4: Detailed images from upper lake meadow regions of Figure 3: (a)-(d) interannual change detection result; (e)-(h) seasonal change detection result.

Caption: Figure 5: Detailed images from lower lake meadow regions of Figure 3: (a)-(d) interannual change detection result; (e)-(h) seasonal change detection result.
Table 1: Data specifications.

Dates                 06/11/2013, 25/07/2016, 16/12/2016
Spatial resolution    MS: 30 m; PAN: 15 m
Image size (pixels)   MS: 3650 x 3570; PAN: 1825 x 1785

Wavelength (μm)
  Band 2 blue:    0.450-0.515
  Band 3 green:   0.525-0.600
  Band 4 red:     0.630-0.680
  Band 5 NIR:     0.845-0.885
  Band 6 SWIR 1:  1.560-1.660
  Band 7 SWIR 2:  2.100-2.300
  Band 8 pan:     0.500-0.680

Table 2: Change detection accuracy results: overall accuracy (OA),
kappa coefficient (KC), commission error (CE), omission error (OE),
and false alarm rate (FAR).

                                   OA (%)    KC     CE (%)   OE (%)   FAR

Interannual    Proposed method     96.67    0.72    27.60    25.02    0.02
              Cross-fused method   92.07    0.52    58.97    15.02    0.07
                     PCC           86.26    0.39    70.22    11.13    0.14

Seasonal       Proposed method     93.06    0.76    14.88    24.00    0.03
              Cross-fused method   85.28    0.59    43.06    13.93    0.15
                     PCC           84.45    0.57    44.56    16.36    0.15
Publication: Journal of Sensors, Hindawi, 2018