
An Improved Urban Mapping Strategy Based on Collaborative Processing of Optical and SAR Remotely Sensed Data.

1. Introduction

With the development of aerospace and remote sensing technology, more and more Earth observation data archives are becoming available, which increases the possibility of jointly using optical and SAR data, as well as data from different sensors, for urban monitoring at regional or global scales. In fact, a number of different approaches to urban mapping have been proposed in the last decade [1-3].

Some of this literature has focused on urban mapping using only optical remotely sensed data in several parts of the world [4-7], because optical data are rich in spectral information and represent the surface reflective and emissive spectrum. However, optical data suffer from the problems of the same object showing different spectra and different objects showing the same spectrum, whereas SAR data have the advantage of penetration, are suitable for analyzing surface roughness, structure, and dielectric constant, and can distinguish land cover types by their structure and shape. Hence, a growing body of work has developed algorithms for urban mapping over complicated surfaces with SAR remotely sensed data [8-11]. One of the most important tasks for urban practitioners is to find simple yet effective approaches to urban extent extraction from optical and SAR data; Gamba et al. cited and discussed the main studies exploiting the potential of different optical and SAR sensor data [12]. The integration of optical and SAR remotely sensed data for feature extraction and classification has also attracted increasing attention recently [13]. In this paper, the most significant urban cover types for rapid urban mapping are extracted by combining optical and radar data. The purpose of this study is to explore a convenient way to obtain urban cover types by combining results extracted from active and passive remotely sensed data.

The paper is organized as follows: the overall strategy and methodology are presented in Section 2. The proposed methodology is applied to two test areas with different representative surroundings, and the rapid urban mapping results are shown in Section 3. The analysis is presented in Section 4, and conclusions are given in Section 5.

2. Methodology

The overall procedure of this paper includes five parts: human settlement extraction, vegetation and water body extraction, rapid urban mapping, mature data fusion, and SVM and NN classification. This procedure is described graphically in Figure 1. The first three steps together constitute the rapid urban mapping stage and proceed as follows: first, human settlement is extracted from the active SAR data with an unsupervised method based on the Gray-Level Cooccurrence Matrix (GLCM) and Local Indicators of Spatial Association (LISA); then vegetation and water body are generated using the Normalized Difference Vegetation Index (NDVI) and the Modified Normalized Difference Water Index (MNDWI), respectively; finally, these primary results are merged with a decision fusion algorithm to handle omission/commission pixels and generate the final urban cover map. These three steps are described in more detail in Sections 2.1 and 2.2. A standard SVM classifier is used to obtain urban land use/cover maps from the original optical and SAR data and from their fusion results. The ground truth for the two test areas was produced manually by different remote sensing specialists, using the same color legend as in Figures 4(b) and 5(b). One important comment should be made here: we use the term "ground truth" instead of "reference data" because reference data were lacking, and we do not expect 100% accuracy of the ground truth; nevertheless, the ground truth we obtained is adequate for the validation procedure. The accuracy levels reported below should therefore be interpreted in light of the uncertainty in the validation sets.

2.1. Information Extraction. In remotely sensed data analysis, the spatial neighborhood of a pixel may contain more information than the pixel itself, and this is especially true for SAR data because speckle noise makes single pixel values unreliable. Textures extracted from the GLCM gauge the statistical properties of a pixel's neighborhood [14]. A large amount of texture information can be derived from the GLCM; here, variance and correlation are selected for extracting human settlement. Because only dual-polarized data are available for SAR processing in this step, algorithms based on GLCM features and the LISA detailed in [15] are selected. The selected LISA indexes are the local Moran's I_i, Geary's C_i, and Getis-Ord G_i [16], described in (1); they are functions of the pixel values x_j in an n-neighborhood of the current pixel x_i, and the weights w_ij are the elements of a "weight matrix" W, which defines that neighborhood. Human settlement extraction proceeds in three stages based on the GLCM texture information, as shown in Figure 2, with the algorithms detailed in [14, 17, 18]. The processing window size is 3 × 3, the cooccurrence shift is 1 in both the x and y directions, and the gray-scale quantization level is 64.

I_i = ((x_i - x̄)/S^2) Σ_j w_ij (x_j - x̄),    C_i = Σ_j w_ij (x_i - x_j)^2,    G_i = Σ_j w_ij x_j / Σ_j x_j.    (1)
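For illustration, the following sketch computes the three LISA indexes of (1) per pixel with a 3 × 3 binary weight matrix, using NumPy and SciPy. The normalizations follow the standard local forms and may differ slightly from those used in [15, 16]; the function name and parameters are ours, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def lisa_indices(x, eps=1e-9):
    """Per-pixel LISA indexes over a 3 x 3 neighborhood (binary weights,
    centre pixel excluded), following the standard local forms in Eq. (1)."""
    x = x.astype(np.float64)
    w = np.ones((3, 3))
    w[1, 1] = 0.0                                 # weight matrix W
    z = x - x.mean()
    s2 = x.var() + eps                            # global variance S^2

    lag_z = convolve(z, w, mode="reflect")        # sum_j w_ij (x_j - mean)
    moran_i = z / s2 * lag_z                      # local Moran's I_i

    lag_x = convolve(x, w, mode="reflect")        # sum_j w_ij x_j
    lag_x2 = convolve(x ** 2, w, mode="reflect")  # sum_j w_ij x_j^2
    # sum_j w_ij (x_i - x_j)^2, expanded so it can be computed with convolutions
    geary_c = w.sum() * x ** 2 - 2.0 * x * lag_x + lag_x2

    getis_g = lag_x / (x.sum() + eps)             # local Getis-Ord G_i

    return moran_i, geary_c, getis_g
```

Texture images of this kind, together with the GLCM variance and correlation, feed the settlement extraction stages of Figure 2.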

In this study, water body is extracted using a threshold method based on the MNDWI proposed by Xu [19], and vegetation is obtained with a similar method based on the NDVI. The thresholds are chosen by a statistical iteration method. Two strategies are used to determine the initial threshold value: (1) the value obtained by automatic classification using the symbology of layer properties in ArcGIS and (2) numerical analysis of the NDVI gray image using MATLAB.

MNDWI = (Green - MIR)/(Green + MIR),    NDWI = (Green - NIR)/(Green + NIR).    (2)

The final threshold value is adjusted according to the situation on the ground. In (2), MIR, NIR, and Green represent the middle-infrared, near-infrared, and green bands, respectively. Because there is no MIR band in the ALOS AVNIR-2 data, the NDWI is used for the ALOS dataset, while the MNDWI is used for the Landsat dataset. Mathematically, the NDVI is calculated as

NDVI = (NIR - RED)/(NIR + RED),    (3)

where NIR is the near-infrared channel and RED is the red channel of the remotely sensed data; for the ALOS AVNIR-2 image they are channel 4 and channel 3, respectively.
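As an illustration of the index computations in (2)-(3) and of the iterative threshold selection described above, the sketch below uses a simple mean-splitting iteration; this is one common iterative scheme, not necessarily the exact statistic-and-iteration procedure used by the authors, and the band variable names are placeholders.

```python
import numpy as np

def normalized_diff(a, b, eps=1e-9):
    """Generic normalized difference, e.g. NDVI = (NIR - RED) / (NIR + RED)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return (a - b) / (a + b + eps)

def iterative_threshold(index, tol=1e-4, max_iter=100):
    """Mean-splitting iteration: start from the global mean and repeatedly
    average the means of the two groups it separates until convergence."""
    t = index.mean()
    for _ in range(max_iter):
        lo, hi = index[index <= t], index[index > t]
        if lo.size == 0 or hi.size == 0:
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# Hypothetical usage with ALOS AVNIR-2 bands (band 4 = NIR, band 3 = RED, band 2 = Green):
# ndvi = normalized_diff(nir, red)               # Eq. (3)
# ndwi = normalized_diff(green, nir)             # Eq. (2), NDWI variant (no MIR band)
# vegetation = ndvi > iterative_threshold(ndvi)
# water = ndwi > iterative_threshold(ndwi)
```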

2.2. Plurality Voting Fusion. Information integration is one of the most important steps of the proposed rapid urban mapping strategy. The plurality voting method, one of the most popular and promising integration strategies, is chosen for its simplicity, convenience, and efficiency [20, 21]. Voting schemes can be weighted or unweighted. In an unweighted scheme, all outcomes have the same "authority" in assigning a pixel p_i to a class c_k, whereas in a weighted scheme the contribution/accuracy of each outcome is considered when a decision is taken. Weighted schemes include the simple weighted vote, rescaled weighted vote, best-worst weighted vote, quadratic best-worst weighted vote, and weighted majority voting (WMV); more details on these mathematical models can be found in [22]. In this paper, the WMV scheme is selected and applied as follows: nonbuilding, nonwater, and nonvegetation covers are first discarded; then the classification accuracy of each class is assigned as its weight and the WMV scheme is applied to all of the previous binary results. The accuracy of the integration is maximized by assigning weights according to the WMV theorem, and the weight of each class is computed as

w_k = log(α_k / (1 - α_k)),    (4)

where α_k is the individual accuracy of each class. There are two ways to obtain α_k: (1) α_k is calculated after the classifier results have been compared with the ground truth, and (2) α_k is calculated from the classification result on a validation region of interest (ROI), separate from the training ROI and test ROI. Supervised classification relies on a priori knowledge to select ROIs; in our experiment, the specialists manually produced two ground truth maps that take the place of the validation and test ROI sets. In this test, the first method is used; in subsequent work, the second method will be tried with more test areas. Let ĉ_k(x_i) denote that pixel x_i is predicted to belong to class C_k by the kth binary result; the final voting prediction can then be described as

c(x_i) = arg max_j Σ_k w_k δ(ĉ_k(x_i), c_j),    (5)

where δ(ĉ_k(x_i), c_j) = 1 when ĉ_k(x_i) and c_j denote the same class, and δ = 0 otherwise.
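A minimal sketch of the WMV fusion step in (4)-(5): each binary layer votes for its own class with weight log(α_k/(1 - α_k)), and the label with the largest weighted vote wins. The function, label codes, and example accuracies are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wmv_fusion(masks, accuracies, labels, eps=1e-6):
    """Weighted majority voting over binary class layers (Eqs. (4)-(5)).

    masks      : list of H x W boolean arrays (settlement, water, vegetation)
    accuracies : individual accuracy alpha_k of each layer, in (0, 1)
    labels     : integer code written where layer k wins the vote
    Pixels receiving no positive vote stay 0 (unclassified).
    """
    votes = np.zeros((len(masks),) + masks[0].shape)
    for k, (mask, alpha) in enumerate(zip(masks, accuracies)):
        w_k = np.log((alpha + eps) / (1.0 - alpha + eps))  # Eq. (4)
        votes[k] = w_k * mask                              # layer votes only where it is positive
    winner = votes.argmax(axis=0)                          # arg max over classes, Eq. (5)
    out = np.zeros(masks[0].shape, dtype=np.int32)
    has_vote = votes.max(axis=0) > 0
    out[has_vote] = np.asarray(labels)[winner[has_vote]]
    return out

# Hypothetical call with pre-merge accuracies in the spirit of Table 2 (Xuzhou):
# cover = wmv_fusion([settlement, water, vegetation], [0.91, 0.90, 0.45], [1, 2, 3])
```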

3. Experimental Results

Two scenes of optical and SAR remotely sensed data (Table 1) were selected, within and around the city of Xuzhou, China, a developing-country case, and the city of Pavia, Italy, a developed-country case (Figure 2). The first dataset comprises ALOS AVNIR-2 and PALSAR images over Xuzhou city. Xuzhou is located in the northwest of Jiangsu province, covers 11257 square kilometers, and has a population of more than ten million. As a typical mining city, Xuzhou has been called the "Coal Sea of eastern China." The second dataset comprises a Landsat TM multispectral image and an ERS-2 SAR image collected by the European satellite. This test case covers an area around the town of Pavia, in northern Italy.

The optical and SAR data were geometrically coregistered using an image-to-image method based on ground control points (GCPs). More than 25 GCPs were selected, and the root mean square errors for all images were less than 0.5 pixels. The Pavia dataset had already been preprocessed when it was obtained. The Xuzhou dataset was preprocessed using an ENVI FLAASH atmospheric correction kit for ALOS AVNIR-2 developed by a third party. In this module, the sensor type is "UNKNOW-MSI," the satellite altitude is 691.65 km, and the pixel size is 10 m; in the multispectral settings, the avnir2.sli data was selected as the filter function file. These two sites were chosen because they represent different situations: the downtown center of Xuzhou is a high-density large settlement, while the city of Pavia is composed of well-organized sparse settlements surrounded by large vegetated areas. The land cover types considered are the three most representative and significant classes: water body, human settlement, and vegetation (mostly farmland and wooded areas); in addition, the unclassified areas that appear in the results of the proposed method and in the ground truth maps are marked in black.
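For readers who want to reproduce the coregistration check, the sketch below estimates a least-squares affine transform from GCP pairs and reports the registration RMSE that should stay below 0.5 pixels; the function and variable names are hypothetical, and in the actual work the registration was carried out with an image-to-image tool.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N x 2) GCPs onto dst (N x 2),
    plus the registration RMSE in pixels."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])             # [x, y, 1] design matrix
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3 x 2 affine coefficients
    residuals = A @ coeffs - dst
    rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return coeffs, rmse

# With 25+ control point pairs (SAR image coordinates vs. the optical master image):
# coeffs, rmse = fit_affine(sar_gcps, optical_gcps)
# assert rmse < 0.5, "refine or re-select GCPs until sub-half-pixel accuracy is reached"
```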

As described in the Methodology, texture information is first calculated to extract human settlement automatically from the SAR data; then vegetation and water body cover maps are extracted with the NDVI and MNDWI indexes. Finally, the final urban cover status map is obtained by a "soft" fusion of the previous results. The overall classification results are verified against the corresponding ground truth, shown in Figures 3(a) and 4(a) for the classification results and in Figures 3(b) and 4(b) for the ground truth.

To make a further evaluation, an analysis based on supervised classification is performed. In this experiment, the SVM classifier is used to divide the research areas into the same land cover types. For the SVM classifier, the kernel type is the radial basis function, the gamma in the kernel function is 0.333, the penalty parameter is 100, the pyramid level is 0, and the classification probability threshold is 0.00. For the NN classifier, two hidden layers are used, the number of training iterations is 1000, the training threshold contribution is 0.90, the training rate is 0.20, the training momentum is 0.9, and the training RMS exit criterion is 0.1. The results are shown in Figures 5 and 6.
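As a rough equivalent of the reported SVM configuration, the sketch below uses scikit-learn's SVC (which wraps the LIBSVM library cited in [27]) with an RBF kernel, gamma = 0.333, and penalty parameter C = 100. The pyramid level and probability threshold are ENVI-specific options with no direct counterpart here, and the training arrays are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def classify_image(features, X_train, y_train):
    """Pixel-wise SVM classification with the parameters reported in the text.

    features : H x W x B stacked band image (optical, SAR, or merged)
    X_train  : N x B samples drawn from the analyst's training ROIs
    y_train  : N class labels
    """
    clf = SVC(kernel="rbf", gamma=0.333, C=100.0)  # RBF kernel, gamma, penalty parameter
    clf.fit(X_train, y_train)
    h, w, b = features.shape
    labels = clf.predict(features.reshape(-1, b))  # classify every pixel
    return labels.reshape(h, w)
```

A similar sketch could be written for the two-hidden-layer NN, for example with scikit-learn's MLPClassifier, using the iteration and learning-rate settings listed above.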

The outcomes of the proposed synergy strategy are also compared with classification results based on image fusion of the optical and SAR data. Several mature image fusion algorithms, including Brovey [23], GS [24], PCA [25], the à trous wavelet transform applied in the hue-intensity-saturation space (ATWT + HIS) [26], as well as LYR and CT, are used to fuse the optical and SAR data. The SVM [27] and NN classifiers are applied to the fused datasets, and representative classification results of the fused data are shown in Figure 7 for the SVM classifier and Figure 8 for the NN classifier.

4. Discussion

Since the paper aims to design a strategy for rapid urban mapping based on the synergy of optical and SAR remotely sensed data, the strategy should be straightforward, time efficient, robust, and easy to operate. At this stage, we compare the time consumption of the main steps: for example, building area extraction from SAR data takes 75.48 seconds, while wavelet fusion based on the wavelet transform and consistency detection for Xuzhou city takes 2225.92 seconds; all of these tests were run on a MacBook Pro with a 2.3 GHz Intel Core i5 CPU, 8 GB of 1333 MHz DDR3 RAM, and a 320 GB hard disk. The results of the proposed strategy are analyzed from three aspects, as follows.

4.1. Merged Accuracy Analysis. To evaluate the accuracy of the proposed strategy, the merged results are compared with the single land cover types obtained step by step, that is, with the vegetation and water body types extracted by thresholding the NDVI and MNDWI and with the human settlement extracted using the LISA and GLCM method. For the first test area, Xuzhou city, the original ALOS PALSAR and ALOS AVNIR-2 optical datasets are shown in Figure 3, the merged classification results are shown in Figure 5, and the statistical accuracy of each single category is shown in Table 2.

Table 2 shows that there is a significant accuracy improvement for every category after the "soft" merge: a 40.69% improvement for vegetation and a 7.93% improvement for human settlement in Xuzhou city, and a 47.01% and a 19.15% improvement for vegetation and human settlement, respectively, in Pavia city. For water body, the most stable land cover type, the accuracy improvement is slight: 2.16% for Xuzhou city and 6.42% for Pavia city.

4.2. Comparison with Supervised Classification Algorithms. The overall accuracy (OA) and kappa values of the SVM classification results for the optical, SAR, and merged data are shown in Table 3. As can be seen qualitatively from a comparison of Figures 5, 6, 7, and 8 and quantitatively from Table 3, the merged result yields a 13.65% and a 35.69% OA improvement over the SVM classification results of the optical and SAR data, respectively, in Xuzhou city, and a 6.53% and a 15.29% improvement over the optical and SAR results in Pavia city. The OA improvement is smaller for the ERS and Landsat dataset than for the PALSAR and AVNIR dataset. The reason might be the already high accuracy of human settlement and water body in Xuzhou city, together with the fact that the designed human settlement extraction algorithm is better suited to the higher-resolution PALSAR data than to the moderate-resolution ERS SAR data.
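The OA and kappa values in Table 3 can be reproduced from a classification map and a ground truth map with standard metrics; the sketch below assumes scikit-learn and treats the black unclassified label as 0.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate(classified, truth, ignore=0):
    """Overall accuracy and kappa, skipping unclassified (black) ground-truth pixels."""
    mask = truth != ignore
    y_true = truth[mask].ravel()
    y_pred = classified[mask].ravel()
    return accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred)
```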

4.3. Comparison with Classification Based on Data Fusion. The overall urban mapping accuracy is evaluated and compared with the results of the proposed strategy, as shown in Table 4. The numbers in Table 4 indicate that the proposed strategy improves classification accuracy significantly compared with the mature fusion methods; the proposed method obtains the highest OA score in both Xuzhou and Pavia. In general, there is a clear accuracy improvement when image fusion algorithms are applied to optical and SAR data with respect to optical and/or SAR data alone, and the proposed strategy has a further advantage over the mature image fusion methods.

5. Conclusions

In this paper, we have proposed a convenient method that combines optical and SAR data for rapid urban mapping. The proposed active-passive synergy is able to map human settlement at accuracy levels of 99.31% and 88.01%, water body at 91.92% and 77.49%, and vegetation at 85.73% and 91.72% for Xuzhou and Pavia, respectively. These results are much better than those extracted from a single remote sensing sensor, as shown in Table 2. Compared with supervised classification, the OA of our procedure is higher than that of the SVM classification results, and the advantage of the proposed procedure is also evident when compared with classification based on mature fusion outcomes. All of this demonstrates that our method provides robust, high-accuracy rapid urban mapping in the selected research areas. The method will generate valid results as long as the research area is covered by medium and/or high resolution SAR data (in our research, the ERS and PALSAR data) and by optical remotely sensed data with green, red, and near-infrared channels. The major contributions of this research can be summarized as follows: from an application perspective, it introduces a convenient rapid urban mapping strategy, and it is one of the first studies to merge optical and SAR data for urban cover status mapping and to compare the result with single-sensor satellite data. Human settlement is extracted from the active PALSAR/ERS data, while water body and vegetation are obtained from the passive AVNIR-2/TM data.

Nevertheless, the proposed strategy has some limitations. The most obvious is that the method relies on texture information from the SAR data and on specific spectral bands of the optical data. For instance, the human settlement extraction step works poorly, or not at all, on SAR data with very coarse resolution, and for optical data without a near-infrared channel a different solution must be adopted for water body and vegetation extraction, although few remote sensing sensors lack these commonly used bands. The other limitation is that the extracted vegetation and water body results are affected by the thresholds, which must be chosen differently for different research areas.

https://doi.org/10.1155/2017/9361592

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This paper is supported by the National Natural Science Foundation of China under Grants nos. 41601450 and 41401403, the Key Research Project Plan of Colleges and Universities in Henan Province (no. 16A420004), and the Ph.D. Fund of Henan Polytechnic University (nos. B2015-20, B2014-018). The authors would like to give their sincere thanks to Professor Peijun Du from Nanjing University for his suggestions and to Professor Paolo Gamba from the University of Pavia (UNIPV), Italy, for his suggestions on this research and for providing the ERS and Landsat data over the Pavia area.

References

[1] P. Gamba, F. Dell'Acqua, and G. Lisini, "BREC: The built-up area RECognition tool," in Proceedings of the 2009 Joint Urban Remote Sensing Event, pp. 1-5, May 2009.

[2] K. Khoshelham, C. Nardinocchi, E. Frontoni, A. Mancini, and P. Zingaretti, "Performance evaluation of automated approaches to building detection in multi-source aerial data," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 65, no. 1, pp. 123-133, 2010.

[3] D. Lu and Q. Weng, "A survey of image classification methods and techniques for improving classification performance," International Journal of Remote Sensing, vol. 28, no. 5, pp. 823-870, 2007.

[4] P. Zhong and R. Wang, "A multiple conditional random fields ensemble model for urban area detection in remote sensing optical images," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 12, pp. 3978-3988, 2007.

[5] J. Inglada, "Automatic recognition of man-made objects in high resolution optical remote sensing images by SVM classification of geometric image features," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, no. 3, pp. 236-248, 2007.

[6] K. Wikantika, A. Sinaga, F. Hadi, and S. Darmawan, "Quick assessment on identification of damaged building and land-use changes in the post-tsunami disaster with a quick-look image of IKONOS and Quickbird (a case study in Meulaboh City, Aceh)," International Journal of Remote Sensing, vol. 28, no. 13-14, pp. 3037-3044, 2007.

[7] P. J. Curran and G. Llewellyn, "Post-earthquake building collapse: a comparison of government statistics and estimates derived from SPOT HRVIR data," International Journal of Remote Sensing, vol. 26, pp. 2731-2740, 2005.

[8] P. Gamba, M. Aldrighi, M. Stasolla, and E. Sirtori, "A detailed comparison between two fast approaches to urban extent extraction in VHR SAR images," in Proceedings of the 2009 Joint Urban Remote Sensing Event, pp. 1-6, May 2009.

[9] D. Brunner, G. Lemoine, L. Bruzzone, and H. Greidanus, "Building height retrieval from VHR SAR imagery based on an iterative simulation and matching technique," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, pp. 1487-1504, 2010.

[10] X. L. Ding, G. X. Liu, Z. W. Li, Z. L. Li, and Y. Q. Chen, "Ground subsidence monitoring in Hong Kong with satellite SAR interferometry," Photogrammetric Engineering and Remote Sensing, vol. 70, no. 10, pp. 1151-1156, 2004.

[11] F. Dell'Acqua and P. Gamba, "Texture-based characterization of urban environments on satellite SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 1, pp. 153-159, 2003.

[12] C. Corbane, J.-F. Faure, N. Baghdadi, N. Villeneuve, and M. Petit, "Rapid urban mapping using SAR/optical imagery synergy," Sensors, vol. 8, no. 11, pp. 7125-7143, 2008.

[13] F. Yu and H. T. Li, "Synthesis of multi-source remote sensing data for classification based on Bayesian theory and MRF," Journal of Remote Sensing, vol. 16, pp. 809-826, 2012.

[14] P. Gamba, F. Dell'Acqua, G. Lisini, and F. Clsotta, "Improving building footprints in InSAR data by comparison with a Lidar DSM," Photogrammetric Engineering and Remote Sensing, vol. 72, no. 1, pp. 63-70, 2006.

[15] P. Gamba, M. Aldrighi, and M. Stasolla, "Robust Extraction of Urban Area Extents in HR and VHR SAR Images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 4, no. 1, pp. 27-34, 2011.

[16] J. L. Ping, C. J. Green, R. E. Zartman, and K. F. Bronson, "Exploring spatial dependence of cotton yield using global and local autocorrelation statistics," Field Crops Research, vol. 89, no. 2-3, pp. 219-236, 2004.

[17] M. Stasolla and P. Gamba, "Spatial indexes for the extraction of formal and informal human settlements from high-resolution SAR images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 1, no. 2, pp. 98-106, 2008.

[18] P. Gamba and M. Aldrighi, "SAR data classification of urban areas by means of segmentation techniques and ancillary optical data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 4, pp. 1140-1148, 2012.

[19] H. Xu, "Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery," International Journal of Remote Sensing, vol. 27, no. 14, pp. 3025-3033, 2006.

[20] Y. Tarabalka, J. A. Benediktsson, and J. Chanussot, "Spectral-spatial classification of hyperspectral imagery based on partitional clustering techniques," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 8, pp. 2973-2987, 2009.

[21] Y. Tarabalka, J. Chanussot, J. A. Benediktsson, J. Angulo, and M. Fauvel, "Segmentation and classification of hyperspectral data using watershed," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, pp. III652-III655, Boston, Mass, USA, July 2008.

[22] F. Moreno-Seco, J. M. Inesta, P. J. de Leon, and L. Mico, "Comparison of Classifier Fusion Methods for Classification in Pattern Recognition Tasks," in Structural, Syntactic, and Statistical Pattern Recognition, vol. 4109 of Lecture Notes in Computer Science, pp. 705-713, Springer, Berlin, Germany, 2006.

[23] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391-1402, 2005.

[24] V. Karathanassi, P. Kolokousis, and S. Ioannidou, "A comparison study on fusion methods using evaluation indicators," International Journal of Remote Sensing, vol. 28, no. 10, pp. 2309-2341, 2007.

[25] J. Dong, D. Zhuang, Y. Huang, and J. Fu, "Advances in multisensor data fusion: algorithms and applications," Sensors, vol. 9, no. 10, pp. 7771-7784, 2009.

[26] L. Alparone, S. Baronti, A. Garzelli, and F. Nencini, "Landsat ETM+ and SAR image fusion based on generalized intensity modulation," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 12, pp. 2832-2839, 2004.

[27] C. Chang and C. Lin, "LIBSVM: a Library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.

Ruimei Han, (1,2) Pei Liu, (1,2) Han Wang, (1,2) Leiku Yang, (1,2) Hanwei Zhang, (1,2) and Chao Ma (1,2)

(1) Key Laboratory of Mine Spatial Information Technologies of SBSM, Henan Polytechnic University, Jiaozuo 454003, China

(2) School of Surveying and Mapping Land Information Engineering, Henan Polytechnic University, Jiaozuo, Henan 454003, China

Correspondence should be addressed to Pei Liu; cumtlp@qq.com

Received 8 July 2017; Revised 23 September 2017; Accepted 11 October 2017; Published 12 December 2017

Academic Editor: Sergey A. Suslov

Caption: FIGURE 1: Flow chart of the proposed strategy.

Caption: FIGURE 2: Research area.

Caption: FIGURE 3: Experimental result over Xuzhou downtown area.

Caption: FIGURE 4: Experimental result over Pavia city.

Caption: FIGURE 5: SVM classification results of Xuzhou city.

Caption: FIGURE 6: SVM classification results over Pavia city.

Caption: FIGURE 7: SVM classification results of fused data.

Caption: FIGURE 8: NN classification results of fused data.
TABLE 1: Datasets.

              Date            SptR (m)        SpeR/Pm           Size (pixels)

PALSAR    Nov. 12, 2008       10 * 10        L-band, HH          1607 * 1347
AVNIR-2   Nov. 09, 2008       10 * 10       NIR, R, G, B         1607 * 1347
ERS-2     Oct. 03, 1994       30 * 30        C-band, VV           787 * 787
TM        Apr. 07, 1994       30 * 30     MIR, NIR, R, G, B       787 * 787

Note. SptR: spatial resolution. SpeR/Pm: spectral resolution
or polarimetric model.

TABLE 2: Classification accuracy of selected areas.

Combination                SOA               MOA
strategy           Xuzhou   Pavia    Xuzhou   Pavia

Human settlement   91.38%   68.86%   99.31%   88.01%
Water body         89.76%   71.37%   91.92%   77.79%
Vegetation         45.04%   44.71%   85.73%   91.72%

Note. SOA: single category accuracy before merging.
MOA: accuracy after merging.

TABLE 3: Comparison of classification accuracy with SVM.

             OA     Kappa   OA (S)   Kappa   OA (M)   Kappa
           (OPT)    (OPT)             (S)              (M)

SVM (XZ)   83.65%   0.63    61.61%   0.31    97.30%   0.91
SVM (PV)   81.95%   0.53    73.19%   0.16    88.48%   0.75

Note. SVM (XZ): SVM classification results of Xuzhou; SVM (PV): SVM
classification result of Pavia; OA (OPT/S/M): OA of optical/SAR/
merged data; kappa (OPT/S/M): kappa coefficient of optical/SAR/merged
data.

TABLE 4: Comparison of urban mapping accuracy with fused mapping results.

Area                                 OA
         LRS       PC       BT       GS      HSV       CT       M
XZ
  SVM   86.88%   91.57%   86.38%   91.21%   85.36%   84.01%   97.30%
  NN    87.50%   94.61%   97.71%   92.31%   91.84%   75.10%   97.30%
PV
  SVM   84.58%   89.36%   57.04%   86.40%   84.72%   79.66%   88.48%
  NN    86.18%   77.86%   24.83%   84.37%   83.19%   78.86%   88.48%