# Statistical Downscaling of Temperature with the Random Forest Model

1. Introduction

Global climate models (GCMs) are considered the most credible tools for projecting future global climate change [1]. However, there is a general mismatch between the spatial and temporal resolution of GCM output and the needs of regional-scale climate change impact studies. Various techniques have been developed to downscale GCM outputs to finer scales; these methods are broadly divided into dynamic (physical) and statistical (empirical) downscaling [2, 3]. Because dynamic downscaling is complex to model and computationally demanding, statistical downscaling techniques have been widely used in climate change studies for their simplicity and ease of implementation.

Statistical downscaling techniques can be divided into three categories: weather typing, weather generators, and regression-based methods. Various models have been developed and applied to downscale temperature, such as linear regression [4, 5], canonical correlation analysis (CCA) [6, 7], artificial neural networks (ANN) [8], and support vector machines (SVM) [9]. Several comparison studies have been made in the past. Schoof and Pryor [10] demonstrated that ANN models give better estimates than multiple linear regression (MLR) models for daily temperature downscaling at Indianapolis. Kostopoulou et al. [11] demonstrated that MLR and CCA are superior to ANN in the simulation of minimum and maximum temperatures over Greece. Duhan and Pandey [12] compared MLR, ANN, and least squares support vector machine (LS-SVM) models to downscale the temperature of the Tons River basin in India and demonstrated that LS-SVM models perform better than ANN and MLR models. These comparison studies indicate that none of the aforementioned methods can guarantee an accurate estimate of temperature in all situations.

Predictor selection is critical for developing a statistical downscaling model. Suitable predictors should be informative, and the relationship between the predictors and predictands should be stationary [13]. Informative predictors can be identified using statistical measures, such as Pearson, Spearman, and Kendall correlation analysis [9], CCA [14], maximum covariance analysis (MCA) [15], partial correlation (PAR) [16-18], and principal component analysis (PCA) [19, 20]. Interactive model fitting approaches are also used in predictor selection [21]. However, some limitations arise in application, such as the limited ability of traditional correlation analysis to capture nonstationary and nonlinear relationships [22]. Therefore, a precise statistical downscaling method with an inbuilt predictor selection mechanism will be helpful for researchers studying climate change impact.

The random forest (RF) model is an ensemble machine learning technique based on a combination of classification or regression trees and statistical learning theory [23]. RF models have been applied successfully in various fields, such as risk analysis [24], groundwater studies [25], remote sensing analysis [26], and flood hazard assessment [27], and show particular advantages in land cover classification [28-31]. RF models have two important advantages. The first is the ability to handle large datasets with correlated conditional variables: the method is accurate in prediction, nonparametric, and robust in the presence of outliers, noise, and overfitting [23, 32]. The second is the inbuilt variable importance evaluation: by permuting each variable randomly and comparing the resulting predictions with the original ones, the importance of each variable can be evaluated [33]. Based on this body of knowledge, RF should, in theory, be highly applicable to downscaling and able to handle multivariate and nonlinear problems. Eccel et al. [34] adopted RF along with four linear and nonlinear models in the postprocessing of two numerical weather prediction models for the prediction of minimum temperatures in an alpine region; however, RF served only as one of the comparative models in that study. The advantages and disadvantages of RF and its applicability to statistical downscaling have not been studied in detail.

The purpose of this study is to fully investigate the applicability of RF for statistical downscaling of temperature. The predictors are 26 large-scale variables derived from the National Centers for Environmental Prediction (NCEP) reanalysis daily dataset, and the predictands are the observed temperatures at 61 national standard stations located in the Pearl River basin. The RF is used to capture the complex relationship between selected NCEP predictors and the observed daily mean temperature at these stations. A comparison study was conducted with MLR, ANN, and SVM models, and the PCA and PAR methods were used in predictor selection for the comparative models to make the study comprehensive.

2. Study Area and Data Description

2.1. Study Area Description. The Pearl River (97°39′E-117°18′E, 3°41′N-29°15′N) is the second largest river in China, with a drainage area of 4.54 × 10⁵ km², of which 4.42 × 10⁵ km² is located in China [35, 36]. The Pearl River basin (Figure 1) lies in tropical and subtropical climate zones; the annual temperature is 14-22°C, and the annual precipitation is 1200-2200 mm. Precipitation decreases gradually from east to west, its regional distribution differs significantly, and its interannual variability is large. Precipitation is concentrated during April-September [36], accounting for 72-88% of the annual total [35].

The Pearl River basin is rich in water resources: the per capita water resource in the basin is 4700 cubic meters, 1.7 times the national average. However, the spatial and temporal distributions of the water resource are uneven, and flooding, waterlogging, drought, and saltwater intrusion are frequent natural disasters in the basin. The Pearl River basin is a highly developed region with a prominent position in the economic development of China.

2.2. Data Description

2.2.1. Temperature Data. The observed meteorological data used in this study are the daily mean temperatures at 61 national standard stations in the Pearl River basin. A continuous data series for the period 1961-2005 was selected for the study. The observations were obtained from the National Climate Center, which is in charge of monitoring, collecting, compiling, and releasing high-quality hydrological data in China. The observed mean temperature and standard deviation at the meteorological stations are shown in Figure 2. Over this period, the mean daily temperature increases from north to south, whereas the standard deviation shows the opposite spatial trend.

2.2.2. Large-Scale Atmospheric Variables. The NCEP reanalysis daily data were downloaded from http://www.cdc.noaa.gov; 26 large-scale atmospheric variables at a resolution of 2.5° × 2.5° were derived from the dataset for the period 1961-2005. The details of the predictors are shown in Table 1. The NCEP data were interpolated to each station using the bilinear interpolation method, which has been widely used in statistical downscaling [37-39].
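As an illustration of the interpolation step, the following is a minimal NumPy sketch (not the authors' code); the grid-cell corner values and the station coordinates are hypothetical.

```python
import numpy as np

def bilinear(grid, lons, lats, lon, lat):
    """Bilinearly interpolate a gridded field (indexed [lat, lon]) to a point."""
    j = int(np.clip(np.searchsorted(lons, lon, side="right") - 1, 0, lons.size - 2))
    i = int(np.clip(np.searchsorted(lats, lat, side="right") - 1, 0, lats.size - 2))
    tx = (lon - lons[j]) / (lons[j + 1] - lons[j])   # fractional distance east
    ty = (lat - lats[i]) / (lats[i + 1] - lats[i])   # fractional distance north
    return ((1 - tx) * (1 - ty) * grid[i, j] + tx * (1 - ty) * grid[i, j + 1]
            + (1 - tx) * ty * grid[i + 1, j] + tx * ty * grid[i + 1, j + 1])

# One 2.5-degree grid cell with hypothetical 2 m temperatures at its corners,
# interpolated to a hypothetical station at 113.3E, 23.1N.
lons = np.array([112.5, 115.0])
lats = np.array([22.5, 25.0])
field = np.array([[24.0, 25.0],
                  [22.0, 23.0]])
print(bilinear(field, lons, lats, 113.3, 23.1))
```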

3. Downscaling by Using the Random Forest Method

3.1. Random Forest Method

3.1.1. Methodology. The random forest (RF) method is an enhanced classification and regression tree (CART) method proposed by Breiman in 2001, which consists of an ensemble of unpruned decision trees generated through bootstrap samples of the training data and random variable subset selection.

As shown in Figure 3, the RF is composed of a set of CARTs, and the accuracy of the RF prediction depends on the strength of the individual CARTs [23]. Each CART consists of a root node, internal nodes, and leaves, and each internal node is associated with a test function that splits the incoming data. For regression trees, splitting follows a squared-residuals minimization algorithm: the sum of the variances in the two resulting nodes, weighted by their sample fractions, should be minimized, as shown in

$$\min_{x_j \le x_j^{R},\; j = 1, 2, \ldots, M} \left[\, p_l \operatorname{var}(Y_l) + p_\tau \operatorname{var}(Y_\tau) \,\right]. \tag{1}$$

Here, $p_l$ and $p_\tau$ are the fractions of samples in the left and right child nodes, $\operatorname{var}(Y_l)$ and $\operatorname{var}(Y_\tau)$ are the variances of the response vectors of the corresponding left and right child nodes, and $x_j \le x_j^{R}$, $j = 1, 2, \ldots, M$, are the candidate splitting questions over which the optimum is sought.
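The split criterion in (1) can be illustrated with a short sketch; the data and the exhaustive threshold scan below are illustrative, not the paper's implementation.

```python
import numpy as np

def split_score(x, y, threshold):
    """p_l*var(Y_l) + p_r*var(Y_r) for the split x <= threshold; lower is better."""
    left = x <= threshold
    score = 0.0
    for mask in (left, ~left):
        if mask.any():
            score += mask.mean() * y[mask].var()   # sample fraction * variance
    return score

def best_split(x, y):
    """Scan the midpoints between sorted unique x values for the minimizer."""
    xs = np.unique(x)
    cands = (xs[:-1] + xs[1:]) / 2.0
    return cands[int(np.argmin([split_score(x, y, t) for t in cands]))]

# Two clear clusters in y: the best threshold falls between x = 3.0 and x = 10.0.
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([5.0, 5.1, 4.9, 20.0, 20.2, 19.8])
print(best_split(x, y))
```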

Compared to traditional CART methods that use whole data sets, the RF trains each individual CART on bootstrap resamples (M samples) of the total dataset. Instead of using all the features, the RF uses a random selection of features to split each node. The best split is chosen among a randomly selected subset of Ntry input variables at each node. The tree is then grown to the maximum size without pruning. In this way, M CARTs are grown, and the final output is the average of the predictions of those trees.
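The training procedure described above (bootstrap resampling plus a random subset of Ntry variables at each node, with the final output averaged over the trees) can be sketched as follows; this uses scikit-learn decision trees on synthetic data as a stand-in for the R implementation used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the 26 NCEP predictors and the temperature predictand.
X = rng.normal(size=(500, 26))
y = X[:, 25] + 0.2 * X[:, 24] + 0.1 * rng.normal(size=500)

n_trees = 100
ntry = int(np.sqrt(X.shape[1]))          # Ntry = sqrt(number of variables)

trees = []
for _ in range(n_trees):
    idx = rng.integers(0, len(X), len(X))            # bootstrap resample
    t = DecisionTreeRegressor(max_features=ntry)     # random Ntry-subset per node
    trees.append(t.fit(X[idx], y[idx]))              # grown unpruned

# Final output: the average of the individual tree predictions.
pred = np.mean([t.predict(X) for t in trees], axis=0)
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))
```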

3.1.2. Importance of Variables. The RF performs an inbuilt cross-validation in parallel to the training process by using out-of-bag (OOB) samples, which are not chosen during the bootstrap split. In the regression mode, the total learning error is obtained by averaging the prediction error of each individual tree using their OOB samples, as shown in

$$E_{\mathrm{OOB}} = \frac{1}{n} \sum_{i=1}^{n} \left( Y_i - \hat{Y}(X_i) \right)^2, \tag{2}$$

where $n$ is the total number of OOB samples, $\hat{Y}(X_i)$ is the RF output corresponding to a given input sample $X_i$, and $Y_i$ is the observed output.

The RF provides two methods of evaluating the importance of each variable [27]. The first evaluates importance based on how much poorer the prediction becomes when the variable is permuted randomly. The prediction errors of the OOB samples of each tree (termed $E_{\mathrm{OOB1}}$) are calculated during the training procedure. At the same time, each input variable in the OOB samples is permuted one at a time, and the modified datasets are again predicted by the tree (giving $E_{\mathrm{OOB2}}$). At the end of training, the importance of each variable is obtained by averaging the differences between $E_{\mathrm{OOB1}}$ and $E_{\mathrm{OOB2}}$ over all trees and normalizing by their standard deviation. The second method is based on the node impurity criterion. As illustrated in (1), we can calculate how much each split decreases the node impurity; for regression trees, this decrease is the difference between the residual sum of squares (RSS) before and after the split. The importance of a variable can then be obtained by accumulating the decreases in node impurity attributable to that variable over all trees [40].
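Both importance measures can be sketched as follows; this is an illustrative scikit-learn version on synthetic data (for brevity, the permutation errors are computed on the training set here rather than on the OOB samples used in the $E_{\mathrm{OOB1}}$ versus $E_{\mathrm{OOB2}}$ comparison).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=600)   # only variable 0 is informative

rf = RandomForestRegressor(n_estimators=200, random_state=1)
rf.fit(X, y)

# Method 1: permutation importance -- how much the squared error grows when a
# single variable is shuffled while the others are left untouched.
base = np.mean((rf.predict(X) - y) ** 2)
perm_imp = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    perm_imp.append(np.mean((rf.predict(Xp) - y) ** 2) - base)

# Method 2: mean decrease in node impurity (RSS) accumulated over all trees;
# scikit-learn exposes this as feature_importances_.
print(int(np.argmax(perm_imp)), int(np.argmax(rf.feature_importances_)))
```

Both measures single out variable 0, the only one that actually drives the response.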

3.2. Model Implementation and Validation. The RF is utilized to simulate the nonlinear relationship between the NCEP predictors and the observed temperatures at the 61 stations. In this study, the RF algorithm is implemented in the R programming language with the "randomForest" package, which has built-in functions to measure variable importance. As illustrated previously, M and Ntry are the two sensitive parameters of the RF models. Ntry is set to the square root of the total number of variables [41], and M, which influences the convergence of the RF, is determined through the OOB error.

For model development, the daily mean temperature series from the national standard stations and the large-scale atmospheric variables of the NCEP data are divided into two datasets. The first 31 years (1961-1991) are used for calibrating the regression model, while the remaining 14 years of data (1992-2005) are used to validate the model. To test the performance of the proposed model, two data-driven models commonly used in statistical downscaling, ANN and SVM, are selected for comparison. The three-layer back propagation artificial neural network (BP-ANN) and LS-SVM are adopted, which have been applied successfully in downscaling temperature [12]. The MATLAB functions "newff" and "tunelssvm" are employed to obtain the model parameters, and multiple linear regression is also implemented in MATLAB. The PAR [18] and PCA [19, 20] methods are used in predictor selection for the MLR, ANN, and SVM models.

3.3. Model Performance Analysis. Five criteria are selected to evaluate the performance of the RF and comparative models, including the Nash-Sutcliffe model efficiency index (Nash), root mean square error (RMSE), mean absolute error (MAE), correlation coefficient (R), and model Bias (Bias), which are defined as

$$
\begin{aligned}
\mathrm{Nash} &= 1 - \frac{\sum_{i=1}^{n} \left( y_{\mathrm{obs},i} - y_{\mathrm{sim},i} \right)^2}{\sum_{i=1}^{n} \left( y_{\mathrm{obs},i} - \bar{y}_{\mathrm{obs}} \right)^2}, \\
\mathrm{RMSE} &= \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_{\mathrm{sim},i} - y_{\mathrm{obs},i} \right)^2}, \\
\mathrm{MAE} &= \frac{1}{n} \sum_{i=1}^{n} \left| y_{\mathrm{sim},i} - y_{\mathrm{obs},i} \right|, \\
R &= \frac{\sum_{i=1}^{n} \left( y_{\mathrm{obs},i} - \bar{y}_{\mathrm{obs}} \right) \left( y_{\mathrm{sim},i} - \bar{y}_{\mathrm{sim}} \right)}{\sqrt{\sum_{i=1}^{n} \left( y_{\mathrm{obs},i} - \bar{y}_{\mathrm{obs}} \right)^2 \sum_{i=1}^{n} \left( y_{\mathrm{sim},i} - \bar{y}_{\mathrm{sim}} \right)^2}}, \\
\mathrm{Bias} &= \frac{1}{n} \sum_{i=1}^{n} \left( y_{\mathrm{sim},i} - y_{\mathrm{obs},i} \right).
\end{aligned} \tag{3}
$$

Here, $y_{\mathrm{obs},i}$ is the $i$th observed predictand, $\bar{y}_{\mathrm{obs}}$ is the mean of the observed predictands, $y_{\mathrm{sim},i}$ is the $i$th simulated predictand, and $\bar{y}_{\mathrm{sim}}$ is the mean of the simulated predictands. In general, higher Nash and R indicate better model efficiency, while smaller values of RMSE, MAE, and the absolute Bias indicate higher accuracy of the model prediction.
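Assuming the standard definitions of these criteria (with Bias taken as the mean of the simulated-minus-observed differences), a compact implementation might look like:

```python
import numpy as np

def criteria(obs, sim):
    """Five criteria from (3): Nash, RMSE, MAE, R, and Bias."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nash = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]
    bias = np.mean(sim - obs)
    return nash, rmse, mae, r, bias

obs = np.array([20.0, 22.0, 25.0, 18.0])   # hypothetical observed temperatures
print(criteria(obs, obs))                   # a perfect simulation
```

A perfect simulation yields Nash = 1, RMSE = MAE = Bias = 0, and R = 1; a constant warm offset leaves R at 1 but shows up in the Bias.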

4. Result Analysis and Discussion

4.1. The Choice of Predictors. One of the most important steps in the development of downscaling models is the choice of appropriate predictors [42]. The RF can evaluate the relative contribution of each predictor to the downscaling results by accumulating the RSS decreases over all trees, which makes choosing predictors for the RF convenient. Taking two stations as examples, Figure 4 shows the relative importance of the predictors at Guangzhou and Nanning Stations, where the predictors are numbered as in Table 1. For a comprehensive investigation of predictor importance across the Pearl River basin stations, the rank of the relative importance of each predictor at each station (the most important predictor being ranked 1) is calculated and shown in Figure 5.

According to Figures 4 and 5, the number 26 predictor, the mean temperature at 2 m height, is the most important predictor for all RF models at the 61 stations. Similarly, the number 25 predictor, the surface-specific humidity, ranked second for predictor importance at all stations. In contrast, the number 5 predictor, surface vorticity, is the least important predictor for all stations. The ranks of the other predictors vary for the different stations, which may be caused by the meteorological and geographical differences of the stations.

Because the OOB samples provide an unbiased estimation of RF model performance, the rationality of including each factor can be tested using the MSEs of the OOB samples (denoted $E_{\mathrm{OOB}}$). The predictors of each station are screened using the ranks of relative importance evaluated in the previous step. The MSE of the OOB samples at each station is then calculated as predictors are added in order of importance and plotted in Figure 6.

The results show that $E_{\mathrm{OOB}}$ generally decreases, but at a decreasing rate, as more predictors are included, indicating that the RF successfully avoids overfitting. However, the improvement in model performance from including more predictors becomes slight once the number of predictors exceeds a certain value, which is seven in this study. Given sufficient computational resources, all predictors were therefore used in the RF modeling to downscale the temperature at these stations.
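The screening procedure (refitting the RF with a growing set of ranked predictors and tracking the OOB error) can be sketched as follows; the data, the importance ranking, and the coefficients are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 10))
# Coefficients decrease with the column index, mimicking predictors already
# ranked from most to least important.
y = X @ np.linspace(1.0, 0.0, 10) + 0.2 * rng.normal(size=400)

oob_mse = []
for k in range(1, X.shape[1] + 1):
    rf = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=2)
    rf.fit(X[:, :k], y)                      # top-k ranked predictors only
    # oob_prediction_ holds the unbiased OOB estimate, used here as E_OOB
    oob_mse.append(float(np.mean((rf.oob_prediction_ - y) ** 2)))

print(oob_mse[0] > oob_mse[-1])   # the error falls as predictors are added
```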

4.2. Comparative Study. The performance of the RF model is compared with that of the MLR, ANN, and SVM models. Two predictor selection methods, PAR and PCA, are applied in the four models.

The partial correlations of the 26 predictors with the predictands are shown in Figure 7. It can be observed that the ranks of the predictors' partial correlation vary in different stations. In general, the number 1 predictor, mean sea level pressure, has the largest partial correlation in most of the stations. However, the number 5 and number 8 predictors, corresponding to surface vorticity and 500 hPa airflow strength, have the smallest partial correlations in the majority of the stations.

The partial correlation coefficients are used to decide which variables are included in the input combination. Based on former studies [18, 43, 44], a combination of the seven predictors with the highest partial correlation coefficients is selected and applied in the MLR, ANN, and SVM models. These results are marked as MLR-par, ANN-par, and SVM-par.

PCA, which was commonly used by previous researchers, is also used in the comparative study. Before PCA, the predictors are standardized by subtracting the mean from the original values and then dividing the results by the standard deviation of the original variables. The PCA method is then applied to the standardized NCEP predictor variables to extract principal components (PCs) that are orthogonal. The obtained PCs preserve more than 90% of the variance present at each station. Then, the PCs are used in the MLR, ANN, and SVM modeling, and these results are marked as MLR-pca, ANN-pca, and SVM-pca.
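The standardization and PCA steps above (retaining PCs that preserve at least 90% of the variance) can be sketched with scikit-learn; the predictor matrix here is a random correlated stand-in for the NCEP variables.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Random mixing produces correlated columns, a stand-in for the NCEP predictors.
X = rng.normal(size=(300, 26)) @ rng.normal(size=(26, 26))

# Standardize: subtract the mean, divide by the standard deviation.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# A float n_components asks scikit-learn for the smallest number of orthogonal
# PCs whose cumulative explained variance reaches that fraction (>= 90% here).
pca = PCA(n_components=0.90)
pcs = pca.fit_transform(Xs)

print(pcs.shape[1], round(float(pca.explained_variance_ratio_.sum()), 3))
```

With correlated predictors, far fewer than 26 components are needed to reach the 90% threshold, which is the dimensionality reduction PCA provides.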

The calibration and validation results of the RF and comparative models are summarized in Table 2, and the average Nash in the calibration and validation periods for all models are plotted in Figure 8.

The results show that the RF is superior to the comparative models in both the calibration and validation periods. The average values of Nash, RMSE, MAE, R, and Bias of the RF over the 61 stations are 0.98, 0.80, 0.58, 0.99, and 0.00 in the calibration period and 0.94, 1.46, 1.12, 0.97, and 0.21 in the validation period, respectively; all of these criteria are superior to those of the comparative models. Figure 8 also shows that the RF has higher precision and more stability than the other models in the study area.

The results of the two predictor selection methods are also compared for the MLR, ANN, and SVM models; PAR is superior to PCA in all three. For the ANN models, the average R is similar in the calibration and validation periods; however, the average Nash, RMSE, MAE, and Bias obtained with PAR are superior to those with PCA, improving by 0.01, 0.12, 0.08, and 0 in the calibration period and 0.02, 0.20, 0.15, and 0.28 in the validation period, respectively. For the SVM models, the increases in average Nash and R are 0.03 and 0.01 in the calibration period and 0.04 and 0.02 in the validation period, respectively, and the decreases in average RMSE, MAE, and Bias are 0.21, 0.16, and 0.01 in the calibration period and 0.33, 0.25, and 0.28 in the validation period. Similarly, PAR is superior to PCA for the MLR models on most of the criteria, with increases in Nash and R and decreases in RMSE and MAE.

The spatial distributions of model precision for the RF and the comparative models are shown in Figure 9, in which Nash is selected as the evaluation criterion. For the comparative results, PAR was used in the MLR, ANN, and SVM modeling. The RF is superior to the comparative models at most of the individual stations. In addition, the precision of the RF at stations located in the plain region is higher than that in the mountainous regions, which indicates limited applicability in complex terrain.

5. Summary and Conclusions

Statistical downscaling models are effective in bridging the mismatch between large-scale climate models and local-scale hydrological responses. In this study, a statistical model based on the random forest method was proposed and applied to the Pearl River basin. The objective of this study was to investigate whether the RF approach could successfully simulate the complicated relationship between the predictors and predictands. The daily mean temperature observations from 61 stations in the Pearl River basin and the NCEP reanalysis daily data from 1961 to 2005 were selected in order to compare the results of the RF model with those of the MLR, ANN, and SVM models. The following summarizes the discussion points and conclusions derived from this analysis:

(1) The RF model successfully simulated the relationship between the predictors and predictands and performed better than the MLR, ANN, and SVM models. According to the five statistical criteria, the RF showed the highest model efficiency in both the calibration and validation periods. In addition, the model efficiency of the RF increased continuously as more predictors were considered; in this study, all 26 NCEP predictors were used in the RF modeling. By taking full advantage of the information in the predictors while avoiding the influence of noise, the RF outperformed the other models.

(2) The built-in variable importance evaluation and the OOB samples of the RF made predictor selection convenient. The variable importance evaluation ranked the importance of each predictor for the prediction of the predictands, while the OOB samples gave an unbiased estimation of the model efficiency; both were helpful in predictor selection. Although all predictors were considered in this study, these features will be most valuable when the RF is used in more complex downscaling problems, such as precipitation downscaling.

(3) The spatial distribution of model precision at the individual stations for the RF and the comparative models was also discussed. Although the RF was superior to the comparative models for most of the stations, there was still room for improvement for the prediction accuracy in mountainous areas.

As this study mainly discussed the development and comparison of temperature downscaling methods, future projections in the Pearl River basin were not explored. We will pursue further research on the application of the RF to additional meteorological elements, such as precipitation, which will be helpful in understanding the strengths and limitations of the RF method. Furthermore, surface parameters, such as elevation, slope, and vegetation, also influence the distribution of these meteorological elements, especially in complex terrain [45]; we will further explore incorporating these parameters into the model input.

https://doi.org/10.1155/2017/7265178

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Bo Pang contributed to the literature review, statistical analysis, manuscript preparation, and editing; Jiajia Yue and Gang Zhao contributed to the model design and simulation; and Zongxue Xu contributed to the manuscript revision and review and supervised the study.

Acknowledgments

This research was funded by the Youth Science Foundation of the National Natural Science Foundation of China (51309009), the National Natural Science Foundation of China (91125015), and the National Key Research and Development Program (during the 13th Five-Year Plan) under Grant no. 2016YFC0401309, Ministry of Science and Technology, China.

References

[1] IPCC, Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK, 2007.

[2] R. L. Wilby, S. P. Charles, E. Zorita, B. Timbal, P. Whetton, and L. O. Mearns, Guidelines for Use of Climate Scenarios Developed from Statistical Downscaling Methods, UEA, Norwich, UK, 2004.

[3] J. T. Chu, J. Xia, C.-Y. Xu, and V. P. Singh, "Statistical downscaling of daily mean temperature, pan evaporation and precipitation for climate change scenarios in Haihe River, China," Theoretical and Applied Climatology, vol. 99, no. 1-2, pp. 149-161, 2010.

[4] R. L. Wilby, L. E. Hay, and G. H. Leavesley, "A comparison of downscaled and raw GCM output: implications for climate change scenarios in the San Juan River Basin, Colorado," Journal of Hydrology, vol. 225, no. 1-2, pp. 67-91, 1999.

[5] M. K. Goyal and C. S. P. Ojha, "Downscaling of surface temperature for lake catchment in an arid region in India using linear multiple regression and neural networks," International Journal of Climatology, vol. 32, no. 4, pp. 552-566, 2012.

[6] R. Huth, "Statistical downscaling in central Europe: evaluation of methods and potential predictors," Climate Research, vol. 13, no. 2, pp. 91-101, 1999.

[7] D. Chen and Y. Chen, "Association between winter temperature in China and upper air circulation over East Asia revealed by canonical correlation analysis," Global and Planetary Change, vol. 37, no. 3-4, pp. 315-325, 2003.

[8] P. Coulibaly, Y. B. Dibike, and F. Anctil, "Downscaling precipitation and temperature with temporal neural networks," Journal of Hydrometeorology, vol. 6, no. 4, pp. 483-496, 2005.

[9] A. Anandhi, V. V. Srinivas, D. N. Kumar, and R. S. Nanjundiah, "Role of predictors in downscaling surface temperature to river basin in India for IPCC SRES scenarios using support vector machine," International Journal of Climatology, vol. 29, no. 4, pp. 583-603, 2009.

[10] J. T. Schoof and S. C. Pryor, "Downscaling temperature and precipitation: a comparison of regression-based methods and artificial neural networks," International Journal of Climatology, vol. 21, no. 7, pp. 773-790, 2001.

[11] E. Kostopoulou, C. Giannakopoulos, C. Anagnostopoulou et al., "Simulating maximum and minimum temperature over Greece: a comparison of three downscaling techniques," Theoretical and Applied Climatology, vol. 90, no. 1-2, pp. 65-82, 2007.

[12] D. Duhan and A. Pandey, "Statistical downscaling of temperature using three techniques in the Tons River basin in Central India," Theoretical and Applied Climatology, vol. 121, no. 3-4, pp. 605-622, 2015.

[13] D. Maraun, H. W. Rust, and T. J. Osborn, "The annual cycle of heavy precipitation across the United Kingdom: a model based on extreme value statistics," International Journal of Climatology, vol. 29, no. 12, pp. 1731-1744, 2009.

[14] M. Widmann, "One-dimensional CCA and SVD, and their relationship to regression maps," Journal of Climate, vol. 18, no. 14, pp. 2785-2792, 2005.

[15] M. K. Tippett, T. DelSole, S. J. Mason, and A. G. Barnston, "Regression-based methods for finding coupled patterns," Journal of Climate, vol. 21, no. 17, pp. 4384-4398, 2008.

[16] M. Hessami, P. Gachon, T. B. M. J. Ouarda, and A. StHilaire, "Automated regression-based statistical downscaling tool," Environmental Modelling and Software, vol. 23, no. 6, pp. 813-834, 2008.

[17] Z. Liu, Z. Xu, S. P. Charles, G. Fu, and L. Liu, "Evaluation of two statistical downscaling models for daily precipitation over an arid basin in China," International Journal of Climatology, vol. 31, no. 13, pp. 2006-2020, 2011.

[18] C. Yang, N. Wang, S. Wang, and L. Zhou, "Performance comparison of three predictor selection methods for statistical downscaling of daily precipitation," Theoretical and Applied Climatology, pp. 1-12, 2016.

[19] I. Hanssen-Bauer, E. J. Førland, J. E. Haugen, and O. E. Tveito, "Temperature and precipitation scenarios for Norway: comparison of results from dynamical and empirical downscaling," Climate Research, vol. 25, no. 1, pp. 15-27, 2003.

[20] A. Hannachi, I. T. Jolliffe, and D. B. Stephenson, "Empirical orthogonal functions and related techniques in atmospheric science: a review," International Journal of Climatology, vol. 27, no. 9, pp. 1119-1152, 2007.

[21] O. Fistikoglu and U. Okkan, "Statistical downscaling of monthly precipitation using NCEP/NCAR reanalysis data for tahtali river basin in Turkey," Journal of Hydrologic Engineering, vol. 16, no. 2, pp. 157-164, 2010.

[22] A. Sharma, "Seasonal to interannual rainfall probabilistic forecasts for improved water supply management: part 1--A strategy for system predictor identification," Journal of Hydrology, vol. 239, no. 1-4, pp. 232-239, 2000.

[23] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.

[24] M. Malekipirbazari and V. Aksakalli, "Risk assessment in social lending via random forests," Expert Systems with Applications, vol. 42, no. 10, pp. 4621-4631, 2015.

[25] P. Baudron, F. Alonso-Sarria, J. L. Garcia-Arostegui, F. Canovas-Garcia, D. Martinez-Vicente, and J. Moreno-Brotons, "Identifying the origin of groundwater samples in a multi-layer aquifer system with Random Forest classification," Journal of Hydrology, vol. 499, pp. 303-315, 2013.

[26] K. Tatsumi, Y. Yamashiki, M. A. Canales Torres, and C. L. R. Taipe, "Crop classification of upland fields using Random forest of time-series Landsat 7 ETM+ data," Computers and Electronics in Agriculture, vol. 115, pp. 171-179, 2015.

[27] Z. Wang, C. Lai, X. Chen, B. Yang, S. Zhao, and X. Bai, "Flood hazard risk assessment model based on random forest," Journal of Hydrology, vol. 527, pp. 1130-1141, 2015.

[28] P. O. Gislason, J. A. Benediktsson, and J. R. Sveinsson, "Random forests for land cover classification," Pattern Recognition Letters, vol. 27, no. 4, pp. 294-300, 2006.

[29] M. Pal, "Random forest classifier for remote sensing classification," International Journal of Remote Sensing, vol. 26, no. 1, pp. 217-222, 2005.

[30] V. F. Rodriguez-Galiano, M. Chica-Olmo, F. Abarca-Hernandez, P. M. Atkinson, and C. Jeganathan, "Random Forest classification of Mediterranean land cover using multi-seasonal imagery and multi-seasonal texture," Remote Sensing of Environment, vol. 121, pp. 93-107, 2012.

[31] V. F. Rodriguez-Galiano, B. Ghimire, J. Rogan, M. Chica-Olmo, and J. P. Rigol-Sanchez, "An assessment of the effectiveness of a random forest classifier for land-cover classification," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 67, no. 1, pp. 93-104, 2012.

[32] C. Strobl and A. Zeileis, "Danger: high power!--Exploring the statistical properties of a test for random forest variable importance," Tech. Rep. 17, University of Munich, 2008.

[33] V. Svetnik, A. Liaw, C. Tong, J. Christopher Culberson, R. P. Sheridan, and B. P. Feuston, "Random forest: a classification and regression tool for compound classification and QSAR modeling," Journal of Chemical Information and Computer Sciences, vol. 43, no. 6, pp. 1947-1958, 2003.

[34] E. Eccel, L. Ghielmi, P. Granitto, R. Barbiero, F. Grazzini, and D. Cesari, "Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models," Nonlinear Processes in Geophysics, vol. 14, no. 3, pp. 211-222, 2007.

[35] Pearl River Water Resources Committee (PRWRC), The Zhujiang Archive, vol. 1, Guangdong Science and Technology Press, Guangzhou, China, 1991, (in Chinese).

[36] Q. Zhang, C.-Y. Xu, and Z. Zhang, "Observed changes of drought/wetness episodes in the Pearl River basin, China, using the standardized precipitation index and aridity index," Theoretical and Applied Climatology, vol. 98, no. 1-2, pp. 89-99, 2009.

[37] E. V. Dmitriev, I. V. Nogotkov, V. S. Rogutov, G. Komenko, and A. Chavro, "Temporal error estimate for statistical downscaling regional meteorological models," Fisica de la Tierra, vol. 19, pp. 219-241, 2007.

[38] C. Lavaysse, M. Vrac, P. Drobinski, M. Lengaigne, and T. Vischel, "Statistical downscaling of the French Mediterranean climate: assessment for present and projection in an anthropogenic scenario," Natural Hazards and Earth System Science, vol. 12, no. 3, pp. 651-670, 2012.

[39] L. Liu, Z. Liu, X. Ren, T. Fischer, and Y. Xu, "Hydrological impacts of climate change in the Yellow River Basin for the 21st century using hydrological model and statistical downscaling model," Quaternary International, vol. 244, no. 2, pp. 211-220, 2011.

[40] F.-F. Ai, J. Bin, Z.-M. Zhang et al., "Application of random forests to select premium quality vegetable oils by their fatty acid composition," Food Chemistry, vol. 143, pp. 472-478, 2014.

[41] P. Gislason, J. Benediktsson, and J. Sveinsson, "Random forest classification of multisource remote sensing and geographic data," in Proceedings of the Geoscience and Remote Sensing Symposium, vol. 2, pp. 1049-1052, IEEE, Anchorage, Alaska, USA, 2004.

[42] B. C. Hewitson and R. G. Crane, "Climate downscaling: techniques and application," Climate Research, vol. 7, no. 2, pp. 85-95, 1996.

[43] B. Timbal, A. Dufour, and B. McAvaney, "An estimate of future climate change for western France using a statistical downscaling technique," Climate Dynamics, vol. 20, no. 7-8, pp. 807-823, 2003.

[44] K. Tatsumi, T. Oizumi, and Y. Yamashiki, "Effects of climate change on daily minimum and maximum temperatures and cloudiness in the Shikoku region: a statistical downscaling model approach," Theoretical and Applied Climatology, vol. 120, no. 1-2, pp. 87-98, 2015.

[45] Z. A. Holden, J. T. Abatzoglou, C. H. Luce, and L. S. Baggett, "Empirical downscaling of daily minimum air temperature at very fine resolutions in complex terrain," Agricultural and Forest Meteorology, vol. 151, no. 8, pp. 1066-1073, 2011.

Bo Pang, (1,2) Jiajia Yue, (1,2) Gang Zhao, (1,2) and Zongxue Xu (1,2)

(1) College of Water Sciences, Beijing Normal University, Beijing 100875, China

(2) Beijing Key Laboratory of Urban Hydrological Cycle and Sponge City Technology, Beijing 100875, China

Correspondence should be addressed to Gang Zhao; gangzhao@mail.bnu.edu.cn

Received 20 December 2016; Revised 25 March 2017; Accepted 17 May 2017; Published 15 June 2017

Academic Editor: Jorge E. Gonzalez

Caption: Figure 1: Study area.

Caption: Figure 2: Observed mean temperature and standard deviation at the meteorological stations.

Caption: Figure 3: Structure of a random forest model.

Caption: Figure 4: Relative significance of the predictors at Guangzhou Station and Nanning Station.

Caption: Figure 5: Rank of relative importance of predictors in the 61 stations.

Caption: Figure 6: MSEs of out-of-bag samples.

Caption: Figure 7: Rank of partial correlation of predictors in the 61 stations.

Caption: Figure 8: Average Nash-Sutcliffe efficiency in the calibration and validation periods for all models.

Caption: Figure 9: Spatial distribution of Nash-Sutcliffe efficiency for the different models.
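Figures 3 and 6 refer to the random forest's bootstrap structure and the mean squared error of its out-of-bag (OOB) samples. As an illustration of the OOB idea only (not a reconstruction of the paper's model), the sketch below draws bootstrap samples and scores each "tree" — here reduced to a trivial mean predictor — on the observations it never saw during fitting; the function name and stub model are hypothetical.

```python
import random

def oob_mse(data, n_trees=100, seed=0):
    """Estimate out-of-bag MSE: each 'tree' (a mean predictor fitted
    to a bootstrap sample) is evaluated only on the data points that
    were left out of its bootstrap sample."""
    rng = random.Random(seed)
    n = len(data)
    squared_errors = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap: sample n with replacement
        in_bag = set(idx)
        fit = sum(data[i] for i in idx) / n          # 'train' the stub model
        out_of_bag = [data[i] for i in range(n) if i not in in_bag]
        squared_errors.extend((y - fit) ** 2 for y in out_of_bag)
    return sum(squared_errors) / len(squared_errors)
```

In a real random forest the stub model is replaced by a regression tree grown on the bootstrap sample, but the OOB bookkeeping is the same, which is why OOB MSE serves as a built-in validation estimate without a separate hold-out set.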

Table 1: NCEP reanalysis predictors.

| Number | Predictor | Description |
|---|---|---|
| 1 | mslp | Mean sea level pressure |
| 2 | p_f | Surface airflow strength |
| 3 | p_u | Surface zonal velocity |
| 4 | p_v | Surface meridional velocity |
| 5 | p_z | Surface vorticity |
| 6 | p_th | Surface wind direction |
| 7 | p_zh | Surface divergence |
| 8 | p5_f | 500 hPa airflow strength |
| 9 | p5_u | 500 hPa zonal velocity |
| 10 | p5_v | 500 hPa meridional velocity |
| 11 | p5_z | 500 hPa vorticity |
| 12 | p5th | 500 hPa wind direction |
| 13 | p5zh | 500 hPa divergence |
| 14 | p8_f | 850 hPa airflow strength |
| 15 | p8_u | 850 hPa zonal velocity |
| 16 | p8_v | 850 hPa meridional velocity |
| 17 | p8_z | 850 hPa vorticity |
| 18 | p8th | 850 hPa wind direction |
| 19 | p8zh | 850 hPa divergence |
| 20 | p500 | 500 hPa geopotential height |
| 21 | p850 | 850 hPa geopotential height |
| 22 | r500 | 500 hPa relative humidity |
| 23 | r850 | 850 hPa relative humidity |
| 24 | rhum | Surface relative humidity |
| 25 | shum | Surface-specific humidity |
| 26 | temp | Mean temperature at 2 m height |

Table 2: Performance assessment for predictands in calibration and validation.

| Model | Period | Nash | RMSE | MAE | R | Bias |
|---|---|---|---|---|---|---|
| MLR-par | Calibration | 0.92 | 1.76 | 1.35 | 0.96 | 0.00 |
| MLR-par | Validation | 0.92 | 1.66 | 1.29 | 0.96 | 0.01 |
| MLR-pca | Calibration | 0.90 | 1.96 | 1.51 | 0.95 | 0.00 |
| MLR-pca | Validation | 0.88 | 2.00 | 1.55 | 0.94 | 0.31 |
| ANN-par | Calibration | 0.93 | 1.56 | 1.19 | 0.97 | 0.00 |
| ANN-par | Validation | 0.93 | 1.52 | 1.17 | 0.97 | 0.07 |
| ANN-pca | Calibration | 0.92 | 1.68 | 1.28 | 0.96 | 0.00 |
| ANN-pca | Validation | 0.91 | 1.72 | 1.32 | 0.96 | 0.35 |
| SVM-par | Calibration | 0.92 | 1.76 | 1.34 | 0.96 | -0.09 |
| SVM-par | Validation | 0.92 | 1.66 | 1.28 | 0.96 | -0.06 |
| SVM-pca | Calibration | 0.89 | 1.97 | 1.50 | 0.95 | -0.10 |
| SVM-pca | Validation | 0.88 | 1.99 | 1.53 | 0.94 | 0.22 |
| RF | Calibration | 0.98 | 0.80 | 0.58 | 0.99 | 0.00 |
| RF | Validation | 0.94 | 1.46 | 1.12 | 0.97 | 0.21 |
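The five scores reported in Table 2 are standard verification metrics for downscaled temperature series. A minimal sketch of how they can be computed from paired observed and simulated values follows; the function names are our own, and bias is taken as mean(simulated) minus mean(observed), one common convention.

```python
import math

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance sum of observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rmse(obs, sim):
    """Root-mean-square error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def pearson_r(obs, sim):
    """Pearson correlation coefficient."""
    mo = sum(obs) / len(obs)
    ms = sum(sim) / len(sim)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

def bias(obs, sim):
    """Mean difference, simulated minus observed."""
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)
```

For example, a simulation offset from the observations by a constant +0.5 gives RMSE = MAE = bias = 0.5 and R = 1.0, while the Nash-Sutcliffe efficiency is penalized by the squared offset relative to the observed variance.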

Publication: Advances in Meteorology, Research Article, 2017