
A Deformation Prediction Approach for Supertall Building Using Sensor Monitoring System.

1. Introduction

Affected by their own structural characteristics and by external changes, supertall buildings continuously undergo complex deformations such as differential settlement, compression, inclination, deflection, and vibration during construction. High-precision deformation monitoring and prediction are therefore necessary to ensure construction safety. With the continuous advancement of sensor technology, much progress has been made in obtaining deformation data by installing high-precision sensors on supertall buildings. For example, Su, J.Z. et al. designed a precision structural performance monitoring system consisting of more than 400 sensors and applied it to the structural health monitoring of the Shanghai Tower [1]; Chen, W.H. et al. designed a health monitoring system consisting of anemometers, strain gauges, GPS, and other sensors and deployed it on the Guangzhou TV Tower during typhoon periods to monitor structural health [2]; Ni, Y.Q. et al. used wireless sensors to monitor the ambient vibration of the Guangzhou TV Tower during construction [3]; Gu, M. et al. proposed a new method for optimal sensor placement based on a simplified multidegree-of-freedom system, calculating the weak-axis modal matrix with an equivalent-stiffness parameter identification method, and verified its feasibility by numerical calculation [4]; Yi, T.H. et al. proposed a modified monkey algorithm for the optimal sensor placement of structural health monitoring systems and proved its effectiveness on super high-rise building cases [5].

Using monitoring data to predict the deformation of supertall buildings is one of the current research hotspots. Such deformation exhibits strong spatiotemporal linkage and obvious time-varying characteristics. In the temporal dimension, the monitoring data depend strongly on a sliding time window; across different deformation periods, the deformation of the overall structure is both random and time-varying, yet also continuous and periodic in time, which requires the model to have strong time-varying information extraction capabilities. In the spatial dimension, the deformation of supertall buildings is closely correlated with the complexity, spatial characteristics, and change trends of environmental factors. Changes of environmental factors in different periods and different fields affect the deformation differently. How to deeply mine and extract the feature attributes of environmental factors has long been one of the research difficulties.

Recently, much progress has been made in predicting building deformation with neural network techniques. Regarding shallow neural networks [6, 7] such as backpropagation (BP), the extreme learning machine (ELM), and the support vector machine (SVM): Kang, F. et al. used the ELM to predict dam deformation, evaluating the results with prediction accuracy and prediction stability as indicators, and obtained good experimental results [8]; Xin, J. et al. used the Kalman-ARIMA-GARCH model to predict bridge deformation and achieved good prediction results [9]; Wang, X. et al. improved the BP neural network with a multiple population genetic algorithm, optimized the network's weight and parameter selection mechanism, and applied it to dam deformation prediction [10]; Cao, Y.B. et al. combined a genetic algorithm with an artificial neural network to predict landslide deformation in a reservoir area of the Three Gorges [11]; Zhang, H. et al. proposed a multiscale deformation prediction model integrating the genetic algorithm support vector machine (GA-SVM) with empirical mode decomposition (EMD) and used it to predict dam deformation; comparison with BP neural network predictions demonstrated its high accuracy [12].

Deep learning algorithms [13-15] adopt layer-wise network training and batch-graded sample training to overcome problems such as overfitting and local minima in shallow networks, which also improves training speed. The deep belief network (DBN) model [16-18] is one of the classic deep learning models; it offers fast network training, easy parameter selection, highly efficient feature extraction, and convenient regression analysis. The conditional deep belief network (CDBN) model [19-23] is a variant of the DBN model; it inherits many excellent features of the DBN and uses the normal distribution to resample the deformation data of the supertall building. However, the CDBN model searches for the optimal solution with the gradient descent method when determining weights, which causes an obvious difference between predicted and actual values in the deformation prediction; the prediction oscillation is pronounced and strongly affects prediction accuracy and stability. Therefore, the Levenberg-Marquardt (LM) algorithm [24, 25] was adopted to replace gradient descent in the weighting mechanism of the CDBN model. This improves the stability of information extraction and the generalization ability for nonlinear, time-varying problems, accelerates model convergence, and improves prediction accuracy and stability. Considering the complexity of supertall building deformation and its influencing factors, the deformation characteristics, and the strengths and weaknesses of the models, the LM-CDBN model was applied to the deformation prediction of the CITIC tower, and its prediction accuracy and stability were verified through model comparison experiments and predictive analysis.

2. Brief of CDBN Model

The CDBN model is a variant of the traditional DBN model. It inherits many excellent features of the DBN model and resamples the deformation data of the supertall building through the normal distribution. The deformation data of a supertall building have pronounced four-dimensional characteristics, of which the temporal and spatial ones are particularly prominent. When extracting deformation information from a supertall building, the CDBN model uses an autoregressive mechanism to dynamically mine, extract, and feed back the temporal and spatial characteristics of the deformation data. The model can excavate the dynamic characteristics of the deformation trend from historical data; these features complement and guide the in-depth mining of the current deformation trend. In addition, the autoregressive (AR) adjustment capability also facilitates the analysis of the deformation trend, deformation extrapolation, and trend fitting of supertall buildings.

As shown in Figure 1, suppose there are $m$ visible neurons $v = (v_1, v_2, \dots, v_m)$ obeying Gaussian distributions and $n$ hidden neurons $h = (h_1, h_2, \dots, h_n)$ obeying Bernoulli distributions; $A$ is the weight matrix between the historical data and the current visible data, and $B$ is the weight matrix between the historical data and the hidden nodes. The system energy function $E(v, h)$ of the CDBN network structure is shown in

$$E(v, h) = \sum_{i=1}^{m} \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{v_i}{\sigma_i} h_j W_{i,j} - \sum_{j=1}^{n} c_j h_j \quad (1)$$

where $v_i$ is the $i$-th visible node; $h_j$ is the $j$-th hidden node; $W_{i,j}$ is the weight between $v_i$ and $h_j$; $c_j$ is the threshold of $h_j$; $b_i$ is the threshold of $v_i$; $\sigma_i$ is the noise of $v_i$. To facilitate model calculation and expansion, $\sigma_i^2$ is normally set to 1.

From the energy function (1), the conditional probability distributions of the model are derived as shown in

$$P(h_j = 1 \mid v) = \mathrm{sigmoid}\Big(\sum_{i=1}^{m} \frac{v_i}{\sigma_i} W_{i,j} + c_j\Big), \qquad P(v_i \mid h) = N\Big(b_i + \sigma_i \sum_{j=1}^{n} W_{i,j} h_j,\ \sigma_i^2\Big) \quad (2)$$

where $P(v_i \mid h)$ is the conditional probability distribution of $v_i$; $P(h_j \mid v)$ is the conditional probability distribution of $h_j$; $\mathrm{sigmoid}(\cdot)$ denotes the activation function; $N(\cdot)$ denotes a Gaussian probability distribution.
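As a concrete illustration, the energy function (1) and the conditional distributions (2) can be sketched in NumPy. The fixed $\sigma_i = 1$ setting follows the text above; the function names and array shapes are our own choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h, W, b, c, sigma):
    """E(v, h) of Eq. (1): quadratic visible term minus the
    visible-hidden interaction minus the hidden bias term."""
    quad = np.sum((v - b) ** 2 / (2.0 * sigma ** 2))
    interact = np.sum((v / sigma)[:, None] * W * h[None, :])
    return quad - interact - np.dot(c, h)

def p_h_given_v(v, W, c, sigma):
    """P(h_j = 1 | v): sigmoid activation of Eq. (2)."""
    return sigmoid((v / sigma) @ W + c)

def sample_v_given_h(h, W, b, sigma, rng):
    """P(v_i | h): Gaussian with mean b_i + sigma_i * sum_j W_ij h_j."""
    return rng.normal(b + sigma * (W @ h), sigma)
```

Alternating `p_h_given_v` and `sample_v_given_h` realizes one step of Gibbs sampling between the two layers.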

The CDBN network model uses the gradient descent algorithm to search for the optimal solution. Gradient descent is the most commonly used optimization algorithm for neural network training. For a function $f(x)$, $\partial f / \partial x$ is its gradient. The iteration equation is shown in

$$x_{k+1} = x_k + \rho_k \bar{s}_k, \qquad \bar{s}_k = -\frac{\partial f}{\partial x}\bigg|_{x = x_k} \quad (3)$$

where $x_{k+1}$ approaches the minimizer of $f(x)$; $\bar{s}_k$ represents the descending gradient direction; and $\rho_k$ represents the search step along the gradient direction, i.e., the learning rate in deep learning.
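A minimal sketch of the iteration (3) with a fixed step $\rho$, applied to a toy one-dimensional function (the function and step size here are illustrative choices, not taken from the paper):

```python
def gradient_descent(grad, x0, rho=0.1, iters=100):
    """Iterate x_{k+1} = x_k - rho * grad(x_k), i.e. Eq. (3)
    with the descending direction s_k = -grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - rho * grad(x)
    return x

# f(x) = (x - 3)^2 has gradient 2(x - 3) and its minimum at x = 3
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

With a fixed step the error shrinks geometrically here; in practice $\rho_k$ is tuned or decayed per iteration.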

Finally, training and learning are performed with the contrastive divergence sampling method [26-28] to update the model parameters, as shown in

$$\begin{aligned}
\Delta W_{i,j} &= \varepsilon\big(\langle v_i h_j \rangle_0 - \langle v_i h_j \rangle_\infty\big) \\
\Delta b_i &= \varepsilon\big(\langle v_i \rangle_0 - \langle v_i \rangle_\infty\big) \\
\Delta c_j &= \varepsilon\big(\langle h_j \rangle_0 - \langle h_j \rangle_\infty\big) \\
\Delta A_{k,i}^{t-q} &= \varepsilon\, v_k^{t-q}\big(\langle v_i \rangle_0 - \langle v_i \rangle_\infty\big) \\
\Delta B_{k,j}^{t-q} &= \varepsilon\, v_k^{t-q}\big(\langle h_j \rangle_0 - \langle h_j \rangle_\infty\big)
\end{aligned} \quad (4)$$

where $\Delta W_{i,j}$, $\Delta b_i$, $\Delta c_j$ are the update values of $W_{i,j}$, $b_i$, $c_j$; $\Delta A_{k,i}^{t-q}$ is the update of the weight between $v_k$ at time $t-q$ and $v_i$ at the current time $t$; $\Delta B_{k,j}^{t-q}$ is the update of the weight between $v_k$ at time $t-q$ and $h_j$ at the current time $t$; $v_i^{t-q}$ is the state value of $v_i$ at time $t-q$; $h_j^{t-q}$ is the state value of $h_j$ at time $t-q$; $\langle \cdot \rangle_0$ is the initial expectation over the original data; $\langle \cdot \rangle_\infty$ is the stable expectation calculated by the model; $\varepsilon$ is the learning rate.
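The update rule (4) can be sketched with one-step contrastive divergence (CD-1), in which the intractable stable expectation $\langle \cdot \rangle_\infty$ is approximated after a single Gibbs step. The function below is a simplified sketch for a single data vector with $\sigma_i = 1$ and without the autoregressive terms $A$ and $B$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, eps, rng):
    """One CD-1 parameter update in the spirit of Eq. (4):
    eps * (<.>_0 - <.>_model), with the model expectation taken
    after one Gibbs reconstruction step rather than at equilibrium."""
    h0 = sigmoid(v0 @ W + c)                       # data-driven hidden probs
    v1 = b + h0 @ W.T + rng.normal(size=v0.shape)  # Gaussian reconstruction
    h1 = sigmoid(v1 @ W + c)                       # hidden probs on reconstruction
    dW = eps * (np.outer(v0, h0) - np.outer(v1, h1))
    db = eps * (v0 - v1)
    dc = eps * (h0 - h1)
    return dW, db, dc
```

In a full CRBM the same positive-minus-negative statistics, scaled by the lagged visible states $v_k^{t-q}$, give the updates of $A$ and $B$.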

3. Deformation Prediction Approach

The deformation of supertall buildings is complex, and its spatiotemporal linkage is obvious; its prediction therefore demands high accuracy and stability. The CDBN model has a strong ability to extract deformation trends, but because it uses the gradient descent algorithm to find the optimal solution when determining weights, the predicted output differs markedly from the actual deformation and the prediction oscillates noticeably. To solve this problem, we used the L-M algorithm in the weighting step, applying the Gauss-Newton update to the weights and thresholds of the model to accelerate convergence. The combination of powerful deformation information extraction and stable nonlinear optimization helps improve the prediction accuracy and stability for supertall buildings.

3.1. LM-CDBN Model Weighting Principle. Assume $W_k$ is the vector composed of all the weights and thresholds after $k$ iterations of the LM-CDBN model; then the vector of weights and thresholds after iteration $k+1$ is $W_{k+1}$, shown as

$$W_{k+1} = W_k + \Delta W \quad (5)$$

The mean squared error (MSE) of the model training is taken as the minimization criterion:

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2 \quad (6)$$

where $N$ represents the number of samples; $y_i$ and $\hat{y}_i$ represent the $i$-th actual value and model prediction of the prediction label, respectively. The extremum is sought according to the principle of least squares and corrected by the Gauss-Newton algorithm; then $\Delta W$ is

$$\Delta W = -\big[J^{T} J + \mu I\big]^{-1} J^{T} e \quad (7)$$

where $\mu$ ($\mu > 0$) is a damping coefficient; $e$ is the network error vector; $I$ is the identity matrix; $J$ is the Jacobian matrix, shown as

$$J = \begin{bmatrix}
\dfrac{\partial e_1}{\partial w_1} & \cdots & \dfrac{\partial e_1}{\partial w_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial e_N}{\partial w_1} & \cdots & \dfrac{\partial e_N}{\partial w_n}
\end{bmatrix} \quad (8)$$

In the initial stage of model training, the value of $\mu$ is large and the model seeks the minimum following the gradient descent method; each iteration makes $\mu$ decrease continuously, after which the Gauss-Newton algorithm is used to approach the target. Exploiting second-derivative information to seek the extremum improves the speed of model training and the nonlinear generalization ability.
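The LM increment (7) reduces to one linear solve per iteration. A minimal sketch, demonstrated on an illustrative linear residual rather than the paper's network:

```python
import numpy as np

def lm_step(J, e, mu):
    """Levenberg-Marquardt increment of Eq. (7):
    delta_w = -(J^T J + mu I)^{-1} J^T e."""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

# Illustrative residual e(w) = J w - y: for small mu a single step
# from w = 0 lands near the least-squares solution.
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w_true = np.array([1.0, 2.0])
y = J @ w_true
w1 = np.zeros(2) + lm_step(J, J @ np.zeros(2) - y, mu=1e-9)
```

Large $\mu$ makes the step behave like scaled gradient descent; small $\mu$ recovers the Gauss-Newton step, matching the schedule described above.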

3.2. Flow of Algorithm. The network training and learning process of the LM-CDBN model is constructed as follows; the flow of the algorithm is shown in Figure 2.

Step 1 (data preparation phase). The original data is preprocessed (denoised, filtered, normalized, and batched) and the topology of the network is determined.

Step 2. Enter the first batch of data and prepare for network training.

Step 3. Use (9) to update the status of the hidden node of the first layer network.

$$S_j = \mathrm{sigmoid}\Big(\sum_{i=1}^{m} W_{i,j} S_i + N(0,1)\Big) \quad (9)$$

where $N(0,1)$ represents Gaussian noise; $W_{i,j}$ is the weight matrix connecting the visible layer and the hidden layer; $S_i$ is the state value of visible node $i$.

Step 4. Use (10) to update the status value of the visible node [S.sub.i].

$$S_i = \mathrm{sigmoid}\Big(\sum_{j=1}^{n} W_{i,j} S_j + N(0,1)\Big) \quad (10)$$

Step 5. According to the visible layer node state value [S.sub.i] obtained in Step 4, use (11) to update the state of the hidden layer node [S.sub.j] again.

$$S_j = \mathrm{sigmoid}\Big(\sum_{i=1}^{m} W_{i,j} S_i + N(0,1)\Big) \quad (11)$$

Step 6. Dynamically update the offsets of the visible and hidden node thresholds.

$$b_i^{*} = b_i + \sum_{q} \sum_{k=1}^{m} \beta_{k,i}^{t-q} v_k^{t-q}, \qquad c_j^{*} = c_j + \sum_{q} \sum_{k=1}^{m} \alpha_{k,j}^{t-q} v_k^{t-q} \quad (12)$$

where $c_j^{*}$ represents the dynamic offset of hidden node $j$; $b_i^{*}$ represents the dynamic offset of visible node $i$; $v_k^{t-q}$ is the state value of visible node $k$ at time $t-q$; $\alpha_{k,j}^{t-q}$ is the weight of the directed connection between $v_k$ at time $t-q$ and $h_j$ at the current time $t$; $\beta_{k,i}^{t-q}$ is the weight of the directed connection between $v_k$ at time $t-q$ and $v_i$ at the current time $t$.
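Step 6 (Eq. (12)) adds the autoregressive contribution of the recent visible history to the static thresholds. A sketch in the notation above, where list indices stand in for the lag $t-q$; the function name and array shapes are our own assumptions:

```python
import numpy as np

def dynamic_offsets(b, c, alpha_hist, beta_hist, v_hist):
    """Dynamic biases of Eq. (12): the static thresholds b_i, c_j
    plus the weighted contribution of the recent visible history.
    alpha_hist[q]: (m, n) weights from v^{t-q} to the hidden nodes,
    beta_hist[q]:  (m, m) weights from v^{t-q} to the visible nodes,
    v_hist[q]:     the visible state at lag q + 1."""
    b_star, c_star = b.copy(), c.copy()
    for alpha_q, beta_q, v_q in zip(alpha_hist, beta_hist, v_hist):
        c_star += alpha_q.T @ v_q  # sum_k alpha_{k,j} * v_k^{t-q}
        b_star += beta_q.T @ v_q   # sum_k beta_{k,i} * v_k^{t-q}
    return b_star, c_star
```

The resulting $b_i^{*}$ and $c_j^{*}$ replace the static biases in the Gibbs updates of Steps 3-5, which is how the history conditions the current frame.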

Step 7. Use the state $S_j$ of the hidden layer nodes of the first-layer network as the initial input of the second-layer network. Repeat Steps 3-6 to complete the pretraining of the second-layer network, and so on until the network is pretrained layer by layer.

Step 8. Establish a matrix vector of the network weights and thresholds. Use (5) and (7) to update the network weights and thresholds.

Step 9. Enter the second batch of data and go to Step 3 to complete the next round of training and so on until all data processing is completed.

Step 10. Use the softmax function to output the predicted value, denormalize the data, and evaluate the network prediction results.

3.3. Evaluation Mechanism. To objectively evaluate the prediction results, the prediction model must be assessed with full consideration of both the prediction error and the degree of fit of the predicted values. The evaluation mechanism consists of three aspects: training error evaluation, fitting degree evaluation, and prediction accuracy.

The root mean square error (RMSE), the mean absolute error (MAE), and the mean relative error (MRE) are taken as evaluation indices, as shown in (13). The smaller these three indicators, the stronger the model's information extraction ability and the higher the prediction accuracy.

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2}, \qquad \mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \lvert \hat{y}_i - y_i \rvert, \qquad \mathrm{MRE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\lvert \hat{y}_i - y_i \rvert}{y_i} \quad (13)$$

where $y_i$ and $\hat{y}_i$ represent the $i$-th actual value and model prediction of the prediction label, respectively; $N$ represents the number of input samples.

$R$ represents the degree of fit between the actual observed values and the predicted output values. A large value of $R$ means that the predicted values agree well with the actual observations; otherwise, the correlation between the two is poor. The equation for $R$ is shown in

$$R = \left(1 - \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{\sum y_i^2}}\right) \times 100\% \quad (14)$$
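The four indices of (13) and (14) are straightforward to compute; a sketch:

```python
import numpy as np

def evaluate(y, y_hat):
    """Return RMSE, MAE, MRE of Eq. (13) and the fit index R (%) of Eq. (14)."""
    err = y_hat - y
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mre = np.mean(np.abs(err) / np.abs(y))  # assumes nonzero observations
    r = (1.0 - np.sqrt(np.sum(err ** 2) / np.sum(y ** 2))) * 100.0
    return rmse, mae, mre, r
```

A perfect prediction gives zero error on all three indices and $R = 100\%$.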

The prediction results of the LM-CDBN model are compared with those of the CDBN model, the extreme learning machine (ELM), and the UKF-SVR model to evaluate the training error and predictive performance of the LM-CDBN model.

4. Case Study

4.1. Description of the CITIC Tower. The supertall CITIC tower is located in the core area of the CBD, Chaoyang District, Beijing, China. Its external shape follows the overall form of the "zun", an ancient Chinese wine vessel (Figure 3). The CITIC tower has a total construction area of 350,000 square meters and a total height of 528 meters, with 108 floors above ground, 7 basement floors, and 5 underground floors in the tower area. The tower rises from a square base; from the base upward, its plan dimension gradually tightens inward, and from the narrowest part at the waistline to the top, the plan dimension gradually enlarges. The CITIC tower adopts a core-tube megaframe outrigger conversion truss structure, featuring great height, structural heterogeneity, and large changes in the curvature of the construction curve. The construction uses BIM technology to preassemble the structure and reduce rework and errors.

4.2. Monitoring Sensor Layout. With the continuous development of global navigation satellite system (GNSS) technology [29, 30], deformation data such as settlement, vertical compression, and horizontal displacement can be obtained by performing dynamic deformation observation of supertall buildings with real-time kinematic (RTK) technology. In this study, two GNSS receivers were placed as reference stations outside the walls on the north and east sides of the building, and eight GNSS receivers were installed as monitoring stations on the core tube and the frame. The GNSS antennas are fixed on the core tube and the four corners of the frame by special brackets. Each GNSS host is connected by cable and transmits the monitoring data to the data center through a data transfer unit (DTU). The specific deployment of reference stations and monitoring stations is shown in Figures 4 and 5.

The shape acceleration array (SAA) system [31, 32] is a high-precision sensor based on MEMS accelerometers and consists of several rigid test sections connected by flexible joints. Each test section is 200~500 mm long and contains a triaxial accelerometer and a thermometer. After every 8 test sections there is a special section equipped with a microprocessor and a digital temperature sensor. For an SAA laid in the vertical direction, there is a fixed section at the front end for connecting a wired or wireless signal transmission device, and a fixing device at the end for anchoring the entire SAA, as shown in Figure 6.

Assuming $L$ is the length of a test section and $\theta$ is the angle between two adjacent test sections calculated from the accelerometer readings, the deformation value $\Delta t$ in the direction normal to the section can be obtained; accumulating $\Delta t$ over all sections gives the total deformation $\Delta z$, as shown in

$$\Delta t = \theta \times L \quad (15)$$
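Applying (15) section by section and accumulating gives the deformation profile along the array; a sketch using the small-angle relation of the text (variable names are our own):

```python
def saa_profile(angles, lengths):
    """Per-section deflection delta_t = theta * L, Eq. (15),
    accumulated along the array; the last entry is the total
    deformation of the SAA string."""
    total = 0.0
    profile = []
    for theta, L in zip(angles, lengths):
        total += theta * L  # small-angle approximation, radians * mm
        profile.append(total)
    return profile
```

For example, two 300 mm sections tilted by 0.01 and 0.02 rad yield cumulative deflections of 3 mm and 9 mm.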

According to the CITIC tower's architectural characteristics, two SAAs were placed in PVC sleeves and embedded along the vertical axis on the outside of the core tube. The ends are fixed on the structural floor, and the front end is connected to a wireless serial modem (WSM) for long-distance communication.

4.3. Data Processing. As shown in Table 1, starting from October 10, 2017, displacement monitoring data of 70 consecutive periods acquired at a sampling interval of one hour are taken as samples; the first 55 samples are used as a priori samples for network pretraining, and the last 15 samples for deformation analysis and prediction.

During the construction phase, the core tube is subject to environmental crosswind loads, temperature differences between the inside and outside of the shell structure, and changes in light intensity, and is thus prone to dynamic deformation. The training data set consists of core tube displacement data, temperature, wind speed, light intensity, and time series. During model training, influence factors such as temperature, wind speed, and light intensity are taken as the feature values of the network input layer, and the displacement data of the core tube are used as the output feature vector.

In sample preprocessing, gross errors and noise are first eliminated.

Because the deformation fluctuates over a large range and the input factors of each group differ greatly in magnitude, a logarithmic interpolation algorithm (as shown in (16)) was used to normalize the displacement deformation value $\hat{z}$.

$$\hat{z} = \frac{\ln z - \ln z_{\min}}{\ln z_{\max} - \ln z_{\min}} \quad (16)$$

where $z_{\max}$ and $z_{\min}$ represent the maximum and minimum values of the predicted output deformation; $\hat{z}$ and $z$ represent the normalized and original deformation information, respectively.
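The paper's expression for (16) is not fully reproducible; reading "logarithmic interpolation" as linear interpolation in log space (an assumption on our part, valid for positive deformation values), the mapping and its inverse used at the denormalization of Step 10 can be sketched as:

```python
import math

def normalize(z, z_min, z_max):
    """Map a positive deformation value into [0, 1] by linear
    interpolation in log space -- one reading of Eq. (16)."""
    return (math.log(z) - math.log(z_min)) / (math.log(z_max) - math.log(z_min))

def denormalize(z_hat, z_min, z_max):
    """Inverse mapping, applied before evaluating predictions in mm."""
    return math.exp(math.log(z_min) + z_hat * (math.log(z_max) - math.log(z_min)))
```

The log-space mapping compresses the large magnitude differences between input factors that motivated the normalization in the first place.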

4.4. Parameter Determination. In determining the network topology, the precise determination of the network depth, the number of hidden layer nodes, and the various training parameters is the key to accurate prediction.

In model building, the network depth means the number of network layers. During layer-by-layer training, a reconstruction error (RE) is generated, which is an important indicator of network stability. For a given input layer data pattern, the network reconstruction error is calculated to effectively determine the network depth. As shown in Figure 7, the network depth was gradually increased during the experiment.

As shown in Figure 7(a), when the network depth is 1, the reconstruction error oscillates violently while falling rapidly; the error is mainly concentrated in 15~28 mm, reflecting insufficient network depth. In Figure 7(b), the reconstruction error changes dramatically in the early period and slows in the later period; the error is mainly concentrated in 0.5~1.7 mm, and the overall training effect is the best. In Figures 7(c) and 7(d), the reconstruction error is concentrated in 0.8~3.4 mm and 0.7~3.9 mm, respectively; the network shows irregular fluctuations and the reconstruction error accumulates. Therefore, the network depth is set to 2 layers.

According to (17), the number of hidden layer nodes is determined by a comparison test.

$$l = \sqrt{\beta + y} + a \quad (17)$$

where $a$ is an empirical constant in the range $[0, 10]$; $\beta$ is the number of input layer nodes; $y$ is the number of output layer nodes; $l$ is the number of hidden layer nodes.
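With 4 input nodes and 1 output node, sweeping the empirical constant $a$ over the integers in $[0, 10]$ and rounding the square root upward (our reading) reproduces the candidate range $l \in [3, 13]$ quoted below:

```python
import math

# Eq. (17): l = sqrt(beta + y) + a, with beta = 4 inputs, y = 1 output,
# and the empirical constant a swept over the integers 0..10.
beta, y = 4, 1
candidates = [math.ceil(math.sqrt(beta + y)) + a for a in range(0, 11)]
```

Each candidate $l$ is then trained and scored with the indices of Section 3.3 to pick the best node count.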

In this deformation prediction for supertall buildings, the number of input layer nodes is $\beta = 4$, the number of output layer nodes is $y = 1$, and the range of hidden layer nodes is $l \in [3, 13]$. As shown in Figure 8 and Table 2, RMSE, MAE, MRE, and $R$/% are the evaluation criteria for network training, and the optimal number of hidden layer nodes is obtained through statistical analysis.

When the number of hidden layer nodes is 7, RMSE, MAE, and MRE reach their minimum values and the fit index $R$ reaches its maximum. At this point, the model has the best deformation prediction ability and nonlinear generalization ability.

4.5. Analysis of Forecast Results. As shown in Table 3, the LM-CDBN, CDBN, ELM, and UKF-SVR models were used to predict the last 15 periods of monitoring data, which were compared with the measured deformation value $z$; the RE and ARE of each group of results were calculated.

Compared with the CDBN model, the ELM, and the improved SVR, the LM-CDBN model has higher prediction accuracy and stability, and its extrapolation ability is better than that of the other forecast models. The average relative error of the 15 forecast results is 3.12%. Meanwhile, the relative error of the LM-CDBN predictions is even and stable, and its prediction results fluctuate less than those of the UKF-SVR. In addition, compared with the CDBN model, the optimization with the L-M algorithm enhances the generalization ability of the CDBN model and improves the stability and accuracy of the prediction.

Similarly, as shown in Table 4, the numerical analysis of the evaluation indicators shows that the prediction error of the LM-CDBN model is smaller and its prediction accuracy is much higher than that of the shallow neural networks. Compared with the other models, the deep network has stronger feature extraction and nonlinear regression analysis capabilities.

As shown in Figure 9(a), although gross errors and noise were excluded from the experimental data, the collected data still fluctuated considerably due to external influences, which interfered with the actual prediction work. In addition, under the influence of external factors, significant displacement changes occurred during the continuous monitoring of the overall structure of the CITIC tower, and the migration tendency gradually increased. Compared with the other models, the LM-CDBN model fits the displacement change trend and extracts the deformation better. As shown in Figure 9(b), the LM-CDBN model has better deformation extrapolation capability and conforms more closely to the actual law of displacement change.

5. Results

A new deformation prediction approach for supertall buildings was proposed in this paper, in which the LM algorithm was used to optimize the weighting method of the CDBN model. The model was then used to predict the deformation of the supertall CITIC tower, and the prediction results were evaluated with several different methods. In terms of error, the MAE of the LM-CDBN model is 0.0023 mm, while the MAE of the CDBN model, the ELM model, and the UKF-SVR model is 0.0141 mm, 0.0262 mm, and 0.0155 mm, respectively. The RMSE of the LM-CDBN model is 0.0031 mm, while the RMSE of the CDBN model, the ELM model, and the UKF-SVR model is 0.0212 mm, 0.0385 mm, and 0.0223 mm, respectively. In terms of fitness, the fitting performance of the LM-CDBN model increased by 64%, 80%, and 64% compared with the CDBN model, the ELM model, and the UKF-SVR model, respectively. Through the comparison experiments and data analysis, the LM-CDBN model has higher prediction accuracy than the three other models, and the variation law of its prediction data is more consistent with the actual variation law. Hence, we conclude that the LM-CDBN model is suitable for the deformation prediction of supertall buildings and has good robustness and deformation prediction ability.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

There is no conflict of interest regarding the publication of this paper.


Acknowledgments

This research was supported mainly by the National Key Research and Development Program of China [no. 2017YFB0503700], the Fundamental Research Funds for Beijing University of Civil Engineering and Architecture [no. FZ02], and the Beijing University of Civil Engineering and Architecture Postgraduate Innovation Project [nos. PG2018054 and PG2018062].


References

[1] J.-Z. Su, Y. Xia, L. Chen et al., "Long-term structural performance monitoring system for the Shanghai Tower," Journal of Civil Structural Health Monitoring, vol. 3, no. 3, pp. 49-61, 2013.

[2] W. H. Chen, S. Liang, Z. R. Lu et al., "Monitoring dynamic characteristics for supertall structure during typhoon periods," Journal of Vibration and Shock, vol. 6, no. 29, pp. 15-72, 2010.

[3] Y. Q. Ni, B. Li, K. H. Lam et al., "In-construction vibration monitoring of a supertall structure using a long-range wireless sensing system," Smart Structures and Systems, vol. 7, no. 2, pp. 83-102, 2011.

[4] T.-H. Yi, H.-N. Li, and M. Gu, "A new method for optimal selection of sensor location on a high-rise building using simplified finite element model," Structural Engineering and Mechanics, vol. 37, no. 6, pp. 671-684, 2011.

[5] T.-H. Yi, H.-N. Li, and X.-D. Zhang, "A modified monkey algorithm for optimal sensor placement in structural health monitoring," Smart Materials and Structures, vol. 21, no. 10, pp. 52-53, 2012.

[6] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," The American Association for the Advancement of Science: Science, vol. 313, no. 5786, pp. 504-507, 2006.

[7] M. Bianchini and F. Scarselli, "On the complexity of neural network classifiers: A comparison between shallow and deep architectures," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 8, pp. 1553-1565, 2014.

[8] F. Kang, J. Liu, J. Li, and S. Li, "Concrete dam deformation prediction model for health monitoring based on extreme learning machine," Structural Control and Health Monitoring, vol. 24, no. 10, Article ID e.1997, 2017.

[9] J. Xin, J. Zhou, S. Yang, X. Li, and Y. Wang, "Bridge structure deformation prediction based on gnss data using Kalman-ARIMA-GARCH model," Sensors, vol. 18, no. 1, p. 298, 2018.

[10] X. Wang, K. Yang, and C. Shen, "Study on MPGA-BP of gravity dam deformation prediction," Mathematical Problems in Engineering, vol. 2017, Article ID 2586107, 13 pages, 2017.

[11] Y.-B. Cao, E.-C. Yan, and L.-F. Xie, "Study of landslide deformation prediction based on gray model-evolutionary neural network model considering function of environmental variables," Yantu Lixue/Rock and Soil Mechanics, vol. 33, no. 3, pp. 848-852, 2012.

[12] H. Zhang and S. Xu, "Multi-scale dam deformation prediction based on empirical mode decomposition and genetic algorithm for support vector machines (GA-SVM)," Yanshilixue Yu Gongcheng Xuebao/Chinese Journal of Rock Mechanics and Engineering, vol. 30, no. 2, pp. 3681-3688, 2011.

[13] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.

[14] J. Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks, vol. 61, pp. 85-117, 2015.

[15] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. D. Freitas, "Predicting parameters in deep learning," in Advances in Neural Information Processing Systems, pp. 2148-2156, Lake Tahoe, Calif, USA, 2013.

[16] M. M. Lau and K. H. Lim, "Investigation of activation functions in deep belief network," in Proceedings of the 2nd International Conference on Control and Robotics Engineering (ICCRE), pp. 201-206, Bangkok, Thailand, 2017.

[17] Y. Q. Neo, T. T. Teo, W. L. Woo, T. Logenthiran, and A. Sharma, "Forecasting of photovoltaic power using deep belief network," in Proceedings of the Region 10 Conference TENCON '17, pp. 1189-1194, Penang, Malaysia, 2017.

[18] G. E. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.

[19] G. W. Taylor, G. E. Hinton, and S. T. Roweis, "Modeling human motion using binary latent variables," in Proceedings of the Advances in Neural Information Processing Systems, pp. 1345-1352, Vancouver, Canada, 2007.

[20] V. Mnih, H. Larochelle, and E. G. Hinton, "Conditional restricted Boltzmann machines for structured output prediction," in Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pp. 514-522, Barcelona, Spain, 2011.

[21] A. Spiliopoulou, Investigation of Deep CRBM Networks in modeling Sequential Data, Master Thesis, University of Edinburgh, Edinburgh, Scotland, 2008.

[22] S. Chen, H. Bou Ammar, K. Tuyls, and G. Weiss, "Conditional restricted Boltzmann machines for negotiations in highly competitive and complex domains," in Proceedings of the 23rd International Joint Conference on Artificial Intelligence, IJCAI '13, pp. 69-75, China, 2013.

[23] H. Sheng, Z. Q. Gao, W. Wei et al., "Improved deep belief network model and its application in named entity recognition of Chinese electronic medical records," in Proceedings of the 3rd IEEE International Conference on Big Data Analysis, pp. 356-360, Shanghai, China, 2018.

[24] J. J. More, "The Levenberg-Marquardt algorithm: implementation and theory," in Numerical Analysis, pp. 105-116, Springer, Berlin, Germany, 1978.

[25] J. H. Li, W. X. Zheng, J. P. Gu et al., "Parameter estimation algorithms for Hammerstein output error systems using Levenberg-Marquardt optimization method with varying interval measurements," Journal of The Franklin Institute, vol. 354, pp. 316-331, 2016.

[26] X. L. Zhou, The trend prediction of red tide biomass in Zhejiang coastal ship-data based on deep learning, Master Thesis, Zhejiang University, Zhejiang, China, 2016.

[27] X. Zhou, F. Zhang, Z. Du, M. Cao, and R. Liu, "A study on time series prediction model based on CRBM algorithm," Journal of Zhejiang University, Science Edition, vol. 43, no. 4, pp. 442-451, 2016.

[28] G. W. Taylor, G. E. Hinton, and S. T. Roweis, "Modeling human motion using binary latent variables," in Proceedings of the International Conference on Neural Information Processing Systems, pp. 1345-1352, Vancouver, Canada, 2007.

[29] B. Hofmann-Wellenhof, H. Lichtenegger, and E. Wasle, GNSS - Global Navigation Satellite Systems: GPS, GLONASS, Galileo, and More, Springer, Vienna, Austria, 2008.

[30] J. J. H. Wang, "Antennas for global navigation satellite system (GNSS)," Proceedings of the IEEE, vol. 100, no. 7, pp. 2349-2355, 2012.

[31] R. Lipscombe, C. Carter, O. Perkins et al., "The use of shape accel arrays (SAA) for measuring retaining wall deflection," in Crossrail Project: Infrastructure Design and Construction, pp. 239-252, ICE Publishing, London, UK, 2015.

[32] V. Bennett, M. Zeghal, T. Abdoun, and L. Danisch, "Wireless shape-acceleration array system for local identification of soil and soil structure systems," Transportation Research Record, no. 2004, pp. 60-66, 2007.

Dongwei Qiu, (1,2) Tong Wang, (1) Qing Ye, (3) He Huang, (1) Laiyang Wang, (1) Mingxu Duan, (1) and Dean Luo (1)

(1) School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 100044, China

(2) Key Laboratory for Urban Geomatics of National Administration of Surveying, Mapping and Geoinformation, Beijing 100044, China

(3) School of Humanity and Law, Beijing University of Civil Engineering and Architecture, Beijing 100044, China

Correspondence should be addressed to Dongwei Qiu and He Huang. Received 22 January 2019; Accepted 8 April 2019; Published 24 April 2019

Guest Editor: Sang-Hoon Hong

Caption: Figure 1: Structure diagram of a single layer of the CDBN model, consisting of historical data, target data, and hidden nodes.

Caption: Figure 2: Flowchart of LM-CDBN algorithm.

Caption: Figure 3: Concept and structure of the CITIC tower. (a) CITIC tower concept design. (b) CITIC tower schematic.

Caption: Figure 4: GNSS receiver placement. (a) Location figure of reference stations and observation stations. (b) Receiver antenna figure.

Caption: Figure 5: GNSS deformation monitoring system diagram, consisting of a GNSS receiver, DTU, control center, and reference station.

Caption: Figure 6: Shape acceleration array system, consisting of standard test section, flexible joint, and special section.

Caption: Figure 7: Reconstruction error of hidden layer. (a) Number of hidden layer is 1. (b) Number of hidden layer is 2. (c) Number of hidden layer is 3. (d) Number of hidden layer is 4.

Caption: Figure 8: Prediction accuracy versus the number of hidden-layer nodes. (a) Red line: RMSE; blue line: MAE; yellow line: MRE. (b) Blue line: R.

Caption: Figure 9: Results of fitting and prediction. (a) Fitting result comparison. (b) Forecast result comparison.
Table 1: Monitoring data of CITIC office building.

Serial    Time                Z [mm]    Temperature   Wind speed   Light
number                                  [°C]          [m/s]        intensity [lx]

1         2017-10-11 0:00     28.281    14.1          5.8          2.732
2         2017-10-11 1:00     27.556    14.3          5.4          2.452
3         2017-10-11 2:00     28.428    14.2          5.5          2.543
4         2017-10-11 3:00     29.779    14.9          5.4          2.654
5         2017-10-11 4:00     30.250    14.8          4.9          2.687
6         2017-10-11 5:00     27.895    14.1          5.2          2.754
--        --                  --        --            --           --
69        2017-10-13 21:00    49.753    11.9          6.7          2.543
70        2017-10-13 22:00    49.341    11.6          7.5          2.654

Table 2: Prediction accuracy for different numbers of hidden-layer nodes.

Nodes    RMSE/mm   MAE/mm    MRE      R/%

2        0.0369    0.0167   0.0237   93.52
3        0.0400    0.0213   0.0316   92.98
4        0.0665    0.0342   0.0500   88.31
5        0.0395    0.0214   0.0317   93.07
6        0.0285    0.0117   0.0162   95.00
7        0.0113    0.0052   0.0075   98.01
8        0.0450    0.0236   0.0348   92.10
9        0.0225    0.0091   0.0125   96.05
10       0.0330    0.0135   0.0187   94.20
11       0.0162    0.0066   0.0091   97.16
12       0.0325    0.0159   0.0231   94.29

Table 3: Comparison of prediction results.

Date          Z [mm]    CDBN [mm]   RE [%]   LM-CDBN [mm]   RE [%]   ELM [mm]   RE [%]   UKF-SVR [mm]   RE [%]

10-13 08:00   36.3700    35.6869     1.88      36.2744       0.26    34.0753     6.31     36.9076        1.48
10-13 09:00   35.9685    34.7454     3.40      35.8914       0.21    38.9832     8.38     36.8897        2.56
10-13 10:00   34.1020    33.0588     3.06      34.0540       0.14    38.0791    11.66     33.5363        1.66
10-13 11:00   34.2450    32.6750     4.58      34.1812       0.19    34.6637     1.22     30.5105       10.91
10-13 12:00   39.0745    38.2927     2.00      38.8679       0.53    35.8950     8.14     39.1214        0.12
10-13 13:00   37.1475    35.9385     3.25      37.0167       0.35    39.5860     6.56     38.7017        4.18
10-13 14:00   37.7950    36.8189     2.58      37.6317       0.43    36.9923     2.12     34.8703        7.74
10-13 15:00   41.6840    39.6342     4.92      41.2798       0.97    40.3191     3.27     43.4704        4.29
10-13 16:00   39.1225    38.3562     1.96      38.8977       0.57    43.4842    11.15     41.1197        5.10
10-13 17:00   40.6580    39.2225     3.53      40.2930       0.90    44.4703     9.38     37.4763        7.83
10-13 18:00   47.0815    44.5459     5.39      45.8757       2.56    40.8088    13.32     48.4904        2.99
10-13 19:00   46.3970    44.8205     3.40      45.3267       2.31    44.6929     3.67     47.7502        2.92
10-13 20:00   47.3470    45.7168     3.44      46.0723       2.69    48.1102     1.61     47.8773        1.12
10-13 21:00   51.0815    47.3883     7.23      48.8165       4.43    45.8864    10.17     52.2852        2.36
10-13 22:00   49.5470    47.0194     5.10      47.7546       3.62    47.7797     3.57     49.2904        0.52
MRE [%]          --         --       3.72         --         1.34       --       6.70        --          3.72

Table 4: Comparison of prediction evaluation results.

Evaluation    CDBN    LM-CDBN    ELM     UKF-SVR

MRE          0.0296   0.0060    0.0487   0.0283
MAE [mm]     0.0141   0.0023    0.0262   0.0155
RMSE [mm]    0.0212   0.0031    0.0385   0.0223
R [%]         94.9     98.9      91.3     95.1
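The evaluation metrics in Tables 2-4 can be reproduced from paired observed/predicted series such as the columns of Table 3. The sketch below assumes the standard definitions of MAE, RMSE, and MRE, and takes R as the coefficient of determination expressed in percent (the article does not spell out its formulas, so these are assumptions); the sample values are the first two LM-CDBN rows of Table 3.

```python
import math

def metrics(observed, predicted):
    """Evaluation metrics for paired observed/predicted displacement series (mm).
    Standard definitions are assumed; the article does not state its formulas."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    mae = sum(abs(e) for e in errors) / n                     # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / n)          # root-mean-square error
    mre = sum(abs(e) / abs(o) for e, o in zip(errors, observed)) / n  # mean relative error
    # R taken here as the coefficient of determination, in percent (assumption).
    mean_o = sum(observed) / n
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    r = (1.0 - sum(e * e for e in errors) / ss_tot) * 100.0
    return mae, rmse, mre, r

# First two LM-CDBN rows of Table 3; per-row RE = |pred - obs| / obs * 100.
obs = [36.3700, 35.9685]
pred = [36.2744, 35.8914]
mae, rmse, mre, r = metrics(obs, pred)
re_first = 100.0 * abs(pred[0] - obs[0]) / obs[0]  # rounds to 0.26, matching Table 3
```

The per-row relative error reproduces Table 3's 0.26% and 0.21% figures for these epochs, which supports the assumed definitions.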
COPYRIGHT 2019 Hindawi Limited
Publication: Journal of Sensors