
Modeling of local scour depth around bridge piers using artificial neural network.

INTRODUCTION

Bridges are among the principal components of transportation systems, and their failure results in economic losses as well as threats to human life; they therefore need to be protected by continuous maintenance and timely repair. Bridges may fail for three main reasons: collision, excessive loading and scour. Bridge scour has been reported all over the world as the most common cause of bridge failure, particularly in countries subject to floods induced by annual typhoons.

Arneson et al. [1] suggested that the total scour at bridges can be divided into long-term degradation of the river bed, contraction scour at the bridge and local scour at the piers or abutments. Local scour can be defined as the removal of material from around piers, abutments, spurs and embankments; it is caused by the acceleration of the flow and the resulting vortices induced by obstructions to the flow.

The presence of bridge piers in a river alters the flow pattern in the vicinity of the piers and increases sediment movement, causing the phenomenon of scour. To avoid failure, the foundation depth of piers and abutments should be greater than the maximum scour depth expected during the bridge's lifetime, and existing bridges should be checked periodically to evaluate the maximum scour depth around their foundations and prevent collapse.

Over the past decades many researchers have studied local scour (d_s) at bridge piers, and a variety of predictive formulas have been developed based on laboratory and field observations, such as those of Laursen and Toch [2], Jain and Fischer [3], Melville [4], Rui et al. and many others, as shown in Table 1.

In recent years, the application of Artificial Neural Networks (ANNs) has been proposed for predicting the local scour depth as an alternative to the predictive formulas. Kambekar and Deo [14] used ANNs to predict the scour depth and scour width for groups of piles. Lee et al. [15] developed an ANN model with five inputs in normalized form to predict the local scour depth around bridge piers; measured data from thirteen states in the USA were used to test the performance of the model. Bateni et al. [16] showed that an ANN model consisting of a multi-layer perceptron trained with the back-propagation algorithm (MLP/BP) predicts scour depth better than a radial basis function network trained with the orthogonal least-squares algorithm (RBF/OLS) and an adaptive neuro-fuzzy inference system (ANFIS). Kaya [17] investigated different input variables with various ANN models; the sensitivity analysis indicated that pier scour depth can be estimated using four variables: pier shape, pier skew, flow depth and flow velocity.

Where, a = Pier width, D = Pier diameter, d_s = Maximum local scour depth, d_50 = Mean sediment size, Fr = Froude number, Fr_c = Critical Froude number, F_d50 = Densimetric Froude number, g = Gravitational acceleration, K_1 = Correction factor for pier nose shape, K_2 = Correction factor for the angle of attack of the flow, K_3 = Correction factor for bed conditions, K_4 = Correction factor for armoring by bed material size, K_d = Sediment size factor, K_G = Channel geometry factor, K_h = Shallowness factor, K_I = Flow intensity factor, K_s = Pier shape factor, K_yb = Flow depth-pier size factor, K_θ = Pier alignment factor, Re = Reynolds number for the pier, V = Flow velocity, V_c = Critical flow velocity, y = Flow depth, Δg = Reduced gravitational acceleration, ρ = Water density, ρ_s = Sand density, μ = Dynamic viscosity of the water.

In this research, a feed-forward neural network with the back-propagation algorithm is used to predict the maximum local scour depth around a single cylindrical bridge pier under clear-water conditions, and the results are compared with twelve of the most common predictive formulas, listed in Table 1.
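To illustrate how formulas of this type are evaluated, the following is a minimal Python sketch (not part of the original study) of the CSU equation from Table 1. The correction factors K1-K4 are given placeholder defaults of 1.0 here as an assumption; in practice they must be chosen for the actual pier nose shape, flow alignment, bed condition and bed material.

```python
import math

def csu_scour_depth(a, y, V, K1=1.0, K2=1.0, K3=1.0, K4=1.0, g=9.81):
    """CSU (Richardson and Davis, 1995) local scour estimate.

    d_s / y = 2 * K1 * K2 * K3 * K4 * (a / y)**0.65 * Fr**0.43,
    with Fr = V / sqrt(g * y).

    a : pier width (m), y : flow depth (m), V : flow velocity (m/s).
    K1-K4 default to 1.0 only as placeholders (assumption).
    Returns the local scour depth d_s in metres.
    """
    Fr = V / math.sqrt(g * y)  # Froude number of the approach flow
    return 2.0 * K1 * K2 * K3 * K4 * y * (a / y) ** 0.65 * Fr ** 0.43

# Example with values in the range of the experiments in Table 2
# (D = 49 mm pier, y = 45 mm, V = 0.172 m/s):
print(csu_scour_depth(a=0.049, y=0.045, V=0.172))
```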

Experimental Work:

Experimental measurements were conducted at the College of Engineering, University of Basrah, to observe and analyze the local scour around bridge piers. All laboratory experiments were conducted under clear-water conditions. A flume with a total length of 5.72 m, a width of 0.615 m and a height of 0.2 m was used in the experiments. At the entrance of the flume a mesh screen was installed to establish steady flow conditions. Discharge was measured by a sharp-crested rectangular weir. The flow depth was controlled by an adjustable tail gate at the end of the flume and measured by a point gauge (± 0.1 mm accuracy).

Uniform sand with d_50 = 0.348 mm was used as the bed sediment. Single vertical cylindrical piers made of wood were used in the experiments and placed in the middle of the sand bed. Before each experiment the sand bed was carefully leveled; the flume was then filled with water gradually and the pump was started at low velocity until the desired value was reached. At the end of each run the flume was drained and the scour depth was measured with a point gauge. The experimental data are presented in Table 2.

Artificial Neural Network:

An artificial neural network is a type of artificial intelligence (computer system) that attempts to simulate the way the human brain processes and stores information. An ANN is composed of a collection of interconnected processing elements called neurons or nodes; it works by creating connections between the nodes, and the strengths of these connections are called weights. Neurons are grouped in layers, and most ANN models consist of three or more layers (an input layer, one or more hidden layers and an output layer), as shown in Figure 1. The ANN learns by determining the appropriate number of neurons in the hidden layer(s) and adjusting the connection weights based on the training data. Trial and error is the best way to determine the appropriate number of hidden neurons and hidden layers [18].

[FIGURE 1 OMITTED]

Where W_ji is the weight of the connection between the ith input-layer neuron and the jth hidden-layer neuron, and W_kj is the weight of the connection between the jth hidden-layer neuron and the kth output-layer neuron.

The input data are first fed to the network through the input layer and then passed to the hidden layer to produce the result at the output layer. Each node multiplies every input by its corresponding weight, sums these products together with the bias to form the net input to the neuron, and then passes the net input through a transfer function to produce the node output. The transfer function for the hidden nodes is usually a sigmoid.
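The forward pass just described can be sketched as follows in Python/NumPy. This is an illustrative example only, not the network used in this study; the weight matrices and biases shown are hypothetical.

```python
import numpy as np

def logsig(x):
    # Sigmoid transfer function used for the hidden nodes
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(x, W_ji, b_j, W_kj, b_k):
    """One forward pass through a three-layer feed-forward network.

    x    : input vector, shape (n_inputs,)
    W_ji : hidden-layer weights, shape (n_hidden, n_inputs)
    b_j  : hidden-layer biases,  shape (n_hidden,)
    W_kj : output-layer weights, shape (n_outputs, n_hidden)
    b_k  : output-layer biases,  shape (n_outputs,)
    """
    net_j = W_ji @ x + b_j      # weighted sum plus bias for each hidden node
    out_j = logsig(net_j)       # hidden output through the sigmoid transfer function
    net_k = W_kj @ out_j + b_k  # weighted sum plus bias for each output node
    return net_k                # linear (purelin-type) output

# Hypothetical 4-3-1 network with random weights, just to show the shapes
rng = np.random.default_rng(0)
x = np.array([0.348, 0.049, 0.172, 0.045])   # four example inputs (d50, D, V, y)
y_hat = forward_pass(x,
                     rng.normal(size=(3, 4)), rng.normal(size=3),
                     rng.normal(size=(1, 3)), rng.normal(size=1))
print(y_hat)
```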

The ANN is trained with a set of inputs and known outputs, and the performance of the network is assessed using the mean square error (mse) and the regression value (R), calculated as follows [19,20]:

mse = \frac{1}{n} \sum_{k=1}^{n} (T_k - O_k)^2    (1)

R = \frac{\sum_{k=1}^{n} (T_k - \bar{T})(O_k - \bar{O})}{(n-1) S_T S_O}    (2)

S_T = \sqrt{\frac{1}{n-1} \sum_{k=1}^{n} (T_k - \bar{T})^2}    (3)

S_O = \sqrt{\frac{1}{n-1} \sum_{k=1}^{n} (O_k - \bar{O})^2}    (4)

\bar{T} = \frac{1}{n} \sum_{k=1}^{n} T_k    (5)

\bar{O} = \frac{1}{n} \sum_{k=1}^{n} O_k    (6)

Where T_k is the actual target, O_k is the network output, n is the number of data points, \bar{T} is the mean value of the targets and \bar{O} is the mean value of the network outputs.
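For reference, Eqs. (1)-(6) can be evaluated with a short NumPy sketch such as the one below; this is illustrative only, and the sample targets and outputs are made-up values.

```python
import numpy as np

def performance(T, O):
    """Mean square error and regression value per Eqs. (1)-(6)."""
    T, O = np.asarray(T, float), np.asarray(O, float)
    n = T.size
    mse = np.sum((T - O) ** 2) / n                      # Eq. (1)
    T_bar, O_bar = T.mean(), O.mean()                   # Eqs. (5), (6)
    S_T = np.sqrt(np.sum((T - T_bar) ** 2) / (n - 1))   # Eq. (3)
    S_O = np.sqrt(np.sum((O - O_bar) ** 2) / (n - 1))   # Eq. (4)
    R = np.sum((T - T_bar) * (O - O_bar)) / ((n - 1) * S_T * S_O)  # Eq. (2)
    return mse, R

# Hypothetical targets and outputs (metres), just to exercise the function
print(performance([0.032, 0.047, 0.053], [0.030, 0.049, 0.051]))
```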

The best possible regression value is one and the best possible mean square error is zero. After the network has been trained and the best training performance obtained, it should be tested with new data that were never presented during training but lie within the range of the training data. A neural network sometimes gives perfect performance on the training data yet fails to produce good results when applied to new examples (overfitting) [21]. It is therefore necessary to test the network and check whether it has merely memorized the relation between inputs and outputs instead of generalizing to new data. The network with the best testing performance is chosen as the proposed network.
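One simple way to confirm that the test data lie within the range of the training data is a column-wise range check such as the following sketch; this is an assumed pre-processing step, not one reported by the authors, and the example values are simply the variable ranges of Table 4.

```python
import numpy as np

def within_training_range(X_train, X_test):
    """True where every feature of a test sample lies inside the
    [min, max] interval seen during training (column-wise check)."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    return np.all((X_test >= lo) & (X_test <= hi), axis=1)

# Example with the variable ranges of Table 4 (d50, D, V, y)
X_train = np.array([[0.026, 0.01, 0.128, 0.02],
                    [0.3,   0.15, 0.522, 0.35]])   # extremes only, for illustration
X_test  = np.array([[0.084, 0.047, 0.166, 0.045],
                    [0.107, 0.067, 0.362, 0.165]])
print(within_training_range(X_train, X_test))      # both rows fall inside the range
```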

RESULTS AND DISCUSSIONS

A. Experimental Results:

The laboratory experiments addressed three cases: the effect of pier size, flow velocity and flow depth on the local scour, as shown in Figures 2, 3 and 4 respectively. It is found that a larger pier diameter gives a deeper local scour upstream of the pier, because the strength of the horseshoe vortex is proportional to the pier diameter. Increasing the flow velocity increases the flow intensity for the same flow depth and pier diameter, which in turn leads to a greater scour depth under clear-water conditions. Flow depth also has a proportional effect on the local scour depth: the results show that the local scour depth increases as the flow depth increases within the range of flow depths used in the experiments. This agrees with previous research, in which the scour depth is proportional to the flow depth up to a limiting value beyond which the effect vanishes.

[FIGURE 2 OMITTED]

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

B. Artificial Neural Network Results:

A feed-forward neural network with the back-propagation algorithm was used in this research to predict the maximum local scour depth around a bridge pier. A trial-and-error process was used to configure the network parameters, such as the training function, the number of hidden layers and the number of neurons in the hidden layers. The logsig transfer function was used in the hidden layer(s) and the purelin transfer function in the output layer. The network was trained with laboratory data from previous researchers, shown in Table 3. The laboratory data of Yanmaz and Altinbilek [22] were used to test the network performance. Table 4 shows the input and output variables for training and testing and the range of each of them.

The ANN was trained and tested with one and two hidden layers and different numbers of nodes (1-20) in each hidden layer, as shown in Table 5. Several training functions were examined to reach the best approximation.
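The trial-and-error search over layouts can be sketched in Python as below. scikit-learn's MLPRegressor is used here only as an assumed stand-in: it does not offer MATLAB-style training functions such as trainlm, so lbfgs replaces Levenberg-Marquardt, while the 'logistic' activation plays the role of logsig and the regressor's linear output corresponds to purelin.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def test_mse(layout, X_train, y_train, X_test, y_test):
    """Train one candidate hidden-layer layout and return its test mse."""
    net = MLPRegressor(hidden_layer_sizes=layout,   # e.g. (12,) or (12, 3)
                       activation="logistic",       # sigmoid hidden nodes (logsig-like)
                       solver="lbfgs",              # stand-in for Levenberg-Marquardt
                       max_iter=5000, random_state=0)
    net.fit(X_train, y_train)
    pred = net.predict(X_test)
    return np.mean((y_test - pred) ** 2)

def search(X_train, y_train, X_test, y_test):
    """Trial and error over one- and two-hidden-layer layouts with 1-20 nodes."""
    layouts = [(i,) for i in range(1, 21)] + \
              [(i, j) for i in range(1, 21) for j in range(1, 21)]
    scores = {l: test_mse(l, X_train, y_train, X_test, y_test) for l in layouts}
    return min(scores, key=scores.get), scores
```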

As can be seen in Table 5, the trainlm training function gave the best testing performance with both one and two hidden layers. The differences between the results are small, but the two-hidden-layer network gave the best performance, with mse = 0.26916 x 10^-4 and R = 0.96873; it is therefore chosen as the proposed network for predicting the local scour depth. Figures 5 and 6 show the regression and mse of the proposed network respectively, and Table 6 shows its specifications.

[FIGURE 5 OMITTED]

[FIGURE 6 OMITTED]

C. Importance of the Input Variables:

An artificial neural network can also be used to identify the input variables that have the greatest effect on the scouring process and on the network's predictions. Test runs were conducted with one of the four input variables omitted at a time. The results are shown in Table 7: the pier diameter has the greatest effect on the local scour, followed by the flow velocity.
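This omission test can be sketched as follows, under the same scikit-learn stand-in assumptions as before (not the authors' code): the network is retrained with each input column removed in turn and the resulting test mse is compared with that of the full four-input model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def importance_by_omission(X_train, y_train, X_test, y_test, names):
    """Retrain without each input in turn; a larger test mse means a more important input."""
    def fit_mse(Xtr, Xte):
        net = MLPRegressor(hidden_layer_sizes=(12, 3), activation="logistic",
                           solver="lbfgs", max_iter=5000, random_state=0)
        net.fit(Xtr, y_train)
        return np.mean((y_test - net.predict(Xte)) ** 2)

    results = {"All inputs": fit_mse(X_train, X_test)}
    for i, name in enumerate(names):                 # e.g. ["d50", "D", "V", "y"]
        results["No " + name] = fit_mse(np.delete(X_train, i, axis=1),
                                        np.delete(X_test, i, axis=1))
    return results
```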

D. Comparison with Previous Formulas:

The experimental data of this study, together with the test data of Yanmaz and Altinbilek [22], were applied to the proposed neural network and to the previous formulas in Table 1 to compare their performance. Figure 7 shows the performance of the ANN and the predictive formulas.

[FIGURE 7 OMITTED]

From Figure 7 it is found that the ANN model gave the best approximation to the measured values, with mse = 0.216239 x 10^-4, outperforming the previous formulas. It can also be seen that the formulas of Shen et al. [6] and CSU [10] gave good approximations among the twelve formulas.

Conclusions:

In this paper, an Artificial Neural Network is applied to predict the maximum local scour depth at cylindrical bridge piers. The feed-forward neural network with the back-propagation algorithm proved to be a good tool for predicting the local scour depth at bridge piers and was considerably more accurate than the predictive formulas used in this study. A sensitivity analysis of the input variables showed that the pier diameter has the most significant effect on the predicted local scour depth, followed by the flow velocity. From Figure 7 it can also be seen that the formulas of Shen et al. [6] and CSU [10] are the best among the twelve formulas. The laboratory experiments show that the local scour depth increases with both pier diameter and flow velocity, and that it also increases with the flow depth within the limits of the experiments.

REFERENCES

[1.] Arneson, L.A., L.W. Zevenbergen, P.F. Lagasse, P.E. Clopper, 2012. Evaluating Scour At Bridges, hydraulic engineering circular No. 18. 5th Edition, FHWA-HIF-12-003.

[2.] Laursen, E.M. and A. Toch, 1956. Scour Around Bridge Piers and Abutments. Iowa Highway Research Board, Bulletin No. 4.

[3.] Jain, S.C. and E.E. Fischer, 1979. Scour Around Circular Bridge Piers at High Froude Numbers. Iowa Institute of Hydraulic Research, Report, 220.

[4.] Melville, B.W., 1997. Pier and Abutment Scour: Integrated Approach. Journal of Hydraulic Engineering, 123(2): 125-136.

[5.] Langa, R.M., C.S. Fael, R.J. Maia, J.P. Pego and A.H. Cardoso, 2013. Clear-Water Scour at Comparatively Large Cylindrical Piers. Journal of Hydraulic Engineering, 139(11): 1117-1125.

[6.] Shen, H.W., V.R. Schneider and S. Karaki, 1969. Local Scour Around Bridge Piers. Journal of the Hydraulics Division 95(HY5): 1919-1940.

[7.] Hancu, S., 1971. Sur le calcul des affouillements locaux dans la zone des piles des ponts. Proceedings of the 14th IAHR Congress, 3: 299-313, Paris, France.

[8.] Neill, C.R., 1973. Guide to Bridge Hydraulics. Roads and Transportation Assoc. of Canada, University of Toronto Press, Toronto, Canada.

[9.] Breusers, H.N.C., G. Nicollet and H.W. Shen, 1977. Local Scour Around Cylindrical Piers. Journal of Hydraulic Research, 15(3): 211-252.

[10.] Richardson, E.V. and S.R. Davis, 1995. Evaluating Scour At Bridges. Federal Highway Administration, Hydraulic Engineering Circular No. 18, 3rd Edition.

[11.] Maatooq, J.S., 1999. Evaluation, Analysis and New Concepts of Scour Process Around Bridge Piers. PhD Thesis, University of Technology, Iraq.

[12.] Sheppard, D.M. and W. Miller, 2006. Live-Bed Local Pier Scour Experiments. Journal of Hydraulic Engineering, 132(7): 635-642.

[13.] Khwairakpam, P. and A. Mazumdar, 2009. Local Scour Around Hydraulic Structures. International Journal of Recent Trends in Engineering, 1(6): 59-61.

[14.] Kambekar, A.R. and M.C. Deo, 2003. Estimation of Pile Group Scour Using Neural Networks. Journal of Applied Ocean Research, 25: 225-234.

[15.] Lee, T.L., D.S. Jeng, G.H. Zhang and J.H. Hong, 2007. Neural Network Modeling for Estimation of Scour Depth Around Bridge Piers, 19(3): 378-386.

[16.] Bateni, S.M., S.M. Borghei and D.S. Jeng, 2007. Neural Network and Neuro-Fuzzy Assessments for Scour Depth Around Bridge Piers. Engineering Applications of Artificial Intelligence, 20: 401-414.

[17.] Kaya, A., 2010. Artificial Neural Network Study of Observed Pattern of Scour Depth Around Bridge Piers. Computers and Geotechnics, 37: 413-418.

[18.] Taylor, J.B., 2006. Methods and Procedures for Verification and Validation of Artificial Neural Networks. Springer, ISBN-13: 978-0-387-28288-6.

[19.] Hagan, M., M. Beale and H. Demuth, 2009. Neural Network Toolbox User's Guide. The MathWorks, Inc., 6th Edition, ISBN: 0-9717321-0-8.

[20.] Hagan, M.T., H.B. Demuth, M.H. Beale and O.D. Jesus, 2002. Neural Network Design. ISBN: 978-0971732-1-7, 2nd Edition.

[21.] Priddy, K.L. and P.E. Keller, 2005. Artificial Neural Networks: An Introduction. SPIE - The International Society for Optical Engineering, ISBN: 0-8194-5987-9.

[22.] Yanmaz, A.M. and H.D. Altinbilek, 1991. Study of Time-Dependent Local Scour Around Bridge Piers. Journal of Hydraulic Engineering, 117(10): 1247-1268.

[23.] Mia, M.F. and H. Nago, 2003. Design Method of Time-Dependent Local Scour at Circular Bridge Pier. Journal of Hydraulic Engineering, 129(6): 420-427.

[24.] Dey, S., S.K. Bose and L.N. Sastry, 1995. Clear Water Scour at Circular Piers: A Model. Journal of Hydraulic Engineering, 121(12): 869-876.

(1) Saleh I. Khassaf and (2) Ali Q. Abdulwhab

(1) Professor, University of Basrah Department of Civil Engineering, Basrah, Iraq.

(2) Graduate Research Student, University of Basrah, Department of Civil Engineering, Basrah, Iraq.

Received 15 May 2016; Accepted 7 July 2016; Available 22 July 2016

Address For Correspondence:

Saleh I. Khassaf, Professor, University of Basrah, Department of Civil Engineering, Basrah, Iraq.

E-mail: alialshahad@gmail.com
Table 1: Scour Depth Formulas Proposed from Previous Studies.

Author                            Formula

Laursen and Toch (1956)           d_s = 1.35 a^{0.7} y^{0.3}

Shen et al. (1969)                d_s = 0.00022 Re^{0.619},   Re = ρVD/μ

Hancu (1971)                      d_s/a = 2.42 (2V/V_c - 1) (V^2/(g a))^{1/3}

Neill (1973)                      d_s = K_s a

Breusers et al. (1977)            d_s/a = (2V/V_c - 1) (2 tanh(y/a))

Jain and Fischer (1979)           d_s/a = 1.84 Fr_c^{0.25} (y/a)^{0.3},   Fr_c = V_c/√(g y)

CSU (Richardson and Davis 1995)   d_s/y = 2 K_1 K_2 K_3 K_4 (a/y)^{0.65} Fr^{0.43},   Fr = V/√(g y)

Melville (1997)                   d_s = K_yb K_I K_d K_s K_θ K_G

Maatooq (1999)                    d_s/a = 0.519 + 2.5 (V/V_c - 0.57) (y/a)

Sheppard and Miller (2006)        d_s/a = 2.5 f_1 f_2 (1 - 1.75 (ln(V/V_c))^2),
                                  f_1 = tanh((y/a)^{0.4}),
                                  f_2 = (a/d_50) / (0.4 (a/d_50)^{1.2} + 10.6 (a/d_50)^{-0.13})

Khwairakpam et al. (2012)         d_s/a = (0.744 (y/a) - 0.367) F_d50 + (-2.38 (y/a) + 2.683),
                                  F_d50 = V/√(Δg d_50),   Δg = g (ρ_s/ρ - 1)

Rui et al. (2013)                 d_s/a = K_h K_d K_I

Table 2: Experimental Data.

Run No.   d_50 (mm)   D (mm)   V (m/s)   y (mm)   d_s (mm)

1         0.348        19     0.172    45        26.3
2         0.348        24.4   0.172    45        36.6
3         0.348        35.2   0.172    45        42.2
4         0.348        40.5   0.172    45        47
5         0.348        49     0.172    45        53.4
6         0.348        24.4   0.141    40        24
7         0.348        24.4   0.16     40        30
8         0.348        24.4   0.18     40        34.8
9         0.348        24.4   0.2      40        42
10        0.348        24.4   0.2162   40        46
11        0.348        49     0.1768   35        47.5
12        0.348        49     0.1768   40        51
13        0.348        49     0.1768   44        53
14        0.348        49     0.1768   48        57
15        0.348        49     0.1768   51        61

Table 3: Training Data

The Researcher            Number of Data Sets

Chabert and Engeldinger   12
Dey et al. [24]           18
Maatooq J.S.              82
Mia and Nago              5

Table 4: Training and Testing Variables and Their Ranges

Item               Variable      Training range   Testing range

Input variables    d_50 (cm)     0.026-0.3        0.084-0.107
                   D (m)         0.01-0.15        0.047-0.067
                   V (m/s)       0.128-0.522      0.166-0.362
                   y (m)         0.02-0.35        0.045-0.165

Output variable    d_s (m)       0.0113-0.175     0.032-0.107

Table 5: ANN Performance with One Hidden Layer and Two Hidden Layers

           One Hidden Layer

Training   Nodes   mse (test)   R
function   No.     × 10^-4      (test)    Epoch

trainlm    19      0.29363         0.95924   17
trainrp    9       0.37733         0.94761   100
traingda   19      0.52304         0.94109   669
traingdx   19      0.45525         0.93719   316
traincgf   3       0.39637         0.94624   79
traincgp   19      0.55217         0.92923   8
traincgb   16      0.41094         0.95476   137
trainscg   3       0.41031         0.94644   184
trainbfg   2       0.37213         0.94882   163
trainoss   16      0.36099         0.95173   1600
traingda   4       6.3467          0.81467   100000
traingdm   4       6.3467          0.81465   100000

           Two Hidden Layers

Training   Nodes   mse (test)   R
function   No.     × 10^-4      (test)    Epoch

trainlm    12-3    0.26916         0.96873   46
trainrp    2-5     0.39655         0.94623   473
traingda   5-9     0.74826         0.92518   193
traingdx   8-20    0.49918         0.94346   2121
traincgf   7-20    0.59609         0.91539   48
traincgp   18-18   1.5486          0.76494   17
traincgb   7-20    0.58251         0.91807   32
trainscg   3-14    0.36133         0.95046   198
trainbfg   11-19   0.61022         0.9132    30
trainoss   3-6     0.37339         0.95157   12000
traingda   9-17    4.5898          0.73305   30000
traingdm   9-17    4.5875          0.73322   30000

Table 6: Specifications of the Proposed Network

Item                                 Description

No. of nodes in the input layer      4

No. of hidden layers                 2

No. of nodes in the hidden layers    First layer           12

                                     Second layer          3

Type of activation function          First hidden layer    logsig

                                     Second hidden layer   logsig

                                     Output layer          purelin

Training function                    Levenberg-Marquardt
                                     (trainlm)

No. Nodes in the output layer        1

Table 7: Input Variable Importance

Case          mse (test) × 10^-4   R (test)

All inputs    0.26916              0.96873
No d_50       1.0032               0.86557
No D          3.1935               0.77783
No V          2.4934               0.68938
No y          0.66757              0.94003