
Approximation of the Monte Carlo Sampling Method for Reliability Analysis of Structures.

1. Introduction

The main purpose of regulations and existing approaches in the analysis and design of structures, from buildings to geotechnical structures, is to ensure safety and proper performance under probable loads. Safety means that structures should not fail under typical loads. Such a failure is not necessarily structural collapse; it can instead be defined as a certain level of damage associated with what civil engineering regulations call a "performance level" [1, 2].

Assume a load Q (or the stress and strain caused by the load) is applied to a structure with a load-bearing capacity R. If R is greater than Q, the structure is safe; if Q is greater than R, the structure is unsafe and incurs a level of damage that depends on the difference between Q and R. A Limit State Function (LSF) can therefore be defined as follows:

LSF = R - Q. (1)

Negative values of LSF imply that the capacity is not adequate to bear the applied load. Therefore, the structure is likely to undergo some sort of failure. On the other hand, positive LSF values indicate adequacy of the structural capacity; that is, the structure is likely to remain safe under the given load. With the LSF value established, failure probability can now be defined as follows:

P(failure) = P(LSF ≤ 0) = P(R ≤ Q). (2)

A great deal of research has been done on methods for calculating this probability [3, 4]. One of the most common methods for this purpose is Monte Carlo sampling [5-7]. Although this method is effective and widely used in research, it suffers from a problem that limits its scope of application: when the probability is relatively small, the number of samples the Monte Carlo method requires to make adequate predictions increases significantly, making the analysis difficult. The aim of this paper is therefore to propose an algorithm for estimating the failure probability when fewer samples than necessary are available, thereby addressing this issue.

As shown in the Monte Carlo sampling algorithm of Figure 1, first, considering the probability distribution of each variable, a sufficient quantity of random numbers is generated. The LSF is then evaluated for these random numbers, so that each random number corresponds to an LSF value. Finally, counting the number of less-than-zero values and dividing the count by the total number of data points yields an estimate of the failure probability.
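As a rough illustration of this flow, the minimal Python/NumPy sketch below (our own, not code from the paper; the normal distributions chosen for R and Q are placeholder assumptions) carries out the three steps for the LSF of (1):

```python
# Minimal sketch of the Monte Carlo flow of Figure 1 (our illustration, not the
# authors' code). The distributions used for R and Q here are placeholders.
import numpy as np

rng = np.random.default_rng(seed=1)

def monte_carlo_failure_probability(n_samples):
    # 1. Generate random numbers according to each variable's distribution.
    R = rng.normal(loc=180.0, scale=20.0, size=n_samples)  # capacity (placeholder)
    Q = rng.normal(loc=110.0, scale=15.0, size=n_samples)  # load (placeholder)
    # 2. Evaluate the limit state function (1) for every sample.
    lsf = R - Q
    # 3. Count the less-than-zero outcomes and divide by the total count.
    return np.count_nonzero(lsf <= 0) / n_samples

print(monte_carlo_failure_probability(20000))
```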

However, if the probability of failure is so small that the available number of samples becomes inadequate, the method loses its efficiency: since no less-than-zero output is likely to be found, the probability of failure is computed as zero.

In such cases, there are two logical workarounds. The first possible solution is to increase the number of random numbers and repeat the Monte Carlo analysis. However, depending on the available time and resources, this may not always be possible. The second is to seek an approach through which the probability can be approximated using the available limited number of data points. As described in the following section, this paper focuses on the second solution and proposes an algorithm to address this issue.

2. Problem Description

2.1. The Problem of the Monte Carlo Method. The problem with Monte Carlo sampling is that if the probability of failure is small, a large number of samples is needed to predict it accurately, causing a sharp increase in the required cost and time. Denoting the actual failure probability by P_actual and the value estimated by Monte Carlo sampling by P̄, Soong and Grigoriu [8] showed that the relationship between P_actual and P̄ can be expressed as follows:

E(P̄) = P_actual,   σ_P̄² = P_actual (1 - P_actual)/N,   V_P̄ = σ_P̄/E(P̄) = √[(1 - P_actual)/(N P_actual)], (3)

where N is the total number of samples and E(P̄), σ_P̄², and V_P̄ are the expected value, the variance, and the coefficient of variation of the estimated probability, respectively. As can be seen, increasing N reduces the variance and dispersion of the Monte Carlo estimate, making the results less uncertain.

Now, if the Monte Carlo sampling method is used to estimate a probability of around one percent with a coefficient of variation of 5 percent, the required number of samples is

N = (1 - P_actual)/(V_P̄² P_actual) = (1 - 0.01)/(0.05² × 0.01) = 39,600 samples. (4)

As seen above, a low failure probability demands a large number of samples, making the method difficult to apply. The root cause of this problem is that the failure probability is governed by the tail of the fitted distribution function, a region into which few, if any, of a limited sample set fall. Conventional Monte Carlo sampling therefore approximates the tail poorly, and a small error in the tail leads to a large error in the estimated failure probability. One workaround is to run a greater number of simulations so that enough samples fall in the tail and the approximation improves; however, this makes the method prohibitively expensive. The other is to use algorithms that generate more random numbers near the tail. Such algorithms typically modify the dispersion of the random numbers so that more of them fall in a particular direction or region, such as the tail [9-12]. However, these methods require additional calculations that increase the complexity of the model. The purpose of this paper is to provide an approximate but simple method for estimating small failure probabilities that can easily be programmed and implemented.

2.2. Quantification of the Problem. To further illustrate the problem with the Monte Carlo sampling method, assume we want to estimate the probability of failure from 25 samples, where the LSF is LSF = R - Q, with R a log-normally distributed random variable (mean μ_R = 180 and standard deviation σ_R = 20) and Q a random variable with an Extreme Type I distribution (mean μ_Q = 110 and standard deviation σ_Q = 15). We solved the problem in this case as follows:

(i) Two groups of 25 random numbers (ranging from zero to one) were generated for R and Q using a uniform distribution. Uniform random number generators are available in statistical and spreadsheet software [13, 14] as well as in the standard libraries of programming languages [15, 16]. The generated random numbers for R and Q are listed in the second and fifth columns of Table 1, respectively (u_i1 and u_i2).

(ii) For each random number u_i1, a corresponding value of R, r_i, was generated. For this purpose, a set of standard-normal random variables (z_i) was first obtained from u_i1 using the inverse CDF of the standard-normal distribution, Φ:

z_i = Φ⁻¹(u_i1). (5)

Then, using the relationship between the log-normal and standard-normal distributions in (6), the corresponding log-normal values, r_i, were produced:

r_i = exp(μ_lnR + σ_lnR z_i),   where σ_lnR = √[ln(1 + (σ_R/μ_R)²)] and μ_lnR = ln(μ_R) - σ_lnR²/2. (6)

(iii) For each random number u_i2, a corresponding value of Q, q_i, was then generated. For this purpose, the inverse CDF of the Extreme Type I distribution was used, as shown in (7):

q_i = F_Q⁻¹(u_i2) = μ_Q - 0.45 σ_Q - (σ_Q/1.282) ln(-ln(u_i2)). (7)

(iv) Subtracting q_i from r_i, the N values of LSF were calculated, as reported in the seventh column of Table 1.

(v) Finally, the probability of LSF < 0 was calculated using (8), where n is the number of less-than-zero values and N is the total number of data points:

P̄ = n/N. (8)

As can be seen, there were no less-than-zero values, so the Monte Carlo estimate of the failure probability is an unreasonable value of zero.
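For readers who want to reproduce this experiment, the following Python/NumPy sketch implements steps (i)-(v) (our own illustration; the random seed is arbitrary, so the generated numbers will not match those of Table 1, but with only 25 samples the estimate typically comes out as zero in the same way):

```python
# Sketch of steps (i)-(v) for the case study (our illustration; the random
# numbers differ from Table 1, so individual values will not match the table).
import math
import numpy as np
from statistics import NormalDist

mu_R, sigma_R = 180.0, 20.0     # log-normal capacity R
mu_Q, sigma_Q = 110.0, 15.0     # Extreme Type I (Gumbel) load Q
N = 25
rng = np.random.default_rng(seed=0)
std_normal = NormalDist()

# (i) uniform random numbers for R and Q
u1, u2 = rng.random(N), rng.random(N)

# (ii) log-normal samples of R via the standard-normal transform of (5)-(6)
sigma_lnR = math.sqrt(math.log(1.0 + (sigma_R / mu_R) ** 2))
mu_lnR = math.log(mu_R) - 0.5 * sigma_lnR ** 2
z = np.array([std_normal.inv_cdf(u) for u in u1])
r = np.exp(mu_lnR + sigma_lnR * z)

# (iii) Extreme Type I samples of Q via the inverse CDF of (7)
q = mu_Q - 0.45 * sigma_Q - (sigma_Q / 1.282) * np.log(-np.log(u2))

# (iv)-(v) limit state values and the estimate of (8)
lsf = r - q
print(np.count_nonzero(lsf <= 0) / N)   # usually 0 for N = 25
```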

The reason for this issue is that the failure probability in this case is so small, and the number of samples so low, that the Monte Carlo method fails to predict it with an acceptable level of accuracy. To investigate further, we developed a code for this case study and repeated the Monte Carlo analysis with 100, 1000, 5000, 10000, 15000, and 20000 random numbers. The results are presented in Table 2.

As can be seen, increasing the number of samples significantly improved the accuracy of the predicted probability. However, this process, especially when the LSF involves many random variables or is not explicitly available, is very costly and difficult. This is why an auxiliary algorithm, run alongside the Monte Carlo algorithm, is needed to estimate the failure probability from a smaller number of samples.

3. Proposed Method

As explained, owing to the low number of available samples, the conventional Monte Carlo method could not be used to calculate the failure probability accurately. Furthermore, increasing the number of samples was not a viable alternative because of time and resource limitations. Therefore, we propose an approximation algorithm for calculating the failure probability under these conditions.

Rather than counting the number of less-than-zero samples as the Monte Carlo method does, the proposed algorithm exploits the trend the data display as they approach the zero boundary. For this purpose, the LSF is first evaluated for the random variables. The LSF values are then sorted from smallest to largest (x_i), and the probability associated with each value is approximated by p_i = i/(N + 1) [8]. These probabilities are assumed to follow the standard-normal distribution, so the inverse standard-normal distribution is used to calculate the corresponding standard-normal variable z_i. The values of z_i versus x_i are then plotted, and a curve is fitted to them. Curve fitting algorithms are not discussed in this study because, depending on the type and distribution of the data, different algorithms may be appropriate; however, it is recommended to select an algorithm that approximates the data with the lowest possible error. In this paper, the least squares method is used for curve fitting [17, 18]. The fitted curve takes the place of the large number of samples usually required; that is, instead of generating a large number of samples, a trend is identified from the curve using the available data. The intersection of the curve with the vertical axis gives the standard-normal variable corresponding to LSF = 0, which marks the onset of failure. The corresponding probability can easily be calculated from the standard-normal distribution function, Φ, as follows:

Φ(z = intercept) = Failure Probability. (9)

The proposed algorithm is shown in Figure 2.
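As a concrete sketch of this procedure (our own illustration, not the authors' code; Python with NumPy and the standard library's NormalDist are assumed, and an ordinary least-squares straight line stands in for whichever curve-fitting algorithm is chosen), the whole algorithm can be packed into one small function:

```python
# Sketch of the proposed approximation algorithm of Figure 2 (our illustration).
import numpy as np
from statistics import NormalDist

def approximate_failure_probability(lsf_samples):
    std_normal = NormalDist()
    x = np.sort(np.asarray(lsf_samples, dtype=float))      # Step 1: sort LSF values
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)                       # Step 2: p_i = i/(N + 1)
    z = np.array([std_normal.inv_cdf(pi) for pi in p])      # Step 3: z_i = inverse standard-normal CDF of p_i
    slope, intercept = np.polyfit(x, z, deg=1)              # Steps 4-5: least-squares line z = a*x + b
    # Steps 6-7: the value of the fitted line at LSF = 0 is the intercept b,
    # and the failure probability is the standard-normal CDF of that intercept.
    return std_normal.cdf(intercept)
```

Applied to the 25 LSF values of Table 1, a least-squares fit of this kind should reproduce the line of (11) and a probability close to the value obtained in the step-by-step solution below.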

To demonstrate, the previously presented case study was solved using the proposed algorithm, following the step-by-step procedure below, which provided an estimate of the probability of failure.

Step 1. As shown in the second column of Table 3, the LSF values were sorted from small to large, so that the smallest and largest values are denoted by x_1 and x_N, respectively.

Step 2. For each x_i, a corresponding probability was calculated using the plotting-position formula in (10) [8], as reported in the third column of Table 3:

p_i = i/(N + 1). (10)

Step 3. As shown in the fourth column of Table 3, for each p_i, a corresponding z_i = Φ⁻¹(p_i) was calculated.

Step 4. z_i was plotted against x_i (Figure 3).

Step 5. A curve was fitted to the data (Figure 3), with the equation of the fitted curve given by (11):

z_i = 0.032923 x_i - 2.606624. (11)

Step 6. The intersection of the fitted curve with the vertical axis (the intercept) was calculated:

z(0) = 0.032923 × 0 - 2.606624 = -2.606624. (12)

Step 7. The failure probability was calculated using the CDF of the standard-normal distribution function as follows:

Failure probability = Φ(intercept) = Φ(-2.606624) = 0.004572. (13)

As can be seen, the proposed algorithm approximated the failure probability to an acceptable level of accuracy, comparable to the estimate obtained by the Monte Carlo method with 20000 random numbers (Table 2).

4. Discussion

To evaluate the proposed method more thoroughly, it must be applied to other scenarios and the results checked. Thus, we define several different LSFs and random variables here and study the efficiency of the proposed method in estimating the exceedance probability. Three LSFs were defined as follows:

LSF_1 = R - Q, (14a)

LSF_2 = 1 - Q/R, (14b)

LSF_3 = ln (R/Q). (14c)

Two different load-cases were assumed for each LSF, each with different random variables R and Q; thus, a total of 6 load-cases were defined, as shown in Table 4. It should be noted that the only data required by the proposed method are the LSF sample values; the types of variables that make up the LSF, and their distribution functions, have no direct effect on the proposed algorithm. The ability to handle random variables with different distribution functions is therefore one of the capabilities of the proposed method, and it is examined in this section.
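As an example of how such a load-case is set up, the sketch below (our own illustration, reusing the approximate_failure_probability function from the earlier sketch) samples case 6 of Table 4 and feeds the same 25 LSF values to both the plain Monte Carlo count of (8) and the proposed approximation:

```python
# Sketch of load-case 6 of Table 4: g = ln(R/Q), R ~ Normal(100, 8),
# Q ~ Extreme Type I(60, 10), sampled through the inverse CDF of (7).
# Our illustration; numerical results depend on the random seed.
import numpy as np

rng = np.random.default_rng(seed=2)
N = 25

R = rng.normal(100.0, 8.0, size=N)
u = rng.random(N)
Q = 60.0 - 0.45 * 10.0 - (10.0 / 1.282) * np.log(-np.log(u))

lsf = np.log(R / Q)                               # LSF_3 of (14c)
print(np.count_nonzero(lsf <= 0) / N)             # conventional Monte Carlo, eq. (8)
print(approximate_failure_probability(lsf))       # proposed method (defined earlier)
```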

Each of the six defined load-cases was then analyzed with the conventional Monte Carlo method using 25, 1000, 5000, 10000, and 20000 samples. The results are shown in columns 2 to 6 of Table 5. In addition, the same load-cases were analyzed by the proposed method so that the results could be examined carefully. The results of this analysis are shown in column 7 of Table 5, and the probability plots are depicted in Figure 4.

As can be seen, in almost all cases the probability of failure predicted by the proposed method from 25 samples is close to that of the conventional Monte Carlo method with 20000 samples. This demonstrates the efficiency of the proposed method for predicting small probabilities of failure. However, it should be noted that the proposed method is an approximation algorithm and aims only to approximate the failure probability. Therefore, if increasing the number of samples is possible, the conventional Monte Carlo method is the better choice; otherwise, the proposed algorithm can be used to estimate the probability.

5. Application of Proposed Method

The proposed method can be used to evaluate the reliability of buildings or geotechnical structures. To demonstrate this capability, a geotechnical case study concerning rock blasting excavation is used in this section. The task is to estimate the capacity of the rock mass against the explosion load. After the explosion, the rock medium around the blast point undergoes a severe shock load and is intensely cracked. The size of this crushed zone should be limited to a certain area in order to minimize the side effects of the explosion. For this purpose, we use the proposed method to estimate the probability of the crushed zone exceeding a certain radius; in other words, we try to predict the chance of cracks extending beyond a certain radius.

To start the calculation, we must first introduce the model of Esen et al. [19]. Based on a series of in situ tests on concrete and rock samples, Esen et al. [19] developed a formula to predict the crushed zone radius around the blast-hole:

r_c = 0.812 r_0 [P_b³ / (K σ_c²)]^0.219, (15)

where r_c is the crushed zone radius (mm), r_0 is the blast-hole radius (mm), P_b is the blast-hole pressure (Pa), K is the stiffness of the rock mass (Pa), and σ_c is the uniaxial compressive strength of the rock. K and P_b can be calculated by (16a) and (16b), respectively:

K = E_d / (1 + ν_d), (16a)

P_b = (1/8) ρ_0 D_CJ², (16b)

where E_d is the dynamic elastic modulus (Pa), ν_d is the dynamic Poisson's ratio, ρ_0 is the unexploded explosive density (kg/m³), and D_CJ is the detonation velocity (m/s). In the next step, the parameters involved were defined as random variables. Here, we assumed that these variables follow normal probability distributions with the characteristics listed in Table 6.

The LSF was set as the difference between 400 mm and the crushed zone radius, as shown in (17):

LSF = 400 - r_c. (17)

The probability of LSF < 0 is identical to the probability of the crushed zone radius exceeding 400 mm, which is the target of this section. Besides the proposed method, we used the conventional Monte Carlo method, the First-Order Reliability Method (FORM), and the Second-Order Reliability Method (SORM) to calculate this probability and then compared the results [20-23].
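As a hedged sketch of the Monte Carlo part of this analysis (our own illustration in Python/NumPy; the normal distributions and moments follow Table 6, converted to SI units, and the dynamic modulus is floored to keep rare negative draws from producing non-physical values), the exceedance probability can be estimated as follows:

```python
# Sketch of the crushed-zone exceedance analysis of (15)-(17) (our illustration).
import numpy as np

rng = np.random.default_rng(seed=3)
N = 20000

rho0    = rng.normal(950.0, 200.0, N)                 # explosive density, kg/m^3 (0.95 +/- 0.2 g/cm^3)
D_cj    = rng.normal(5000.0, 750.0, N)                # detonation velocity, m/s
E_d     = np.maximum(rng.normal(70e9, 20e9, N), 1e9)  # dynamic elastic modulus, Pa (floored)
nu_d    = rng.normal(0.25, 0.05, N)                   # dynamic Poisson's ratio
sigma_c = rng.normal(80e6, 30e6, N)                   # uniaxial compressive strength, Pa
r0      = rng.normal(80.0, 30.0, N)                   # blast-hole radius, mm

P_b = rho0 * D_cj**2 / 8.0                            # blast-hole pressure, Pa, eq. (16b)
K   = E_d / (1.0 + nu_d)                              # rock-mass stiffness, Pa, eq. (16a)
r_c = 0.812 * r0 * (P_b**3 / (K * sigma_c**2)) ** 0.219   # crushed zone radius, mm, eq. (15)

lsf = 400.0 - r_c                                     # eq. (17)
print(np.count_nonzero(lsf <= 0) / N)                 # exceedance probability, a few percent
```

The result should land near the converged Monte Carlo value reported in Table 7, and the same array of LSF samples could be passed to the proposed approximation when only a small number of samples is affordable.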

The Monte Carlo sampling method was run with 25, 1000, 5000, 10000, and 20000 samples. The results are shown in Table 7. As seen, the exceedance probability converged to 3.465 percent at 20000 samples. The proposed method was then used to analyze the same problem. The probability graph is depicted in Figure 5, and the results are shown in Table 8.

To solve the case study problem with the FORM and SORM methods, a computer program called Risk Tool (RT) [24, 25], developed for reliability analysis and risk assessment of structures, was used. To this end, the random variables were first defined in the "Models" section of the software according to the values in Table 6, and the LSF of (17) was then entered in the "Functions" part of the software. The required settings for both the FORM and SORM analyses were made in the "Methods" tab, and the analyses were finally performed with the RT software. The results are shown in Table 9.

The results of the four methods are compared in Figure 6. As seen, the proposed algorithm closely approximates the results of both the Monte Carlo and SORM methods. However, there is a relatively large difference between the FORM results and those of the other methods. This discrepancy is due to the linear approximation used by FORM, which does not match the nonlinear LSF of this case study; consequently, FORM could not approximate the exceedance probability as accurately as the other three methods.

6. Conclusion

In this paper, the fact that the Monte Carlo method requires a large number of samples to return acceptable results for low-probability failures was discussed. A simplified method was then presented to estimate the probability of such failures to an equivalent level of accuracy using a smaller number of samples. A simple case study was introduced, and the proposed algorithm was applied to it in a step-by-step fashion. Then, using 6 other load-cases, the efficiency of the proposed method was evaluated. Finally, the application of the proposed method was demonstrated on a geotechnical project, and the result was compared with other reliability methods.

The following considerations should be noted when using the proposed algorithm:

(1) The proposed method merely gives an estimate of the failure probability. Therefore, in cases where increasing the number of samples is possible, the original Monte Carlo sampling method is preferred over the proposed algorithm.

(2) The intercept of the curve fitted to the data is highly dependent on the curve fitting algorithm used, which was not discussed in this paper. As a general guideline, a curve fitting algorithm with the lowest possible error is recommended.

(3) Since the estimated probability depends on the generated random numbers, repeating the analysis may yield slightly different results. Consequently, it is recommended that the analysis be repeated several times and the average value used as the final result.

(4) As already explained, the problem with the Monte Carlo method is that only a few of the random numbers fall on the tail of the distribution function, which makes it difficult to calculate the failure probability from a small number of samples. Other researchers have proposed algorithms to improve the performance of Monte Carlo sampling, such as importance sampling techniques, particular patterns for random number generation [9, 10], or sampling in a certain range of numbers [11, 12]. As these methods do not conflict with our algorithm, they can be used in conjunction with the proposed method.

http://dx.doi.org/10.1155/2016/5726565

Competing Interests

The authors declare that they have no competing interests.

References

[1] FEMA, "Prestandard and commentary for the seismic rehabilitation of buildings," FEMA 356, American Society of Civil Engineers, 2000.

[2] Y. Bozorgnia and V. V. Bertero, Earthquake Engineering: From Engineering Seismology to Performance-Based Engineering, CRC Press, Taylor & Francis, New York, NY, USA, 2006.

[3] R. Rackwitz, "Reliability analysis--a review and some perspectives," Structural Safety, vol. 23, no. 4, pp. 365-395, 2001.

[4] R. Rebba and S. Mahadevan, "Computational methods for model reliability assessment," Reliability Engineering and System Safety, vol. 93, no. 8, pp. 1197-1207, 2008.

[5] I. M. Sobol, A Primer for the Monte Carlo Method, CRC Press, 1994.

[6] D. P. Kroese, T. Brereton, T. Taimre, and Z. I. Botev, "Why the Monte Carlo method is so important today," Wiley Interdisciplinary Reviews: Computational Statistics, vol. 6, no. 6, pp. 386-392, 2014.

[7] M. H. Kalos and P. A. Whitlock, Monte Carlo Methods, John Wiley & Sons, New York, NY, USA, 2nd edition, 2008.

[8] T. T. Soong and M. Grigoriu, Random Vibration of Mechanical and Structural Systems, Prentice Hall, New York, NY, USA, 1993.

[9] F. Grooteman, "Adaptive radial-based importance sampling method for structural reliability," Structural Safety, vol. 30, no. 6, pp. 533-542, 2008.

[10] F. Grooteman, "An adaptive directional importance sampling method for structural reliability," Probabilistic Engineering Mechanics, vol. 26, no. 2, pp. 134-141, 2011.

[11] M. D. McKay, R. J. Beckman, and W. J. Conover, "A comparison of three methods for selecting values of input variables in the analysis of output from a computer code," Technometrics, vol. 21, no. 2, pp. 239-245, 1979.

[12] N.-K. Nguyen and D. K. Lin, "A note on near-orthogonal Latin hypercubes with good space-filling properties," Journal of Statistical Theory and Practice, vol. 6, no. 3, pp. 492-500, 2012.

[13] E. J. Billo, Excel for Scientists and Engineers: Numerical Methods, John Wiley and Sons, New York, NY, USA, 2007.

[14] T. J. Quirk, Excel 2013 for Engineering Statistics: A Guide to Solving Practical Problems, Springer, New York, NY, USA, 2015.

[15] B. D. Hahn, Essential MATLAB for Scientists and Engineers, Elsevier, Philadelphia, Pa, USA, 2002.

[16] D. Xue and Y. Chen, Solving Applied Mathematical Problems with MATLAB, CRC Press, Taylor and Francis Group, New York, NY, USA, 2008.

[17] S. Arlinghaus, Practical Handbook of Curve Fitting, CRC Press, New York, NY, USA, 1994.

[18] P. G. Guest, Numerical Methods of Curve Fitting, Cambridge University Press, Cambridge, UK, 2012.

[19] S. Esen, I. Onederra, and H. A. Bilgin, "Modelling the size of the crushed zone around a blast hole," International Journal of Rock Mechanics and Mining Sciences, vol. 40, no. 4, pp. 485-495, 2003.

[20] H. O. Madsen, S. Krenk, and N. C. Lind, Methods of Structural Safety, Prentice-Hall, Englewood Cliffs, NJ, USA, 1986.

[21] Y.-G. Zhao and T. Ono, "A general procedure for first/second-order reliability method (FORM/SORM)," Structural Safety, vol. 21, no. 2, pp. 95-112, 1999.

[22] A. D. Kiureghian, "Chapter 14: first- and second-order reliability methods," in Engineering Design Reliability, pp. 302-326, CRC Press, New York, NY, USA, 2005.

[23] S.-K. Choi, R. V. Grandhi, and R. A. Canfield, Reliability-Based Structural Design, Springer, New York, NY, USA, 2007.

[24] RT, Department of Civil Engineering, The University of British Columbia, http://www.inrisk.ubc.ca/software/rt/.

[25] M. Mahsuli and T. Haukaas, "Computer program for multi-model reliability and optimization analysis," Journal of Computing in Civil Engineering, vol. 27, no. 1, pp. 87-98, 2013.

Mahdi Shadab Far and Yuan Wang

School of Civil and Transportation Engineering, Hohai University, Nanjing, Jiangsu 210098, China

Correspondence should be addressed to Mahdi Shadab Far; mahdishadabfar@yahoo.com

Received 16 March 2016; Revised 21 April 2016; Accepted 24 April 2016

Academic Editor: David Bigaud

Caption: Figure 1: Algorithm of Monte Carlo method.

Caption: Figure 2: The proposed approximation algorithm.

Caption: Figure 3: Standard-normal variable versus LSF.

Caption: Figure 4: Probability plots for different load-cases.

Caption: Figure 5: The probability graph for crushed zone radius.

Caption: Figure 6: Comparison of different methods to calculate the exceedance probability.
Table 1: Calculation of Monte Carlo method.

i    u_i1      z_i       r_i        u_i2     q_i        LSF

1    0.9671    1.8399   219.3412   0.4574   106.1226   113.2186
2    0.9374    1.5332   212.0136   0.4076   104.5165   107.4971
3    0.3290   -0.4428   170.3359   0.2162    98.2607    72.0752
4    0.1541   -1.0188   159.8069   0.1391    95.3015    64.5053
5    0.8573    1.0684   201.3745   0.2406    99.1079   102.2666
6    0.2448   -0.6908   165.7200   0.0792    92.3649    73.3551
7    0.8095    0.8759   197.1274   0.0051    83.7760   113.3514
8    0.4357   -0.1618   175.7211   0.7801   119.5504    56.1707
9    0.5298    0.0748   180.3884   0.1426    95.4491    84.9393
10   0.9847    2.1627   227.3260   0.8356   123.3391   103.9869
11   0.5555    0.1396   181.6865   0.2853   100.6010    81.0855
12   0.2554   -0.6576   166.3306   0.6398   112.6799    53.6507
13   0.9317    1.4887   210.9716   0.7672   118.7879    92.1837
14   0.6930    0.5044   189.1786   0.8658   125.9135    63.2651
15   0.4681   -0.0802   177.3174   0.0560    90.8663    86.4511
16   0.6443    0.3699   186.3817   0.3524   102.7570    83.6247
17   0.4761   -0.0600   177.7150   0.8802   127.3382    50.3768
18   0.9553    1.6988   215.9389   0.6482   113.0289   102.9100
19   0.9612    1.7648   217.5236   0.0507    90.4690   127.0547
20   0.2731   -0.6036   167.3288   0.4965   107.4198    59.9090
21   0.1432   -1.0660   158.9745   0.6972   115.1794    43.7951
22   0.0403   -1.7475   147.4149   0.6381   112.6138    34.8011
23   0.6475    0.3786   186.5616   0.2523    99.5045    87.0570
24   0.9782    2.0174   223.6960   0.7979   120.6637   103.0322
25   0.0612   -1.5447   150.7632   0.9179   131.9963    18.7669

Table 2: Results of Monte Carlo method using different numbers of samples.

     Number of samples   Failure probability

1          100                  0
2         1000                  0.006
3         5000                  0.0054
4        10000                  0.0053
5        15000                  0.00493
6        20000                  0.00455

Table 3: Related data to the proposed approximation algorithm.

i    LSF outputs     p_i = i/(N + 1)    z_i = Φ⁻¹(p_i)

1      18.7669         0.0385         -1.7688
2      34.8011         0.0769         -1.4261
3      43.7951         0.1154         -1.1984
4      50.3768         0.1538         -1.0201
5      53.6507         0.1923         -0.8694
6      56.1707         0.2308         -0.7363
7      59.9090         0.2692         -0.6151
8      63.2651         0.3077         -0.5024
9      64.5053         0.3462         -0.3957
10     72.0752         0.3846         -0.2934
11     73.3551         0.4231         -0.1940
12     81.0855         0.4615         -0.0966
13     83.6247         0.5000          0.0000
14     84.9393         0.5385          0.0966
15     86.4511         0.5769          0.1940
16     87.0570         0.6154          0.2934
17     92.1837         0.6538          0.3957
18     102.2666        0.6923          0.5024
19     102.9100        0.7308          0.6151
20     103.0322        0.7692          0.7363
21     103.9869        0.8077          0.8694
22     107.4971        0.8462          1.0201
23     113.2186        0.8846          1.1984
24     113.3514        0.9231          1.4261
25     127.0547        0.9615          1.7688

Table 4: Different load-cases.

Case      LSF            R                                Q
number                   f(x)*        μ**    σ***         f(x)             μ     σ

1         g = R - Q      Log-normal   100    10           Extreme Type I   50    8
2         g = R - Q      Uniform      80     5            Normal           50    8
3         g = 1 - Q/R    Uniform      100    5            Normal           70    10
4         g = 1 - Q/R    Normal       90     8            Log-normal       60    7
5         g = ln (R/Q)   Log-normal   100    7            Uniform          80    6
6         g = ln (R/Q)   Normal       100    8            Extreme Type I   60    10

* f(x) is the probability distribution function.

** μ is the mean.

*** σ is the standard deviation.

Table 5: Comparison of proposed method with conventional Monte Carlo method for different load-cases.

Case      Conventional Monte Carlo sampling          Proposed method
number    25    1000   5000     10000    20000       (25 samples)
1        0     0     0.0002   0.0005   0.00055   0.000548
2        0     0     0.0002   0.0008    0.0005   0.000538
3        0   0.004    0.003   0.0029    0.0028   0.002818
4        0     0     0.0024   0.0033   0.00355   0.003654
5        0   0.011   0.0106   0.0102    0.0096   0.009637
6        0   0.006   0.0054    0.005   0.00485   0.004966

Table 6: Characteristics of involved random variables.

Variable                       Mean    Standard deviation

ρ_0 (g/cm³)                    0.95      0.2
D_CJ (m/s)                     5000      750
E_d (GPa)                       70        20
ν_d                            0.25      0.05
σ_c (MPa)                       80        30
r_0 (mm)                        80        30

Table 7: The results of the Monte Carlo sampling method.

Number of samples   Exceedance probability

25                            0
1000                        0.026
5000                        0.0306
10000                       0.0327
20000                       0.03465

Table 8: The results of the proposed method.

Fitted equation        Intercept   Exceedance probability

y = 0.009x - 1.8303     -1.8303          0.0336

Table 9: The results of FORM and SORM analyses.

Reliability method   Reliability index   Exceedance probability

FORM                      1.97907              0.0239043
SORM                      1.80431              0.035591