# Optimization solution of Troesch's and Bratu's problems of ordinary type using a novel continuous genetic algorithm

1. Introduction

Nonlinear phenomena are of fundamental importance in various fields of science, engineering, and other disciplines, since most phenomena in our world are essentially nonlinear and are described by nonlinear equations. On the other hand, most models of real-life problems are still very difficult to solve analytically. Therefore, approximate and numerical solutions have been introduced.

Numerical methods are techniques for solving problems on computers by numerical calculation, often producing a table of numbers and/or graphical representations. They tend to emphasize the implementation of algorithms. The aim of numerical methods is therefore to provide systematic procedures for solving problems in numerical form. The process of solving a problem generally involves starting from initial data, using high-precision digital computers, following the steps of an algorithm, and finally obtaining the results. Often both the numerical data and the methods used are approximate.

The goal of this paper is to present an effective optimization approach, based on the continuous genetic algorithm (GA), for solving two specific nonlinear second-order, two-point boundary value problems (BVPs) as an alternative to existing methods. The present technique avoids the sensitivity issues of these problems and can be applied without any limitation on the number of mesh points. Moreover, the new technique is accurate, requires less effort to achieve the results, and is developed especially for the nonlinear case. More specifically, we consider the following two problems.

Problem 1. Troesch's problem, which is covered by the ordinary differential equation:

y''(x) − λ sinh(λy(x)) = 0, λ > 0, x ∈ [0, 1], (1)

subject to the boundary conditions

y(0) = 0, y(1) = 1. (2)

Problem 2. Bratu's problem, which is covered by the ordinary differential equation:

y''(x) + μ exp(y(x)) = 0, μ > 0, x ∈ [0, 1], (3)

subject to the boundary conditions

y(0) = 0, y(1) = 0. (4)

The closed form solution of Troesch's problem in terms of the Jacobi elliptic function has been given in  as y(x) = (2/λ) sinh⁻¹[(1/2) y'(0) sc(λx | 1 − (1/4)(y'(0))²)], where y'(0) = 2√(1 − m), the constant m being the solution of the transcendental equation sinh(λ/2) = sc(λ | m) √(1 − m), and sc(λ | m) is the Jacobi function defined by sc(λ | m) = sin φ / cos φ, where φ, λ, and m are related by the integral λ = ∫₀^φ dθ / √(1 − m sin²θ). It was noticed that a pole of y(x) occurs at a pole of sc(λx | 1 − (1/4)(y'(0))²). It was also noticed in  that the pole occurs at x_s ≈ (1/2λ) ln(16/(1 − m)), which implies that the singularity lies within the integration range if y'(0) > 8e^(−λ). The closed form solution of Bratu's problem has been given in  as y(x) = −2 ln[cosh((x − 1/2)(θ/2)) / cosh(θ/4)], where θ is the solution of the transcendental equation θ = √(2μ) cosh(θ/4). In fact, this equation has two, one, or no solution when μ < μ_c, μ = μ_c, or μ > μ_c, respectively, in which the critical value μ_c satisfies the equation 4 = √(2μ_c) sinh(θ/4). It was evaluated in  that the critical value μ_c equals 3.513830719.
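The Bratu closed form above is easy to evaluate numerically, which is convenient for the error comparisons later in the paper. The sketch below is one possible implementation, assuming SciPy's `brentq` root finder and taking the lower solution branch of the transcendental equation for μ < μ_c:

```python
import math
from scipy.optimize import brentq

def bratu_exact(x, mu):
    """Closed-form Bratu solution y(x) = -2 ln[cosh((x - 1/2)(theta/2)) / cosh(theta/4)],
    where theta solves theta = sqrt(2*mu) * cosh(theta/4) (lower branch, mu < mu_c)."""
    f = lambda t: t - math.sqrt(2.0 * mu) * math.cosh(t / 4.0)
    theta = brentq(f, 0.1, 4.0)  # sign change on [0.1, 4] brackets the lower root
    return -2.0 * math.log(math.cosh((x - 0.5) * theta / 2.0) / math.cosh(theta / 4.0))
```

For μ = 1 this gives y(0.5) ≈ 0.1405, the familiar midpoint value of the one-parameter Bratu problem, and y(0) = y(1) = 0 by the symmetry of the cosh ratio.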

Troesch's problem was first described and solved by Weibel . Troesch's equation arises in the investigation of the confinement of a plasma column under radiation pressure and also in the theory of gas porous electrodes [4-6]. Bratu's problem was first studied and solved by Bratu . Bratu's equation is used in a large variety of applications, such as models of thermal reaction processes, the Chandrasekhar model of the expansion of the universe, chemical reaction theory, nanotechnology, and radiative heat transfer [8-10].

In general, nonlinear BVPs do not always have solutions that can be obtained using analytical methods. In fact, many real physical phenomena are almost impossible to treat analytically. For this reason, several authors have proposed numerical methods to approximate the solutions of Troesch's and Bratu's problems. To mention a few, the Laplace transform decomposition method has been applied to solve Troesch's and Bratu's problems as described in [11, 12]. In , the author has also applied the decomposition method to further investigate Troesch's and Bratu's equations. Furthermore, the homotopy perturbation method is carried out in  for solving (1) and (2). The variational iteration method has been used to solve (1) and (2) as presented in . In , the authors have developed the B-spline method to solve Problem 2. Recently, a Lie-group shooting method for solving Problem 2 was proposed in .

The organization of the remainder of this paper is as follows. In Section 2, we present a short preface to optimization techniques. In Section 3, we formulate the fitness function in order to solve Problems 1 and 2 using continuous GA. Section 4 covers the description of continuous GA in detail. Simulation results are given in Section 5 in order to verify the mathematical results of the proposed method. Statistical and convergence analyses based on the numerical simulation results are provided in Section 6. Finally, in Section 7 a brief summary is presented.

2. Overview of Optimization Technique

Optimization is the process of making something better. An engineer or scientist conjures up a new idea and optimization improves on that idea. Optimization consists in trying variations on an initial concept and using the information gained to improve on the idea. A computer is the perfect tool for optimization as long as the idea or variable influencing the idea can be input in electronic format. Feed the computer some data and out comes the solution.

The terminology "best" solution implies that there is more than one solution and that the solutions are not of equal value. The definition of best is relative to the problem at hand, its method of solution, and the tolerances allowed. Thus the optimal solution depends on the person formulating the problem. Education, opinions, bribes, and amount of sleep are factors influencing the definition of best. Optimization plays a crucial role in various disciplines in science, industry, and engineering, and in almost every aspect of daily life. Optimization problems are encountered, for example, in communication systems , antenna design , semiconductors manufacturing , aerodynamics , transportation and traffic , nuclear reactor design , medicine , and economics .

Optimization occupies a fundamental position in engineering design and applications, since the classical function of the engineer is to design new, better, more efficient, and less expensive systems, as well as to devise plans and procedures for the improved operation of existing systems . The application of optimization methods to engineering problems requires the selection of the problem decision variables that are adequate to characterize the possible candidate designs or operating conditions of the system; the definition of the objective function on the basis of which candidates will be ranked to determine the best solution; and the definition of a model that describes the manner in which the problem variables are related and the way in which the performance criterion is influenced by them. The problem's model normally includes a set of equality constraints, inequality constraints, and some bounds on the variables. In its most general form, the optimization problem involves the determination of the optimal set of decision variables for a given objective function in the presence of some constraints. In the context of optimization, the "best" will always mean the candidate system with either the minimum or the maximum value of the objective function.

Optimization problems can be divided into two categories depending on whether the solution is continuous or discrete. An optimization problem with a discrete solution is known as a combinatorial optimization problem, while the continuous version is known as a continuous optimization problem. Numerical continuous optimization is the study of how to obtain the global numerical solution (a value or point) of continuous mathematical and physical problems. The importance of numerical continuous optimization has arisen in many research fields, such as science, engineering, and business [27-29].

3. Formulation of the Fitness Function

In this section, Troesch's and Bratu's problems of ordinary type are first formulated as optimization problems by direct minimization of the overall individual residual error, subject to the given boundary conditions as constraints, and are then solved using continuous GA.

To approximate the solutions of Troesch's and Bratu's problems, we stipulate that the mesh points are equally distributed over the interval [0, 1]. This condition is ensured by setting x_i = ih, i = 0, 1, ..., N, where h = 1/N. Thus, at the interior mesh points x_i, i = 1, 2, ..., N − 1, the equation to be approximated is given as

y''(x_i) = F(y(x_i)), x_1 ≤ x_i ≤ x_{N−1}, (5)

subject to the boundary conditions

y(x_0) = 0, y(x_N) = β, (6)

where F(y(x_i)) = λ sinh(λy(x_i)) and β = 1 for Troesch's problem, and F(y(x_i)) = −μ exp(y(x_i)) and β = 0 for Bratu's problem.
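In code, the right-hand side F of (5) for each problem is a direct transcription of the definitions above:

```python
import math

def F_troesch(y, lam):
    # F(y(x_i)) = lambda * sinh(lambda * y(x_i)); boundary value beta = 1
    return lam * math.sinh(lam * y)

def F_bratu(y, mu):
    # F(y(x_i)) = -mu * exp(y(x_i)); boundary value beta = 0
    return -mu * math.exp(y)
```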

The finite difference approximation of derivatives is one of the simplest and oldest methods of solving differential equations. It consists of approximating the differential operator by replacing the derivatives in the equation with difference quotients. In this work, we employ this technique to approximate the solutions of Troesch's and Bratu's problems numerically using continuous GA. The difference quotient formulas, which closely approximate y''(x_i), i = 1, 2, ..., N − 1, when h is small, using 5 points at the interior mesh points with error of order O(h³), are given as follows:

y''(x_i) ≈ Δ for i = 1, Π for i = 2, 3, ..., N − 2, and ∇ for i = N − 1, (7)

where Δ, Π, and ∇ are the forward, central, and backward differences, respectively, given as

Δ = [11y(x_0) − 20y(x_1) + 6y(x_2) + 4y(x_3) − y(x_4)] / (12h²),
Π = [−y(x_{i−2}) + 16y(x_{i−1}) − 30y(x_i) + 16y(x_{i+1}) − y(x_{i+2})] / (12h²),
∇ = [−y(x_{N−4}) + 4y(x_{N−3}) + 6y(x_{N−2}) − 20y(x_{N−1}) + 11y(x_N)] / (12h²). (8)

To complete the formulation, substituting the difference approximation formulas for y''(x_i), i = 1, 2, ..., N − 1, into (5) yields the discretized form of this equation. The resulting algebraic equations are functions of y(x_0), y(x_1), ..., y(x_N). Using the formulas in (7), the discrete equations to be optimized take the following form:

Δ = F(y(x_1)), Π = F(y(x_i)) for i = 2, 3, ..., N − 2, and ∇ = F(y(x_{N−1})). (9)

In order to construct the fitness function, we first define the residual of the general interior node, Res, as

Res(i) = Δ − F(y(x_1)) for i = 1, Π − F(y(x_i)) for i = 2, 3, ..., N − 2, and ∇ − F(y(x_{N−1})) for i = N − 1. (10)

As the second step in the construction, we define the overall individual residual function, Oir, in [l.sub.2] norm to be a function of the residuals of all interior nodes. This function may be stated as

Oir = √( Res(1)² + Σ_{i=2}^{N−2} Res(i)² + Res(N−1)² ). (11)

For the final step, we define the fitness function, Fit, as

Fit = δ/(δ + Oir), where δ is a small positive number. (12)

A mapping of the overall individual residual function into a fitness function is required in the algorithm in order to convert the minimization problem of Oir into a maximization problem of Fit; this simplifies the calculations and the plotting of results. An individual's fitness improves as the value of Oir decreases. The optimal solution of the problem, the set of nodal values, is achieved when Oir approaches zero and Fit approaches unity.
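Putting the residual, Oir, and Fit steps together, the fitness evaluation of one individual might look as follows. This is a sketch under the assumption that the 5-point difference quotients are the standard ones (forward at x_1, central at the interior nodes, backward at x_{N−1}); the formulas actually used in the paper may differ in detail:

```python
import numpy as np

def fitness(y, F, h, delta=1.0):
    """Map an individual's nodal values y[0..N] (N >= 5) to a fitness in (0, 1].

    Forms the residual at each interior node from 5-point difference
    quotients for y'', takes the overall individual residual Oir in the
    l2 norm, and returns Fit = delta / (delta + Oir).
    """
    N = len(y) - 1
    res = np.empty(N - 1)
    # forward difference at x_1
    res[0] = (11*y[0] - 20*y[1] + 6*y[2] + 4*y[3] - y[4]) / (12*h**2) - F(y[1])
    # central differences at x_2 .. x_{N-2}
    for i in range(2, N - 1):
        res[i-1] = (-y[i-2] + 16*y[i-1] - 30*y[i] + 16*y[i+1] - y[i+2]) / (12*h**2) - F(y[i])
    # backward difference at x_{N-1}
    res[N-2] = (-y[N-4] + 4*y[N-3] + 6*y[N-2] - 20*y[N-1] + 11*y[N]) / (12*h**2) - F(y[N-1])
    oir = np.sqrt(np.sum(res**2))  # overall individual residual, l2 norm
    return delta / (delta + oir)
```

A quick sanity check: a linear curve with F ≡ 0, or the quadratic x² with F ≡ 2, solves its discrete equation exactly (the 5-point stencils are exact for low-degree polynomials), so Oir vanishes and the fitness equals 1 up to rounding.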

4. Outline of the Genetic Algorithm

The GA is an optimization and search technique based on the principles of genetics and natural selection. The GA allows a population composed of many individuals to evolve under specified selection rules to a state that maximizes the fitness function. In this section, a general review of the GA is presented. After that, a detailed description of the continuous GA is given. As will be shown later, the efficiency and performance of continuous GA depend on several factors, including the design of the continuous GA operators and the settings of the system parameters.

GA, initiated in  as a stochastic search technique based on the mechanisms of natural selection and natural genetics, has received a great deal of attention for its potential as an optimization technique for solving discrete or other hard optimization problems. GA starts with an initial population of individuals generated at random. Each individual in the population represents a potential solution to the problem under consideration. The individuals evolve through successive iterations, called generations. During each generation, every individual in the population is evaluated using some measure of fitness. Then, the population of the next generation is created through genetic operators. The procedure continues until the termination conditions are satisfied. GA is a nature-inspired optimization method that can be used advantageously for many optimization problems. It imitates basic principles of life and applies genetic operators such as mutation, crossover, and selection to a sequence of alleles. The sequence of alleles is the equivalent of a chromosome in nature and is constructed by a representation that assigns a string of symbols to every possible solution of the optimization problem.

GA is an evolutionary computation technique developed for the optimization of nonlinear, constrained and unconstrained, nondifferentiable multimodal functions. Among the advantages of GA  are the following: first, it optimizes with continuous or discrete variables; second, it searches simultaneously from a wide sampling of the cost surface; third, it deals with a large number of variables; fourth, it is well suited for parallel computers; fifth, it optimizes variables with extremely complex cost surfaces (it can jump out of a local minimum); sixth, it provides a list of optimum variables, not just a single solution; seventh, it may encode the variables so that the optimization is done with the encoded variables; eighth, it works with numerically generated data, experimental data, or analytical functions.

Remark 3. The term continuous in "continuous GA" is used to emphasize the continuous nature of the optimization problem and the continuity of the resulting solution curves.

Continuous GA was developed as an efficient method for the solution of optimization problems in which the parameters to be optimized are correlated with each other or the smoothness of the solution curve must be achieved . It has been successfully applied in the motion planning of robot manipulators, which is a highly nonlinear, coupled problem [33, 34], in the solution of the collision-free path planning problem for robot manipulators , in the numerical solution of second-order, two-point BVPs , in the solution of optimal control problems , in the solution of second-order, two-point singular BVPs , and in the solution of systems of second-order regular BVPs . Its novel development has opened the door for wide applications of the algorithm in the fields of mathematics and physics. It has also been applied in the solution of fuzzy differential equations . On the other hand, the numerical solvability of other versions of differential equations and other related equations can be found in [41-48] and references therein. The reader is asked to refer to [32-40] in order to know more details about continuous GA, including its history, its justification for use, its applications, its characteristics, and its operators and control parameters.

The use of continuous GA with coupled parameters and/or smooth curves needs some justification [32, 36]. First, in the discrete initialization version of the initial population, neighbouring parameters might take opposite extreme values, which makes the probability of valuable information in this population very limited, and correspondingly the fitness will be very low. This problem is overcome by the use of continuous curves that eliminate the possibility of highly oscillating values among the neighbouring parameters and result in a valuable initial population. Second, the traditional crossover operator results in a jump in the value of the parameter at which the crossover point lies, while keeping the other parameters the same or exchanged between the two parents. This discontinuity results in a very slowly converging process. The continuous GA, on the other hand, produces smooth transitions in the parameter values during the crossover process. Third, the conventional version of the mutation process changes only the value of the parameter at which the mutation occurs, while it is necessary to make some global mutations that affect a group of neighbouring parameters, since either the parameters are coupled with each other or the curve should be smooth. To summarize, the operators of the continuous GA are of global nature and applied at the individual level, while the operators of the traditional GA are of local nature and applied at the parameter level. As a result, the operators of the traditional GA result in a step-function-like jump in the parameter values, while those of continuous GA result in smooth transitions.

Continuous GA has several advantages over conventional one when it is applied to problems with coupled parameters and/or smooth solution curves [32, 36] as follows.

(1) There are no encoding/decoding processes in continuous GA. This means that the execution time will be smaller in continuous GA case if both GA versions converge in the same number of generations.

(2) The memory requirements of conventional GA are higher than those of continuous GA because the former uses genotype and phenotype representations of the population's individuals, while the latter utilizes only the phenotype data. This makes continuous GA more suitable for problems with a larger number of parameters.

(3) In conventional GA, the actual range of the parameter values should lie within the fed range during the evolution process; otherwise, the optimal solution will not be reached. This is because the encoding/decoding processes in conventional GA require the range of values within which the solution should lie. To overcome this problem in conventional GA, a wide range of the parameter values should be provided or some adaptive range value scheme should be used. However, this problem is not encountered in continuous GA due to the fact that there are no encoding/decoding processes.

(4) The conventional GA may fail to find the optimal solution for problems with coupled parameters or smooth solution curves; it might reach only a near-optimal solution and, as a result, cannot be used in applications where the optimal solution is required. This deficiency of the conventional GA is emphasized when the dimension of the optimization problem increases or the coupling effect is accentuated.

However, when using GA in optimization problems, one should pay attention to two points: first, whether the parameters to be optimized are correlated with each other or not; second, whether there is some restriction on the smoothness of the resulting solution curve or not. In case of uncorrelated parameters or nonsmooth solution curves, the conventional GA will perform well. On the other hand, if the parameters are correlated with each other or smoothness of the solution curve is a must, then the continuous GA is preferable in this case [32-40].

To summarize the evolution process in continuous GA for solving Troesch's and Bratu's problems, an individual is a candidate solution consisting of one curve of N − 1 nodal values. The population of individuals undergoes the selection process, which results in a mating pool among which pairs of individuals are crossed over with probability p_c. This process results in an offspring generation in which every child undergoes mutation with probability p_m. After that, the next generation is produced according to the replacement strategy applied. The complete process is repeated until the convergence criterion is met, where the N − 1 parameters of the best individual are the required nodal values. The final goal of discovering the required nodal values is translated into finding the fittest individual in genetic terms. The flowchart of the algorithm is given in Figure 1.
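The loop just described can be sketched in a few dozen lines. The operator details below (tournament selection, blend crossover, and a Gaussian-bump mutation that perturbs a node together with its neighbours) are illustrative stand-ins chosen to mimic the smooth, individual-level operators of continuous GA; they are not the paper's exact operators:

```python
import numpy as np

def continuous_ga(fitness, n_params, pop_size=60, pc=0.9, pm=0.9,
                  generations=300, seed=0):
    """Illustrative continuous-GA loop over curves of n_params nodal values."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_params)
    # smooth initial population: tanh curves of random amplitude and slope
    amp = rng.uniform(-1.0, 1.0, pop_size)
    slope = rng.uniform(0.5, 5.0, pop_size)
    pop = amp[:, None] * np.tanh(slope[:, None] * x[None, :])
    fit = np.array([fitness(ind) for ind in pop])
    for _ in range(generations):
        new = [pop[fit.argmax()].copy()]            # elitism: keep the best curve
        while len(new) < pop_size:
            # binary tournament selection for each parent
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if fit[i] > fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] > fit[j] else pop[j]
            c1, c2 = p1.copy(), p2.copy()
            if rng.random() < pc:                   # blend crossover: smooth mix
                w = rng.random()
                c1, c2 = w * p1 + (1 - w) * p2, w * p2 + (1 - w) * p1
            for c in (c1, c2):
                if rng.random() < pm:               # smooth 'bump' mutation
                    centre = rng.uniform(0.0, 1.0)
                    width = rng.uniform(0.05, 0.3)
                    c += rng.normal(0.0, 0.1) * np.exp(-((x - centre) / width) ** 2)
                new.append(c)
        pop = np.array(new[:pop_size])
        fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()
```

Because the elite individual is carried over unchanged, the best fitness is non-decreasing from generation to generation, which is the behaviour the evolutionary progress plots in Section 6 rely on.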

5. Simulation Results

To validate the integrity of continuous GA and to investigate the errors of the modeling and measurements, we carried out two experiments. In fact, simulation results are carried out in order to verify the mathematical results and the theoretical statements for the optimized solutions. The results obtained by the continuous GA are compared with the analytical solution of each problem. The effects of various continuous GA operators and control parameters on the convergence speed of the proposed algorithm are also investigated in this section. The analysis includes the effect of various initialization methods on the convergence speed of the algorithm in addition to an analysis of the effect of selection schemes, the vector norm used, the crossover and mutation probabilities, the population size, and the step size.

Remark 4. The convergence speed of the algorithm, whenever used, means the average number of generations required for convergence.

The continuous GA produces one population after another. This can be done in an infinite loop. When the user specifies a fitness value to be reached, the procedure can be stopped as soon as at least one individual has a higher fitness than the desired one. Often, the user does not know exactly how large the fitness value of an acceptable solution should be. Therefore, one would like to stop the continuous GA when one cannot expect to obtain much better solutions anymore. From experience with traditional optimization algorithms, one might be tempted to observe convergence and to stop the continuous GA when the maximum fitness remains more or less constant over some populations. However, the continuous GA is stopped when one of the following conditions is met:

(1) the fitness of the best individual of the population reaches a value of 0.999999;

(2) the maximum nodal residual of the best individual of the population is less than or equal to 0.00000001;

(3) a maximum number of 3000 generations is reached;

(4) the improvement in the fitness value of the best individual in the population over 500 generations is less than 0.001.

It is to be noted that the first two conditions indicate a successful termination process (optimal solution is found), while the last two conditions point to a partially successful end depending on the fitness of the best individual in the population (near-optimal solution is reached) [32-40].
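The four stopping rules translate directly into a small check run once per generation. In the sketch below, `fit_history` is assumed to hold the best fitness of every generation so far, a bookkeeping detail the paper does not spell out:

```python
def check_termination(best_fit, max_residual, generation, fit_history):
    """Return (stop, status) for the four stopping conditions."""
    if best_fit >= 0.999999:
        return True, "optimal"          # (1) fitness target reached
    if max_residual <= 1e-8:
        return True, "optimal"          # (2) max nodal residual small enough
    if generation >= 3000:
        return True, "near-optimal"     # (3) generation budget exhausted
    if len(fit_history) > 500 and best_fit - fit_history[-501] < 0.001:
        return True, "near-optimal"     # (4) stagnation over 500 generations
    return False, "continuing"
```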

The continuous GA proposed in this work is used to solve the given Troesch's and Bratu's problems. The input data to the algorithm are divided into two parts: the continuous GA-related parameters and the problem-related parameters. The continuous GA-related parameters include the population size, N_p, the individual crossover probability, p_c, the individual mutation probability, p_m, the rank-based ratio, R_br, the initialization method, the selection scheme used, the replacement method, the immigration threshold value and the corresponding number of generations, and finally the termination criteria. The problem-related parameters include the governing Troesch's and Bratu's equations, the independent interval [0, 1], and the number of nodes, N.

Consider Troesch's problem (1) and (2) when λ = 0.5 and λ = 1. Using the continuous GA technique, taking x_i = i/N, i = 0, 1, 2, ..., N, with the fitness function (12) and the previous termination conditions, the numerical results of approximating y(x_i) for λ = 0.5 and λ = 1 at N = 10, N_p = 500, p_c = 0.9, p_m = 0.9, R_br = 0.1, and δ = 1 are tabulated in Tables 1 and 2, respectively.

Consider Bratu's problem (3) and (4) when μ = 1 and μ = 2. Using the continuous GA technique, taking x_i = i/N, i = 0, 1, 2, ..., N, with the fitness function (12) and the previous termination conditions, the numerical results of approximating y(x_i) for μ = 1 and μ = 2 at N = 10, N_p = 500, p_c = 0.9, p_m = 0.9, R_br = 0.1, and δ = 1 are tabulated in Tables 3 and 4, respectively.

Numerical comparisons for Troesch's and Bratu's problems are studied next. The conventional numerical methods that are used for comparison of Troesch's problem with continuous GA include the Laplace transform decomposition method , decomposition method , homotopy perturbation method , and variational iteration method , while on the other hand, the conventional numerical methods that are used for comparison of Bratu's problem with continuous GA include the Laplace transform decomposition method , decomposition method , B-spline method , and Lie-group shooting method . Tables 5 and 6 show a comparison between the absolute errors of our method together with other aforementioned methods for Troesch's problem, while Tables 7 and 8 show a comparison for Bratu's problem.

As is evident from the comparison results, our method performs better than the aforementioned methods in terms of accuracy and ease of use. Also, from the tables, it can be seen that the continuous GA method consistently produces accurate approximate solutions.

6. Statistical and Convergence Analysis

Throughout this section, we give the results for both Troesch's and Bratu's problems; however, in some cases we switch between the results obtained for the two problems in order not to increase the length of the paper, without loss of generality for the remaining cases and results.

Due to the stochastic nature of continuous GA, twelve different runs were made for every result obtained in this work, each using a different random number generator seed; the reported results are the average values of these runs. Each run of the continuous GA thus yields a slightly different result from the other runs [32-40]. The convergence data of Troesch's and Bratu's problems are given in Table 9.

The evolutionary progress plots of the best-fitness individual for Troesch's and Bratu's problems are shown in Figures 2 and 3, respectively. It is to be noted from the evolutionary plots that the best fitness approaches one very fast in the first stage of computations, after which it reaches a steady-state value with no further improvement. This means that the approximate solution obtained by continuous GA converges to the actual solution very fast in the first stage of computations.

The way in which the nodal values evolve is studied next. Figure 4 shows the evolution of the first, [x.sub.1], middle, [x.sub.5], and ninth, [x.sub.9], nodal values for Troesch's problem, while Figure 5 shows the evolution of the same nodal values for Bratu's problem.

It is observed from the evolutionary plots that the convergence process is divided into two stages: the coarse-tuning stage and the fine-tuning stage. The coarse-tuning stage is the initial stage, in which oscillations in the evolutionary plots occur, while the fine-tuning stage is the final stage, in which the evolutionary plots reach steady-state values and show no oscillations on visual inspection. In other words, the evolution has an initially oscillatory nature for all nodes in the same problem. As a result, all nodes in the same problem reach the near-optimal solution together.

The effect of the different types of initialization methods on the convergence speed of the algorithm is discussed next.

Three initialization methods are investigated in this paper: the first uses the modified normal Gaussian function (MNGF), the second uses the modified tangent hyperbolic function (MTHF), and the third is the mixed-type initialization method that initializes the first half of the population using the MNGF and the second half using the MTHF [32-40]. Table 10 shows that the choice of initialization method has a minor effect on the convergence speed, because the effect of the initial population usually dies out after a few tens of generations, after which the convergence speed is governed by the selection mechanism and the crossover and mutation operators [32-40]. For Troesch's problem, the MTHF results in the fastest convergence speed, while for Bratu's problem, the MNGF results in the fastest convergence speed. For a specific problem, the initialization method with the highest convergence speed is the one that provides initial solution curves close to the optimal solution of that problem; that is, the optimal solution of Troesch's problem is close to the MTHF and the optimal solution of Bratu's problem is close to the MNGF. However, since the optimal solution of any given problem is not assumed to be known, it is better to have a diverse initial population through the mixed-type initialization method. As a result, the mixed-type initialization method is used as the algorithm's default method [32-40]. The reader is asked to refer to [32-40] for more details about the initialization methods used in continuous GA, including their kinds and types and their justification and conditions for use.
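As an illustration of the mixed-type strategy, the sketch below builds half of the initial population from Gaussian-like curves and half from tanh curves, each adjusted to satisfy y(0) = 0 and y(1) = β. The exact MNGF/MTHF expressions of [32-40] are not reproduced here; these profiles are stand-ins that only capture the idea of smooth, diverse initial curves:

```python
import numpy as np

def init_population(n_nodes, pop_size, beta, seed=0):
    """Mixed-type initializer: half Gaussian-like (MNGF-style) curves,
    half tanh (MTHF-style) curves, all satisfying y(0) = 0 and y(1) = beta."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_nodes + 1)
    pop = []
    for k in range(pop_size):
        if k < pop_size // 2:
            centre, width = rng.uniform(0.2, 0.8), rng.uniform(0.1, 0.5)
            y = np.exp(-((x - centre) / width) ** 2)
            y = y - y[0]                     # enforce y(0) = 0
        else:
            y = np.tanh(rng.uniform(0.5, 5.0) * x)   # already has y(0) = 0
        y = y - x * (y[-1] - beta)           # enforce y(1) = beta, keep y(0) = 0
        pop.append(y)
    return np.array(pop)
```

The linear correction in the last step pins both boundary values without disturbing the smoothness of the curve, so every individual starts out feasible with respect to (2) or (4).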

The effect of the vector norm used in the fitness evaluation is studied here. Two vector norms are used: the L1 norm and the L2 norm. The L1 norm is governed by the equation

Oir = |Res(1)| + Σ_{i=2}^{N−2} |Res(i)| + |Res(N−1)|, (13)

while the L2 norm is governed by (11). Figure 6 shows the evolutionary progress plots for the best-of-generation individual for Troesch's problem when λ = 1 and Bratu's problem when μ = 2 using the L1 and L2 norms, while Table 11 gives the convergence speed for Troesch's and Bratu's problems for two different cases. Two observations are made in this regard. First, the evolutionary progress plots of both norms show that the L2 norm has higher fitness values than the L1 norm throughout the evolution process. Second, the L2 norm converges a little faster than the L1 norm. The key factor behind these observations is the square power appearing in the L2 norm. Regarding the first observation, it is known that for a given set of nodal residuals with values less than 1, the L1 norm results in a higher value than the L2 norm, and correspondingly the fitness value using the L2 norm will be higher than that using the L1 norm. Regarding the second observation, the L2 norm tends to select individual solutions, vectors, with nodal residuals distributed among the nodes rather than lumped, where one nodal residual is high and the remaining nodal residuals are relatively small. This distributed selection scheme results in solutions closer to the optimal one than the lumped selection scheme, and in addition, the crossover operator is more effective in the former case than in the latter. These two points result in the faster convergence speed of the L2 norm compared with the L1 norm. Furthermore, it is observed that the L2 norm is less sensitive to variations in the genetic-related and problem-related parameters. As a result, the L2 norm is preferred over the L1 norm and is used as the algorithm's default norm [32-40].
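The first observation is easy to verify numerically: for any residual vector with entries of magnitude less than 1, the L1 norm exceeds the L2 norm, so the mapped fitness (12) with δ = 1 is higher under the L2 norm. A toy check with hypothetical residual values:

```python
import numpy as np

res = np.array([0.3, 0.2, 0.1, 0.25])   # hypothetical nodal residuals, all < 1
oir_l1 = np.abs(res).sum()              # L1 norm: 0.85
oir_l2 = np.sqrt((res ** 2).sum())      # L2 norm: 0.45
fit_l1 = 1.0 / (1.0 + oir_l1)           # Fit with delta = 1
fit_l2 = 1.0 / (1.0 + oir_l2)
print(fit_l1, fit_l2)                   # the L2-based fitness is higher
```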

The particular settings of several continuous GA tuning parameters, including the probabilities of applying the crossover and mutation operators, are investigated now. These tuning parameters are typically problem dependent and have to be determined experimentally; they play a non-negligible role in the efficiency of the algorithm. Figure 7 shows the effect of the crossover probability, [p.sub.c], and the mutation probability, [p.sub.m], on the convergence speed of the algorithm for Troesch's problem when [lambda] = 0.5. It is clear from Figure 7 that as the probability values [p.sub.c] and [p.sub.m] increase gradually, the average number of generations required for convergence decreases. The best performance of the algorithm is achieved when [p.sub.c] = 0.9 and [p.sub.m] = 0.9. As a result, these values are set as the algorithm default values [32-40].
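The role of [p.sub.c] and [p.sub.m] can be sketched as follows. This is a hypothetical illustration: arithmetic crossover and single-gene Gaussian mutation are assumptions made here, not the paper's own operators, which are detailed in [32-40].

```python
import random

P_C, P_M = 0.9, 0.9  # the default probabilities reported in the study

def crossover(parent1, parent2, pc=P_C):
    # With probability pc, blend the two real-valued nodal vectors
    # (arithmetic crossover is an assumption, not the paper's operator).
    if random.random() >= pc:
        return parent1[:], parent2[:]
    a = random.random()
    child1 = [a * x + (1 - a) * y for x, y in zip(parent1, parent2)]
    child2 = [(1 - a) * x + a * y for x, y in zip(parent1, parent2)]
    return child1, child2

def mutate(individual, pm=P_M, scale=0.1):
    # With probability pm, perturb one randomly chosen interior gene;
    # the boundary genes stay fixed so that y(0) and y(1) are preserved.
    child = individual[:]
    if random.random() < pm and len(child) > 2:
        i = random.randrange(1, len(child) - 1)
        child[i] += random.gauss(0.0, scale)
    return child
```

Because both parents carry the same boundary values, arithmetic blending leaves them intact, so the boundary conditions of the BVP are never violated by either operator.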

The influence of the population size on the convergence speed, the average fitness, and the corresponding errors of continuous GA is studied next for Troesch's problem when [lambda] = 1, as shown in Figure 8. Small population sizes require a larger number of generations for convergence and carry a higher risk of being trapped in local minima, while large population sizes require more fitness evaluations, which means longer execution times. However, it is noted that the improvement in the convergence speed becomes almost negligible beyond a population size of 700.

Now, the effect of the number of nodes on the convergence speed, the average fitness, and the corresponding errors is explored. Table 12 gives the relevant data for Bratu's problem when [mu] = 1. It is observed that a reduction in the step size results in a reduction in the error and, correspondingly, an improvement in the accuracy of the obtained solution. This agrees with the well-known behavior of finite difference schemes, where more accurate solutions are achieved by reducing the step size. On the other hand, the cost to be paid while going in this direction is the rapid increase in the number of generations required for convergence. For instance, as the number of nodes increases from 5 to 10 to 20, the required number of generations for convergence jumps from about 600 to 1130 to 2400, that is, multiplication factors of roughly 1.89 and 2.12, respectively.
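The finite-difference flavor of the nodal residual can be made concrete. The sketch below assumes the standard second-order central difference on a uniform mesh for Bratu's equation y''(x) + [mu]e^(y(x)) = 0; the paper's exact residual definition may differ in detail.

```python
import math

def bratu_residuals(y, mu):
    # Central-difference residual at the interior nodes of a uniform mesh
    # on [0, 1]:  Res(i) = (y[i-1] - 2*y[i] + y[i+1]) / h^2 + mu * exp(y[i]).
    n = len(y) - 1          # number of subintervals; step size h = 1/n
    h = 1.0 / n
    return [(y[i - 1] - 2.0 * y[i] + y[i + 1]) / h**2 + mu * math.exp(y[i])
            for i in range(1, n)]

# Sanity check: for the trivial candidate y = 0, the difference quotient
# vanishes and the residual reduces to mu at every interior node.
y = [0.0] * 11
assert all(abs(r - 2.0) < 1e-9 for r in bratu_residuals(y, mu=2.0))
```

Halving the step size h reduces the truncation error of this stencil by roughly a factor of four, which is the mechanism behind the accuracy gains reported in Table 12, while each extra node adds one more gene to every individual and so slows the evolution.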

Finally, the influence of the selection schemes most commonly used by the GA community on the performance of the continuous GA is investigated. Table 13 presents the effect of the selection scheme on the convergence speed, the average fitness, and the corresponding errors for Bratu's problem when [mu] = 2. It is clear from Table 13 that the rank-based selection scheme has the fastest convergence speed. The tournament selection with replacement and tournament selection without replacement approaches come in second place with almost similar convergence speeds. The roulette wheel, stochastic universal, and half biased selection schemes converge more slowly than the rest of the methods, with the half biased scheme being the slowest. The reader is kindly requested to consult [32-40] for more details about the selection schemes used in the algorithm.
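For reference, a common form of rank-based selection can be sketched as follows. Linear ranking is an assumption here; the paper's exact scheme is described in its cited references [32-40].

```python
import random

def rank_based_select(population, fitnesses):
    # Linear rank-based selection: the selection weight depends only on the
    # fitness rank, not on the raw fitness values, which makes the scheme
    # insensitive to the scale of the fitness function.
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    weights = [0] * len(population)
    for rank, i in enumerate(order, start=1):
        weights[i] = rank          # worst gets weight 1, best gets weight N
    return random.choices(population, weights=weights, k=1)[0]
```

Because weights grow linearly with rank, the best individual is favored but never dominates outright, which helps explain the steady convergence behavior attributed to this scheme in Table 13.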

7. Summary

This paper has introduced a new optimization technique based on the use of continuous GA, in which two smooth solution curves are used for representing the required nodal values. The continuous GA was found to be accurate, and the boundary conditions of the problems are satisfied exactly. Simulations were carried out in order to verify the mathematical results and the theoretical statements for the optimized solutions. The applicability and efficiency of the proposed algorithm for the solution of different cases of Troesch's and Bratu's problems were investigated. In addition, the effect of different parameters, including the evolution of nodal values, the initialization method, the selection method, the vector norm used, the crossover and mutation probabilities, the population size, and the step size, was studied.

http://dx.doi.org/10.1155/2014/401696

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their thanks to unknown referees for their careful reading and helpful comments.

References

 S. M. Roberts and J. S. Shipman, "On the closed form solution of Troesch's problem," Journal of Computational Physics, vol. 21, no. 3, pp. 291-304, 1976.

 U. M. Ascher, R. M. M. Mattheij, and R. D. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 1995.

 E. S. Weibel, The Plasma in Magnetic Field, Edited by Landshoff R. K. M, Stanford University Press, Stanford, Calif, USA, 1958.

 E. S. Weibel, "On the confinement of a plasma by magnetostatic fields," Physics of Fluids, vol. 2, pp. 52-56, 1959.

 V. S. Markin, A. A. Chernenko, Y. A. Chizmadehev, and Y. G. Chirkov, "Aspects of the theory of gas porous electrodes," in Fuel Cells: Their Electrochemical Kinetics, V. S. Bagotskii and Y. B. Vasilev, Eds., pp. 21-33, Consultants Bureau, New York, NY, USA, 1966.

 D. Gidaspow and B. S. Baker, "A model for discharge of storage batteries," Journal of the Electrochemical Society, vol. 120, pp. 1005-1010, 1973.

 G. Bratu, "Sur certaines équations intégrales non linéaires," Comptes Rendus, vol. 150, pp. 896-899, 1910.

 R. Buckmire, "Application of a Mickens finite-difference scheme to the cylindrical Bratu-Gelfand problem," Numerical Methods for Partial Differential Equations, vol. 20, no. 3, pp. 327-337, 2004.

 J. S. McGough, "Numerical continuation and the Gelfand problem," Applied Mathematics and Computation, vol. 89, no. 1-3, pp. 225-239, 1998.

 A. S. Mounim and B. M. de Dormale, "From the fitting techniques to accurate schemes for the Liouville-Bratu-Gelfand problem," Numerical Methods for Partial Differential Equations, vol. 22, no. 4, pp. 761-775, 2006.

 S. A. Khuri, "A numerical algorithm for solving Troesch's problem," International Journal of Computer Mathematics, vol. 80, no. 4, pp. 493-498, 2003.

 S. A. Khuri, "A new approach to Bratu's problem," Applied Mathematics and Computation, vol. 147, no. 1, pp. 131-136, 2004.

 E. Deeba, S. A. Khuri, and S. Xie, "An algorithm for solving boundary value problems," Journal of Computational Physics, vol. 159, no. 2, pp. 125-138, 2000.

 X. Feng, L. Mei, and G. He, "An efficient algorithm for solving Troesch's problem," Applied Mathematics and Computation, vol. 189, no. 1, pp. 500-507, 2007.

 S. Momani, S. Abuasad, and Z. Odibat, "Variational iteration method for solving nonlinear boundary value problems," Applied Mathematics and Computation, vol. 183, no. 2, pp. 1351-1358, 2006.

 H. Caglar, N. Caglar, M. Ozer, A. Valaristos, and A. N. Anagnostopoulos, "B-spline method for solving Bratu's problem," International Journal of Computer Mathematics, vol. 87, no. 8, pp. 1885-1891, 2010.

 S. Abbasbandy, M. S. Hashemi, and C.-S. Liu, "The Lie-group shooting method for solving the Bratu equation," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 11, pp. 4238-4249, 2011.

 S. T. Cheng, "Topological optimization of a reliable communication network," IEEE Transactions on Reliability, vol. 47, pp. 225-233, 1998.

 I. S. Misra, A. Raychowdhury, K. K. Mallik, and M. N. Roy, "Design and optimization of a nonplanar multiple array using genetic algorithms for mobile communications," Microwave and Optical Technology Letters, vol. 32, pp. 301-304, 2002.

 J. Burm, "Optimization of high-speed metal semiconductor metal photodetectors," IEEE Photonics Technology Letters, vol. 6, pp. 722-724, 1994.

 A. Vossinis, "Shape optimization of aerodynamics using nonlinear generalized minimal residual algorithm," Optimal Control Applications & Methods, vol. 16, pp. 229-249, 1995.

 R. Fondacci, "Combinatorial issues in air traffic optimization," Transportation Science, vol. 32, pp. 256-267, 1998.

 E. de Klerk, C. Roos, T. Terlaky et al., "Optimization of nuclear reactor reloading patterns," Annals of Operations Research, vol. 69, pp. 65-84, 1997.

 Y. Cherruault, "Global optimization in biology and medicine," Mathematical and Computer Modelling, vol. 20, no. 6, pp. 119-132, 1994.

 J. G. Rowse, "On the solution of optimal tax models and other optimization models in economics," Economics Letters, vol. 18, pp. 217-222, 1985.

 G. V. Reklaitis, A. Ravindran, and K. M. Ragsdell, Engineering Optimization: Methods and Applications, John Wiley & Sons, New York, NY, USA, 1983.

 M. Sebag and A. Ducoulombier, "Extending population-based incremental learning to continuous search spaces," in Parallel Problem Solving from Nature, pp. 418-427, 1998.

 P. Larranaga, R. Etxeberria, J. A. Lozano, and J. M. Pena, "Optimization in continuous domains by learning and simulation of Gaussian networks," in Proceedings of the Genetic and Evolutionary Computation Conference Workshop Program, pp. 201-204, Las Vegas, Nev, USA, 2000.

 P. A. N. Bosman and D. Thierens, "Expanding from discrete to continuous estimation of distribution algorithms: the IDEA," in Parallel Problem Solving from Nature, pp. 767-776, 2000.

 D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.

 R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, John Wiley & Sons, 2004.

 Z. S. Abo-Hammour, Advanced continuous genetic algorithms and their applications in the motion planning of robotic manipulators and the numerical solution of boundary value problems [Ph.D. thesis], Quaid-i-Azam University, Islamabad, Pakistan, 2002.

 Z. S. Abo-Hammour, "A novel continuous genetic algorithms for the solution of the cartesian path generation problem of robot manipulators," in Robot Manipulators: New Research, J. X. Lui, Ed., pp. 133-190, Nova Science Publishers, New York, NY, USA, 2005.

 Z. S. Abo-Hammour, N. Mirza, S. Mirza, and M. Arif, "Cartesian path planning of robot manipulators using continuous genetic algorithms," Robotics and Autonomous Systems, vol. 41, pp. 179-223, 2002.

 Z. S. Abo-Hammour, O. Alsmadi, S. I. Bataineh, M. A. Al-Omari, and N. Affach, "Continuous genetic algorithms for collision-free cartesian path planning of robot manipulators," International Journal of Advanced Robotic Systems, vol. 8, pp. 14-36, 2011.

 Z. S. Abo-Hammour, M. Yusuf, N. Mirza, S. Mirza, M. Arif, and J. Khurshid, "Numerical solution of second-order, two-point boundary value problems using continuous genetic algorithms," International Journal for Numerical Methods in Engineering, vol. 61, pp. 1219-1242, 2004.

 Z. S. Abo-Hammour, A. Al-Asasfeh, A. Al-Smadi, and O. Alsmadi, "A novel continuous genetic algorithm for the solution of optimal control problems," Optimal Control Applications and Methods, vol. 32, pp. 414-432, 2010.

 O. Abu Arqub, Z. Abo-Hammour, S. Momani, and N. Shawagfeh, "Solving singular two-point boundary value problems using continuous genetic algorithm," Abstract and Applied Analysis, vol. 2012, Article ID 205391, 25 pages, 2012.

 O. Abu Arqub, Z. S. Abo-Hammour, and S. Momani, "Application of continuous genetic algorithm for nonlinear system of second-order boundary value problems," Applied Mathematics and Information Sciences, vol. 8, pp. 235-248, 2014.

 O. Abu Arqub, Numerical solution of fuzzy differential equation using continuous genetic algorithms [Ph.D. thesis], University of Jordan, Amman, Jordan, 2008.

 Z. Abo-Hammour, O. Alsmadi, S. Momani, and O. Abu Arqub, "A genetic algorithm approach for prediction of linear dynamical systems," Mathematical Problems in Engineering, vol. 2013, Article ID 831657, 12 pages, 2012.

 O. Abu Arqub, M. Al-Smadi, and S. Momani, "Application of reproducing kernel method for solving nonlinear Fredholm-Volterra integrodifferential equations," Abstract and Applied Analysis, vol. 2012, Article ID 839836, 16 pages, 2012.

 M. Al-Smadi, O. Abu Arqub, and S. Momani, "A computational method for two-point boundary value problems of fourth-order mixed integrodifferential equations," Mathematical Problems in Engineering, vol. 2013, Article ID 832074, 10 pages, 2013.

 O. Abu-Arqub, A. El-Ajou, S. Momani, and N. Shawagfeh, "Analytical solutions of fuzzy initial value problems by HAM," Applied Mathematics & Information Sciences, vol. 7, no. 5, pp. 1903-1919, 2013.

 O. A. Arqub, M. Al-Smadi, and N. Shawagfeh, "Solving Fredholm integro-differential equations using reproducing kernel Hilbert space method," Applied Mathematics and Computation, vol. 219, no. 17, pp. 8938-8948, 2013.

 O. Abu Arqub, A. El-Ajou, A. S. Bataineh, and I. Hashim, "A representation of the exact solution of generalized Lane-Emden equations using a new analytical method," Abstract and Applied Analysis, vol. 2013, Article ID 378593, 10 pages, 2013.

 O. Abu Arqub, Z. Abo-Hammour, R. Al-Badarneh, and S. Momani, "A reliable analytical method for solving higher-order initial value problems," Discrete Dynamics in Nature and Society, vol. 2013, Article ID 673829, 12 pages, 2013.

 N. Shawagfeh, O. Abu Arqub, and S. Momani, "Analytical solution of nonlinear second-order periodic boundary value problem using reproducing kernel method," Journal of Computational Analysis and Applications, vol. 16, pp. 750-762, 2014.

 J. H. Jiang, J. H. Wang, X. Chu, and R. Q. Yu, "Clustering data using a modified integer genetic algorithm," Analytica Chimica Acta, vol. 354, pp. 263-274, 1997.

 E. M. Rudnick, J. H. Patel, G. S. Greenstein, and T. M. Niermann, "A genetic algorithm framework for test generation," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 16, pp. 1034-1044, 1997.

 J. R. Vanzandt, "A Genetic Algorithm for Search Route Planning," Tech. Rep. ESD-TR-92-262, United States Air Force, 1992.

Zaer Abo-Hammour, (1) Omar Abu Arqub, (2) Shaher Momani, (3,4) and Nabil Shawagfeh (2)

(1) Department of Mechatronics Engineering, Faculty of Engineering, The University of Jordan, Amman 11942, Jordan

(2) Department of Mathematics, Faculty of Science, Al-Balqa' Applied University, Salt 19117, Jordan

(3) Department of Mathematics, Faculty of Science, The University of Jordan, Amman 11942, Jordan

(4) Nonlinear Analysis and Applied Mathematics (NAAM) Research Group, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia

Correspondence should be addressed to Omar Abu Arqub; o.abuarqub@bau.edu.jo

Received 23 August 2013; Accepted 16 December 2013; Published 9 February 2014

```
Table 1: Numerical results for Troesch's problem when [lambda] = 0.5.

[x.sub.i]   Exact value     Approximate value   [absolute value of Err([x.sub.i])]   [absolute value of Res([x.sub.i])]

0           0               0                   0                          0
0.1         0.0951769020    0.0951768506        5.14476384 x [10.sup.-8]   4.33487462 x [10.sup.-7]
0.2         0.1906338691    0.1906337234        1.45741828 x [10.sup.-7]   9.77131143 x [10.sup.-7]
0.3         0.2866534030    0.2866533395        6.34609416 x [10.sup.-8]   4.75142679 x [10.sup.-7]
0.4         0.3835229288    0.3835228897        3.90866823 x [10.sup.-8]   3.31275728 x [10.sup.-7]
0.5         0.4815373854    0.4815373121        7.33121592 x [10.sup.-8]   6.41709009 x [10.sup.-7]
0.6         0.5810019749    0.5810019054        6.95247591 x [10.sup.-8]   8.39064329 x [10.sup.-7]
0.7         0.6822351326    0.6822350995        3.31330484 x [10.sup.-8]   8.13306207 x [10.sup.-7]
0.8         0.7855717867    0.7855717450        4.16561352 x [10.sup.-8]   6.07573447 x [10.sup.-7]
0.9         0.8913669875    0.8913669298        5.77267702 x [10.sup.-8]   2.24858269 x [10.sup.-7]
1           1               1                   0                          0

Table 2: Numerical results for Troesch's problem when [lambda] = 1.

[x.sub.i]   Exact value     Approximate value   [absolute value of Err([x.sub.i])]   [absolute value of Res([x.sub.i])]

0           0               0                   0                          0
0.1         0.0817969966    0.0817961006        8.95972167 x [10.sup.-7]   6.02388187 x [10.sup.-5]
0.2         0.1645308709    0.1645279896        2.88132675 x [10.sup.-6]   6.70066325 x [10.sup.-4]
0.3         0.2491673608    0.2491660068        1.35402743 x [10.sup.-6]   6.78708955 x [10.sup.-4]
0.4         0.3367322092    0.3367315761        6.33142603 x [10.sup.-7]   3.82537236 x [10.sup.-5]
0.5         0.4283471610    0.4283456762        1.48480592 x [10.sup.-6]   7.09360764 x [10.sup.-4]
0.6         0.5252740296    0.5252726497        1.37988706 x [10.sup.-6]   7.37034856 x [10.sup.-4]
0.7         0.6289711434    0.6289706787        4.64675846 x [10.sup.-7]   7.35668506 x [10.sup.-5]
0.8         0.7411683782    0.7411675558        8.22354112 x [10.sup.-7]   7.16391614 x [10.sup.-5]
0.9         0.8639700206    0.8639681690        1.85163340 x [10.sup.-6]   7.21723770 x [10.sup.-5]
1           1               1                   0                          0

Table 3: Numerical results for Bratu's problem when [mu] = 1.

[x.sub.i]   Exact value     Approximate value   [absolute value of Err([x.sub.i])]   [absolute value of Res([x.sub.i])]

0           0               0                   0                          0
0.1         0.0498467900    0.0498469017        1.11747073 x [10.sup.-7]   1.46008376 x [10.sup.-6]
0.2         0.0891899350    0.0891901414        2.06424850 x [10.sup.-7]   3.90633337 x [10.sup.-6]
0.3         0.1176090956    0.1176093611        2.65491661 x [10.sup.-7]   2.42396771 x [10.sup.-6]
0.4         0.1347902526    0.1347905498        2.97185578 x [10.sup.-7]   1.58574073 x [10.sup.-6]
0.5         0.1405392142    0.1405395202        3.05996395 x [10.sup.-7]   2.33841807 x [10.sup.-6]
0.6         0.1347902526    0.1347905431        2.90489351 x [10.sup.-7]   2.69406948 x [10.sup.-6]
0.7         0.1176090956    0.1176093394        2.43813586 x [10.sup.-7]   1.71196336 x [10.sup.-6]
0.8         0.0891899350    0.0891901126        1.77595908 x [10.sup.-7]   1.13445094 x [10.sup.-6]
0.9         0.0498467900    0.0498468894        9.94223941 x [10.sup.-8]   1.95960534 x [10.sup.-6]
1           0               0                   0                          0

Table 4: Numerical results for Bratu's problem when [mu] = 2.

[x.sub.i]   Exact value     Approximate value   [absolute value of Err([x.sub.i])]   [absolute value of Res([x.sub.i])]

0           0               0                   0                          0
0.1         0.1144107440    0.1144123667        1.62268051 x [10.sup.-6]   3.02045290 x [10.sup.-5]
0.2         0.2064191156    0.2064220793        2.96373587 x [10.sup.-6]   5.97375566 x [10.sup.-5]
0.3         0.2738793116    0.2738831613        3.84970866 x [10.sup.-6]   3.98775778 x [10.sup.-5]
0.4         0.3150893646    0.3150937394        4.37480887 x [10.sup.-6]   3.42529172 x [10.sup.-5]
0.5         0.3289524214    0.3289570153        4.59390598 x [10.sup.-6]   4.43231785 x [10.sup.-5]
0.6         0.3150893646    0.3150937944        4.42984785 x [10.sup.-6]   4.44801409 x [10.sup.-5]
0.7         0.2738793116    0.2738831212        3.80957282 x [10.sup.-6]   2.65423819 x [10.sup.-5]
0.8         0.2064191156    0.2064219434        2.82777504 x [10.sup.-6]   2.22101666 x [10.sup.-5]
0.9         0.1144107440    0.1144123281        1.58406997 x [10.sup.-6]   3.81609672 x [10.sup.-5]
1           0               0                   0                          0

Table 5: Absolute error results for Troesch's problem when [lambda] = 0.5.

[x.sub.i]   Method in              Method in              Method in              Method in              Continuous GA

0           0                      0                      0                      0                      0
0.1         7.6745 x [10.sup.-4]   7.6145 x [10.sup.-4]   7.6266 x [10.sup.-4]   4.8651 x [10.sup.-3]   5.1448 x [10.sup.-8]
0.2         1.4949 x [10.sup.-3]   1.4842 x [10.sup.-3]   1.4855 x [10.sup.-3]   9.7001 x [10.sup.-3]   1.4574 x [10.sup.-7]
0.3         2.1410 x [10.sup.-3]   2.1269 x [10.sup.-3]   2.1273 x [10.sup.-3]   1.4475 x [10.sup.-2]   6.3461 x [10.sup.-8]
0.4         2.6619 x [10.sup.-3]   2.6458 x [10.sup.-3]   2.6446 x [10.sup.-3]   1.9154 x [10.sup.-2]   3.9087 x [10.sup.-8]
0.5         3.0098 x [10.sup.-3]   2.9929 x [10.sup.-3]   2.9900 x [10.sup.-3]   2.3704 x [10.sup.-2]   7.3312 x [10.sup.-8]
0.6         3.1313 x [10.sup.-3]   3.1150 x [10.sup.-3]   3.1108 x [10.sup.-3]   2.8080 x [10.sup.-2]   6.9525 x [10.sup.-8]
0.7         2.9660 x [10.sup.-3]   2.9517 x [10.sup.-3]   2.9471 x [10.sup.-3]   3.2235 x [10.sup.-2]   3.3133 x [10.sup.-8]
0.8         2.4448 x [10.sup.-3]   2.4338 x [10.sup.-3]   2.4300 x [10.sup.-3]   3.6110 x [10.sup.-2]   4.1656 x [10.sup.-8]
0.9         1.4872 x [10.sup.-3]   1.4810 x [10.sup.-3]   1.4792 x [10.sup.-3]   3.9641 x [10.sup.-2]   5.7727 x [10.sup.-8]
1           0                      1.1000 x [10.sup.-9]   0                      4.2740 x [10.sup.-2]   0

Table 6: Absolute error results for Troesch's problem when [lambda] = 1.

[x.sub.i]   Method in              Method in              Method in              Method in              Continuous GA

0           0                      0                      0                      0                      0
0.1         2.8661 x [10.sup.-3]   2.4518 x [10.sup.-3]   2.5847 x [10.sup.-3]   1.8370 x [10.sup.-2]   8.9597 x [10.sup.-7]
0.2         5.8663 x [10.sup.-3]   4.8998 x [10.sup.-3]   5.0899 x [10.sup.-3]   3.6808 x [10.sup.-2]   2.8813 x [10.sup.-6]
0.3         8.2321 x [10.sup.-3]   7.2471 x [10.sup.-3]   7.4256 x [10.sup.-3]   5.5374 x [10.sup.-2]   1.3540 x [10.sup.-6]
0.4         1.0498 x [10.sup.-2]   9.3535 x [10.sup.-3]   9.4785 x [10.sup.-3]   7.4109 x [10.sup.-2]   6.3314 x [10.sup.-7]
0.5         1.2262 x [10.sup.-2]   1.1055 x [10.sup.-2]   1.1095 x [10.sup.-2]   9.3026 x [10.sup.-2]   1.4848 x [10.sup.-6]
0.6         1.3272 x [10.sup.-2]   1.2092 x [10.sup.-2]   1.2056 x [10.sup.-2]   1.1209 x [10.sup.-1]   1.3799 x [10.sup.-6]
0.7         1.3171 x [10.sup.-2]   1.2113 x [10.sup.-2]   1.2039 x [10.sup.-2]   1.3119 x [10.sup.-1]   4.6468 x [10.sup.-7]
0.8         1.1454 x [10.sup.-2]   1.0620 x [10.sup.-2]   1.0565 x [10.sup.-2]   1.5012 x [10.sup.-1]   8.2235 x [10.sup.-7]
0.9         7.4049 x [10.sup.-3]   6.9387 x [10.sup.-3]   6.9135 x [10.sup.-3]   1.6849 x [10.sup.-1]   1.8516 x [10.sup.-6]
1           0                      1.8020 x [10.sup.-6]   0                      1.8565 x [10.sup.-1]   0

Table 7: Absolute error results for Bratu's problem when [mu] = 1.

[x.sub.i]   Method in              Method in              Method in              Method in              Continuous GA

0           0                      0                      0                      0                      0
0.1         1.9788 x [10.sup.-6]   2.6851 x [10.sup.-3]   2.9797 x [10.sup.-6]   7.5085 x [10.sup.-7]   1.1175 x [10.sup.-7]
0.2         3.9394 x [10.sup.-6]   2.0219 x [10.sup.-3]   5.4660 x [10.sup.-6]   1.0182 x [10.sup.-6]   2.0642 x [10.sup.-7]
0.3         5.8548 x [10.sup.-6]   1.5234 x [10.sup.-4]   7.3357 x [10.sup.-6]   9.0475 x [10.sup.-7]   2.6549 x [10.sup.-7]
0.4         7.7038 x [10.sup.-6]   2.2017 x [10.sup.-3]   8.4967 x [10.sup.-6]   5.2393 x [10.sup.-7]   2.9719 x [10.sup.-7]
0.5         9.4665 x [10.sup.-6]   3.0155 x [10.sup.-3]   8.8921 x [10.sup.-6]   5.0669 x [10.sup.-9]   3.0600 x [10.sup.-7]
0.6         1.1112 x [10.sup.-5]   2.2017 x [10.sup.-3]   8.4967 x [10.sup.-6]   5.1386 x [10.sup.-7]   2.9049 x [10.sup.-7]
0.7         1.2572 x [10.sup.-5]   1.5234 x [10.sup.-4]   7.3357 x [10.sup.-6]   8.9485 x [10.sup.-7]   2.4381 x [10.sup.-7]
0.8         1.3475 x [10.sup.-5]   2.0219 x [10.sup.-3]   5.4660 x [10.sup.-6]   1.0086 x [10.sup.-6]   1.7760 x [10.sup.-7]
0.9         1.1968 x [10.sup.-5]   2.6851 x [10.sup.-3]   2.9797 x [10.sup.-6]   7.4160 x [10.sup.-7]   9.9422 x [10.sup.-8]
1           0                      0                      0                      0                      0

Table 8: Absolute error results for Bratu's problem when [mu] = 2.

[x.sub.i]   Method in              Method in              Method in              Method in              Continuous GA

0           0                      0                      0                      0                      0
0.1         2.1290 x [10.sup.-3]   1.5217 x [10.sup.-2]   1.7179 x [10.sup.-5]   4.0341 x [10.sup.-6]   1.6227 x [10.sup.-6]
0.2         4.2097 x [10.sup.-3]   1.4675 x [10.sup.-2]   3.2597 x [10.sup.-5]   5.7027 x [10.sup.-6]   2.9637 x [10.sup.-6]
0.3         6.1868 x [10.sup.-3]   5.8878 x [10.sup.-3]   4.4899 x [10.sup.-5]   5.2212 x [10.sup.-6]   3.8497 x [10.sup.-6]
0.4         8.0019 x [10.sup.-3]   3.2466 x [10.sup.-3]   5.2858 x [10.sup.-5]   3.0749 x [10.sup.-6]   4.3748 x [10.sup.-6]
0.5         9.5992 x [10.sup.-3]   6.9851 x [10.sup.-3]   5.5614 x [10.sup.-5]   1.4554 x [10.sup.-8]   4.5939 x [10.sup.-6]
0.6         1.0930 x [10.sup.-2]   3.2466 x [10.sup.-3]   5.2858 x [10.sup.-5]   3.0464 x [10.sup.-6]   4.4298 x [10.sup.-6]
0.7         1.1933 x [10.sup.-2]   5.8878 x [10.sup.-3]   4.4899 x [10.sup.-5]   5.1946 x [10.sup.-6]   3.8096 x [10.sup.-6]
0.8         1.2378 x [10.sup.-2]   1.4675 x [10.sup.-2]   3.2597 x [10.sup.-5]   5.6787 x [10.sup.-6]   2.8278 x [10.sup.-6]
0.9         1.0873 x [10.sup.-2]   1.5217 x [10.sup.-2]   1.7179 x [10.sup.-5]   4.0135 x [10.sup.-6]   1.5841 x [10.sup.-6]
1           0                      0                      0                      0                      0

Table 9: Convergence data of Troesch's and Bratu's problems.

Case             [bar.Gen]   [bar.Fit]    [bar.[absolute value of Err]]   [bar.[absolute value of Res]]

[lambda] = 0.5     1051      0.99999503   6.38988847 x [10.sup.-8]        5.93727586 x [10.sup.-7]
[lambda] = 1       1231      0.99992419   1.30753614 x [10.sup.-6]        3.45671315 x [10.sup.-4]
[mu] = 1           1134      0.99999255   2.22018533 x [10.sup.-7]        2.13495920 x [10.sup.-6]
[mu] = 2           1313      0.99990899   3.33956728 x [10.sup.-6]        3.47339266 x [10.sup.-5]

Table 10: The convergence speed of the algorithm using different
initialization functions for Troesch's and Bratu's problems.

Troesch's problem   MNGF   mthf   Mixed

[lambda] = 0.5      1110   917    1051
[lambda] = 1        1369   1105   1231

Bratu's problem     MNGF   mthf   Mixed

[mu] = 1            1019   1220   1134
[mu] = 2            1202   1404   1313

Table 11: The effect of the vector norm on the convergence speed of
the algorithm for Troesch's and Bratu's problems.

Troesch's problem   [L.sub.1] norm   [L.sub.2] norm

[lambda] = 0.5           1107             1051
[lambda] = 1             1324             1231

Bratu's problem     [L.sub.1] norm   [L.sub.2] norm

[mu] = 1                 1298             1134
[mu] = 2                 1411             1313

Table 12: The influence of the number of nodes on the convergence
speed, the average fitness, and the corresponding errors of the
algorithm for Bratu's problem when [mu] = 1.

N    [bar.Gen]   [bar.Fit]    [bar.[absolute value of Err]]   [bar.[absolute value of Res]]

5       601      0.99999077   1.00934085 x [10.sup.-5]        8.99016147 x [10.sup.-4]
10     1134      0.99999255   2.22018533 x [10.sup.-7]        2.13495920 x [10.sup.-6]
20     2409      0.99999191   1.16620506 x [10.sup.-9]        9.58206567 x [10.sup.-8]

Table 13: The influence of selection schemes on the convergence
speed, the average fitness, and the corresponding errors for Bratu's
problem when [mu] = 2.

Selection method                  [bar.Gen]   [bar.Fit]    [bar.[absolute value of Err]]   [bar.[absolute value of Res]]

Rank-based                          1313      0.99990899   3.33956728 x [10.sup.-6]        3.47339266 x [10.sup.-5]
Tournament with replacement         1404      0.99998805   1.32758514 x [10.sup.-5]        0.10260139 x [10.sup.-4]
Tournament without replacement      1429      0.99999092   1.00934084 x [10.sup.-5]        0.11360136 x [10.sup.-4]
Roulette wheel                      1699      0.99999184   9.06686522 x [10.sup.-5]        0.10602135 x [10.sup.-4]
Stochastic universal                1880      0.99998117   2.09275009 x [10.sup.-4]        0.11740260 x [10.sup.-3]
Half biased                         2008      0.99998950   1.16620506 x [10.sup.-3]        0.13856138 x [10.sup.-2]
```
Research Article, Discrete Dynamics in Nature and Society, January 1, 2014.