# A memetic differential evolution algorithm based on dynamic preference for constrained optimization problems.

1. Introduction

In many science and engineering fields, optimization problems subject to various types of constraints frequently arise; these are called constrained optimization problems (COPs). Due to the presence of constraints, COPs are well known as a challenging task in optimization. Evolutionary algorithms (EAs), inspired by nature, are a class of stochastic, population-based optimization techniques that have been widely applied to global optimization problems. In recent decades, researchers have applied EAs to COPs and proposed a large number of constrained optimization EAs (COEAs) [2-4].

As is well known, the goal of COEAs is to find the optimal solution that satisfies all constraints. Constraint satisfaction is therefore the primary requirement, and the constraint-handling technique greatly affects the performance of COEAs. The most common constraint-handling methods are penalty function based methods [4, 5], which transform a COP into an unconstrained problem by adding a penalty term to the original objective function in order to penalize infeasible solutions. The main drawback of penalty function based methods is the need to preset an appropriate penalty factor, which greatly influences their performance. However, as indicated in , deciding an optimal penalty factor is a very difficult problem. To avoid the use of a penalty factor, Powell and Skolnick  suggested a fitness function which maps feasible solutions into the interval (-[infinity], 1) and infeasible solutions into the interval (1, +[infinity]). The method considers feasible solutions always superior to infeasible ones. Inspired by Powell and Skolnick , Deb  proposed three feasibility rules for binary tournament selection without introducing new parameters. The binary tournament selection based on the three feasibility rules is as follows: (1) any feasible solution is preferred to any infeasible one; (2) between two feasible solutions, the one with the better objective value is preferred; (3) between two infeasible solutions, the one with the smaller constraint violation is preferred. However, the feasibility rules always prefer feasible solutions to infeasible ones, so promising infeasible solutions, which may carry more important information than their feasible counterparts, have little opportunity to enter the next population. The diversity of the population therefore degrades, which makes the algorithm prone to becoming trapped in local optima.
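For illustration, Deb's three rules amount to a simple comparison routine. The following is a minimal sketch; the function name and the scalar encoding of solutions are illustrative, with f giving the objective value and G the total constraint violation (0 for a feasible solution):

```python
def deb_tournament(a, b, f, G):
    """Binary tournament under Deb's three feasibility rules.

    f(x): objective value; G(x): total constraint violation (0 if feasible).
    Returns the preferred of the two candidate solutions a and b.
    """
    Ga, Gb = G(a), G(b)
    if Ga == 0 and Gb == 0:           # rule 2: both feasible -> better objective wins
        return a if f(a) <= f(b) else b
    if Ga == 0:                       # rule 1: a feasible solution beats any infeasible one
        return a
    if Gb == 0:
        return b
    return a if Ga <= Gb else b       # rule 3: smaller constraint violation wins
```

Note that the routine never compares objective values between a feasible and an infeasible solution, which is exactly the behavior criticized above: promising infeasible solutions cannot win a tournament against any feasible one.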

In recent years, some researchers have suggested transforming a COP into a biobjective optimization problem based on the penalty function [7-9]. For the converted biobjective optimization problem, if Pareto dominance is used as the sole criterion for comparing individuals, then the objective function and the constraint violation function are regarded as equally significant. As a result, some solutions that are far from the feasible region but have small objective function values will be retained during the evolution. However, such solutions are of little help in searching for the optimal solution of the original problem, especially for problems whose optimal solutions lie inside the feasible region.

From the above analysis, it can be concluded that methods which consistently prefer feasible solutions to infeasible ones are questionable, and that methods which treat the objective function as equally important as constraint satisfaction are ineffective. Instead, methods that can balance the objective function and constraint satisfaction are likely to be effective. Nevertheless, how to balance the two during the evolution needs to be considered carefully.

Runarsson and Yao  proposed the stochastic ranking (SR) method, which uses a comparison probability [P.sub.f] to balance the objective function and the constraint violation. The SR based comparison prefers some good infeasible solutions with probability 1 - [P.sub.f], which not only gives promising infeasible solutions the opportunity to survive into the next population, but also enhances the diversity of the population. Wang et al.  proposed an adaptive tradeoff model (ATM) with an evolution strategy to solve COPs, which can adaptively adjust the proportion of infeasible solutions surviving into the next generation. Takahama and Sakai  transformed a COP into an unconstrained numerical optimization problem with a novel constraint-handling technique, the so-called [epsilon]-constrained method. The method relaxes the limit for considering a solution feasible, based on its sum of constraint violations, so that the objective function value can be used as a comparison criterion, and the extent of the relaxation decreases dynamically during the evolution.

As indicated in , besides the constraint-handling technique, the performance of COEAs depends on another crucial factor: the search mechanism of the EA. Popular search algorithms include evolution strategies (ES) [2, 9, 12], particle swarm optimization (PSO) , differential evolution (DE) [14-17], the electromagnetism-like mechanism algorithm , and so forth. Among these, DE is a recent and simple search engine. In the competition on constrained real-parameter optimization at the IEEE CEC 2006 special session, three of the top five algorithms used DE as the search engine, which demonstrates the efficiency of DE; DE has since attracted much attention in various practical applications. Wang and Cai  combined multiobjective optimization with DE to solve COPs, in which DE was used as the search mechanism and Pareto dominance was used to update the evolving population. Zhang et al.  proposed a dynamic stochastic selection scheme based on SR  and combined it with the multimember DE . Jia et al.  presented an improved ([mu] + [lambda]) DE for COPs, in which a multimember DE  is also used as the search engine and an improved version of ATM  is applied as the comparison criterion between individuals. In recent years, the combination of different DE variants with adaptive parameter settings has become popular in algorithm design [1, 23, 24]. In , a modified basic mutation strategy is combined with a new directed mutation scheme based on the weighted difference vector between the best and the worst individuals at a particular generation. Moreover, the scaling factor and crossover rate are set dynamically to balance global and local search. Huang et al.  also proposed a self-adaptive DE for COPs, in which the mutation strategies and the parameter settings are self-adapted during the evolution. Elsayed et al.  proposed an evolutionary framework that utilizes existing knowledge to make logical changes for better performance. In the proposed algorithm, 8 mutation schemes are tested on a suite of benchmark instances, and the performance of each scheme is analyzed. A self-adaptive multistrategy differential evolution algorithm, using the best few variants, is then presented. The algorithm divides the population into a number of subpopulations, where each subpopulation evolves with its own mutation and crossover operators, and the population size, scaling factor, and crossover rate adapt to the search progress. Furthermore, an SQP based local search is used to speed up convergence. DE or its mutation operator has also been combined with other algorithms to improve performance [11, 25]. In , Runarsson and Yao improved SR  by combining ES with the mutation operator adopted in DE, which results in a great improvement in solution quality. In , the authors proposed biogeography-based optimization (BBO) approaches for COPs, in which one version uses DE mutation operators instead of the conventional BBO-based operators. The experimental results show that the version with DE mutation operators outperforms the other BBO variants, which confirms the efficiency of DE operators.

In this paper, we propose a memetic differential evolution algorithm with dynamic preference (MDEDP) to solve COPs. In the proposed method, DE is used as the global search engine, and the simplex crossover (SPX) is incorporated to encourage local search in the promising region of the search space. For constraint handling, the COP is first transformed into a biobjective optimization problem whose two objectives are the original objective function and a constraint violation function constructed from the constraints. Then, for the converted biobjective problem, an achievement scalarizing function (ASF) based fitness function is presented to balance the objective function and the constraint violation during the evolution. The reference point and the weighting vector used in constructing the ASF are dynamically adapted to achieve a good balance between the objective and constraint violation functions.

The rest of this paper is organized as follows. The COP transformation and some related concepts are briefly introduced in Section 2. In Section 3, the DE algorithm and the simplex crossover operator are briefly presented. The novel memetic DE with dynamic preference for COPs is proposed in Section 4. Section 5 gives experimental results and comparisons with other state-of-the-art algorithms on 22 standard test problems. Finally, we summarize the paper in Section 6.

2. COPs and Some Related Concepts

2.1. Constrained Optimization Problem. In general, a constrained optimization problem can be stated as follows:

min f(x)

s.t. [g.sub.i](x) [less than or equal to] 0, i = 1, 2, ..., p,

[h.sub.j](x) = 0, j = p + 1, ..., q, (1)

where x = ([x.sub.1], [x.sub.2], ..., [x.sub.n]) [member of] S is the decision vector and S is an n-dimensional rectangular search space in [R.sup.n] defined by S = {x | [l.sub.i] [less than or equal to] [x.sub.i] [less than or equal to] [u.sub.i], i = 1, 2, ..., n}. f : S [right arrow] R is the objective function, [g.sub.i](x) [less than or equal to] 0 (i = 1, 2, ..., p) are inequality constraints, and [h.sub.j](x) = 0 (j = p + 1, ..., q) are equality constraints. The feasible region [OMEGA] [subset] S is defined as [OMEGA] = {x [member of] S | [g.sub.j](x) [less than or equal to] 0, j = 1, ..., p, [h.sub.j](x) = 0, j = p + 1, ..., q}. Solutions in [OMEGA] are feasible solutions.

2.2. Problem Transformation and Related Concepts. Penalty function methods are a common constraint-handling technique for solving COPs; however, they are sensitive to the penalty factor, and tuning the penalty parameters is very difficult . Reformulating a constrained optimization problem as a biobjective problem is a recent and effective constraint-handling method [7-9]. In general, the degree of violation of the ith constraint at x is defined as

[G.sub.i](x) = max {0, [g.sub.i](x)}, i = 1, ..., p; [G.sub.i](x) = max {0, |[h.sub.i](x)| - [delta]}, i = p + 1, ..., q, (2)

where [delta] is a small positive tolerance value for the equality constraints. Then G(x) = [[summation].sup.q.sub.i=1] [G.sub.i](x) reflects the degree of violation of all constraints at x and also measures the distance of x from the feasible region. Obviously, G(x) [greater than or equal to] 0.
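The violation function of Eq. (2) is straightforward to compute. The following is a minimal sketch; the function name and the representation of the constraints as lists of callables are illustrative:

```python
def total_violation(x, ineq, eq, delta=1e-4):
    """Total constraint violation G(x) per Eq. (2).

    ineq: list of functions g_i with g_i(x) <= 0 required for feasibility.
    eq:   list of functions h_j with h_j(x) = 0 required for feasibility,
          relaxed by the tolerance delta.
    """
    G = sum(max(0.0, g(x)) for g in ineq)             # inequality violations
    G += sum(max(0.0, abs(h(x)) - delta) for h in eq)  # relaxed equality violations
    return G
```

A solution x is treated as feasible exactly when `total_violation(x, ...)` returns 0.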

Based on the constraint violation function G(x), COP is converted into a biobjective optimization problem, which minimizes the original objective function f(x) and the constraint violation function G(x) simultaneously. The converted biobjective problem is as follows:

min (f(x), G(x)) s.t. x [member of] S. (3)

For simplicity, the two objectives of (3) are denoted by [f.sub.1](x) = f(x), [f.sub.2](x) = G(x).

Though the COP is converted into the biobjective optimization problem (3), the two problems are essentially different (see Figure 1). The biobjective problem (3) aims to find a set of representative solutions uniformly distributed on the Pareto front (PF), while COP (1) aims to find a single solution which satisfies all constraints (G(x) = 0) and also minimizes f(x), that is, the point in the intersection of the PF and the feasible region.

3. Differential Evolution and Simplex Crossover Operator

3.1. Differential Evolution . Differential evolution, proposed by Storn and Price , is a parallel direct search method that follows the general procedure of an EA. Owing to its simplicity and efficiency, DE has been widely employed to solve constrained optimization problems [1, 14-16, 19-21]. In DE, an initial population P of NP individuals is generated randomly in the decision space. For each target vector [x.sub.i] [member of] P (i = 1, 2, ..., NP) at each generation, DE then applies the three operations below in turn. There are more than 10 variants of DE schemes in the literature; in this paper, the most commonly used scheme, "DE/rand/1/bin", is employed in the search process, where "rand" denotes that the base vector is randomly selected from the current population, "1" specifies the number of difference vectors, and "bin" denotes the binomial crossover scheme.

Mutation. A trial vector [v.sub.i] = ([v.sub.i,1], [v.sub.i,2], ..., [v.sub.i,n]) for each target vector [x.sub.i] = ([x.sub.i,1], [x.sub.i,2], ..., [x.sub.i,n]) is generated from the current parent population via the "rand/1" mutation strategy:

[v.sub.i] = [x.sub.r1] + F x ([x.sub.r2] - [x.sub.r3]), (4)

where r1, r2, and r3 are mutually distinct integers randomly selected from {1, 2, ..., NP} \ {i}, and F [member of] [0, 2] is a scaling factor which controls the amplification of the difference vector ([x.sub.r2] - [x.sub.r3]).

Crossover. To increase diversity, the offspring vector [u.sub.i] = ([u.sub.i,1], [u.sub.i,2], ..., [u.sub.i,n]) is then generated by the binomial crossover between the target vector and the trial vector, in which

[u.sub.i,j] = [v.sub.i,j], if rand(0, 1) [less than or equal to] Cr or j = [j.sub.rand]; [u.sub.i,j] = [x.sub.i,j], otherwise, (5)

for j = 1, 2, ..., n, where Cr [member of] [0, 1] is the crossover probability and [j.sub.rand] [member of] {1, 2, ..., n} is a randomly chosen index, which ensures that the offspring [u.sub.i] differs from its parent [x.sub.i] in at least one component.

Selection. The offspring [u.sub.i] is compared with its parent [x.sub.i] using a greedy criterion, and the one with the smaller fitness value is selected into the next population.
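The mutation and crossover steps of Eqs. (4)-(5) can be sketched as follows. This is a minimal illustration generating one offspring for a single target vector; the function name and the list-of-lists population representation are illustrative:

```python
import random

def de_rand_1_bin(pop, i, F=0.85, Cr=0.9):
    """One DE/rand/1/bin offspring for target vector pop[i] (Eqs. (4)-(5))."""
    n = len(pop[i])
    # pick three mutually distinct indices, all different from i
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    # mutation: v = x_r1 + F * (x_r2 - x_r3)
    v = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(n)]
    # binomial crossover; j_rand forces at least one component from v
    j_rand = random.randrange(n)
    u = [v[k] if (random.random() <= Cr or k == j_rand) else pop[i][k]
         for k in range(n)]
    return u
```

The greedy selection step then simply keeps whichever of `pop[i]` and `u` has the smaller fitness.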

3.2. Simplex Crossover Operator (SPX) . Simplex crossover (SPX) is one of the most commonly used recombination operators in EAs; it generates offspring based on a uniform distribution. The search region of SPX adapts as the evolution proceeds, which gives SPX good exploration ability in the earlier stages and good exploitation ability in the later stages. Moreover, SPX is very simple and easy to implement.

In [R.sup.n], n + 1 mutually independent parent vectors [x.sub.i] (i = 1, 2, ..., n + 1) form a simplex. Offspring are generated in two steps: (1) expand the original simplex along each direction [x.sub.i] - o (i = 1, 2, ..., n + 1) by a factor of (1 + [epsilon]) ([epsilon] > 0) to form a new simplex, where o = [[summation].sup.n+1.sub.i=1] [x.sub.i]/(n + 1) is the center of the original simplex and the vertices of the new simplex are [y.sub.i] = (1 + [epsilon])([x.sub.i] - o) (i = 1, 2, ..., n + 1); and (2) choose a point uniformly from the new simplex as the offspring, that is, z = [k.sub.1][y.sub.1] + [k.sub.2][y.sub.2] + ... + [k.sub.n+1][y.sub.n+1] + o, where each [k.sub.i] (i = 1, 2, ..., n + 1) is a random number in [0, 1] and [k.sub.1] + [k.sub.2] + ... + [k.sub.n+1] = 1.

In general, SPX is specified as SPX-n-m-[epsilon], where n is the dimension of the search space, m [member of] [2, n + 1] is the number of parents constituting the simplex, and [epsilon] is a control parameter defining the expansion rate. In the above procedure, the number of parents m is set to n + 1, and the scheme is denoted by SPX-n-(n + 1)-[epsilon].
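The two-step procedure above can be sketched as follows. Note that the sketch normalizes a vector of random numbers to obtain the coefficients [k.sub.i] summing to 1; this is one simple way to satisfy the stated condition, not necessarily the sampling scheme of the original SPX reference:

```python
import random

def spx_offspring(parents, eps=3.0):
    """Sample one child from the simplex spanned by the parents,
    expanded about its centroid by a factor of (1 + eps)."""
    m = len(parents)                  # number of parents (m = n + 1 in Section 3.2)
    n = len(parents[0])
    # centroid o of the original simplex
    o = [sum(p[k] for p in parents) / m for k in range(n)]
    # expanded simplex vertices y_i = (1 + eps) * (x_i - o)
    y = [[(1 + eps) * (p[k] - o[k]) for k in range(n)] for p in parents]
    # random convex coefficients k_1 + ... + k_m = 1
    w = [random.random() for _ in range(m)]
    s = sum(w)
    w = [wi / s for wi in w]
    # child z = sum k_i * y_i + o
    return [sum(w[i] * y[i][k] for i in range(m)) + o[k] for k in range(n)]
```

When all parents coincide, the expanded simplex degenerates to a point and the child equals that point, which makes the operator's contraction behavior easy to check.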

4. Memetic DE Based on Dynamic Preference for Solving COPs

4.1. Novel Fitness Function Based on Dynamic Preference. Pareto dominance is the most commonly used rule for sorting individuals in multiobjective optimization. However, if two individuals do not dominate each other, the better of the two cannot be determined. In solving the converted biobjective problem (3), we intend to find the solution which not only satisfies the constraints (G(x) = 0) but also optimizes the original objective function f(x); therefore, problem (3) is essentially a biobjective problem with preference. Among the methods for solving multiobjective problems, the weighted metrics method  scalarizes the multiple objectives into a single objective via the weighted distance from individuals to a reference point. One such scalarized objective function is the achievement scalarizing function (ASF) , which expresses the preference between objectives via the weighting vector and the reference point. For general biobjective optimization problems, the ASF is defined as follows:

F(x) = max {[[omega].sub.1]([f.sub.1](x) - [[bar.z].sub.1]), [[omega].sub.2]([f.sub.2](x) - [[bar.z].sub.2])}, (6)

where [bar.z] = ([[bar.z].sub.1], [[bar.z].sub.2]) is a reference point (it may or may not lie inside the objective space), which determines the preferred region on the Pareto front (the curve between A and B on the PF), and [omega] = ([[omega].sub.1], [[omega].sub.2]), satisfying [[omega].sub.1], [[omega].sub.2] [greater than or equal to] 0 and [[omega].sub.1] + [[omega].sub.2] = 1, is a weighting vector, which points to a certain Pareto optimal point z' in the preferred region (see Figure 2). As Figure 2 shows, F(x) is the [omega]-weighted distance from individual x to the reference point [bar.z], and the values [[omega].sub.1] and [[omega].sub.2] determine the extent of the preference for the two objectives [f.sub.1] and [f.sub.2]. [[omega].sub.1] > [[omega].sub.2] means that the ASF (6) prefers Pareto optimal solutions with small [f.sub.1] between A and B; conversely, it prefers solutions with small [f.sub.2]. The advantage of the ASF is that an arbitrary (weakly) Pareto optimal solution can be obtained by moving the reference point [bar.z] alone. Moreover, the different (weakly) Pareto optimal solutions between A and B can be found with different weighting vectors .
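For the biobjective case, the ASF reduces to a one-line computation. The sketch below assumes the weighted Chebyshev form of Eq. (6); the function name is illustrative:

```python
def asf(f1, f2, z_ref, w):
    """Achievement scalarizing function of Eq. (6): the weighted Chebyshev
    distance from the point (f1, f2) to the reference point z_ref
    under the weighting vector w = (w1, w2)."""
    return max(w[0] * (f1 - z_ref[0]), w[1] * (f2 - z_ref[1]))
```

At the reference point itself the ASF equals 0, which is consistent with the later observation that the fitness attains its minimum at the best individual.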

For the converted biobjective problem, the evolution usually goes through three stages. In the first stage, there is no feasible solution in the population; therefore, individuals with small constraint violations should be selected with priority to move the population toward the feasible region. As the population evolves, some individuals enter the feasible region. In this second stage, feasible individuals are preferred, while some nondominated infeasible individuals with small constraint violations are also maintained to increase diversity. In the third stage, there are only feasible individuals in the population, and the feasible individuals with small objective function values are preferred. According to the characteristics of the ASF and the above analysis, in different stages of the evolution, a proper reference point [bar.z] and weighting vector [omega] should be chosen adaptively to construct the fitness function F(x) and realize the preference between objectives.

In the first stage, there are only infeasible solutions in the population; that is, [rho] = 0, where [rho] is the proportion of feasible solutions in the population. Constraint satisfaction is therefore more important than objective minimization when comparing individuals. However, if constraint satisfaction were the only criterion considered, dominated individuals with small constraint violations but large objective function values would also enter the next population with priority, and such individuals contribute little to finding the optimal solution. On the other hand, individuals with smaller objective function values and slightly larger constraint violations would be neglected. Furthermore, some algorithms adopt Pareto ranking, which puts the same importance on all objectives and assigns the same best fitness value to all Pareto optimal solutions in the population. But for some problems (e.g., problems whose optimal solution lies inside the feasible region), infeasible individuals far from the boundary of the feasible region contribute little to the search for the optimal solution. Thus, in this stage, small constraint violation should be guaranteed with priority, while the objective function value should be kept from growing too large. In the absence of feasible solutions, the nondominated infeasible solution with the smallest constraint violation (also called the best infeasible solution) is the best solution in the population, and it is chosen as the reference point [bar.z]. Moreover, since constraint satisfaction is the key issue in this stage, the weighting vector is set to [omega] = (0.1, 0.9). With this choice of reference point and weighting vector, the smaller the weighted distance from an individual to the reference point, the better the individual. Figure 3(a) illustrates the fitness values of the individuals in the population.

In the second stage, the population contains both feasible and infeasible solutions; that is, 0 < [rho] < 1. In this stage, the evolution should adjust the preference according to the proportion of feasible solutions in the current population. If the proportion is low, either the evolution is in an early phase of the search, or the feasible region is very small relative to the search space and feasible individuals are difficult to obtain; in that case, feasible individuals are preferred. If the proportion is high, some nondominated infeasible individuals with small constraint violations are more useful than some feasible individuals with large objective function values, especially for problems whose optimal solution is located exactly on or near the boundary of the feasible region. From the above analysis, the reference point [bar.z] is taken to be the best feasible individual, and the weighting vector is determined by the proportion [rho]. If [rho] is small, the evolution should prefer individuals with small values of G(x) (i.e., individuals which are feasible or have small constraint violations). Conversely, if [rho] is large, the evolution should prefer individuals with small values of f(x). Because the original COP seeks the optimal solution satisfying all constraints, constraint satisfaction is primary, so the preference for the objective function cannot be too large; the maximum weight on the objective function is set to 0.5, that is, [[omega].sup.max.sub.1] = 0.5. The weighting vector is then determined as [[omega].sub.1] = min{[rho], [[omega].sup.max.sub.1]} and [[omega].sub.2] = 1 - [[omega].sub.1]. Figures 3(b) and 3(c) illustrate the preference in this stage.

In the third stage, there are only feasible solutions in the population; that is, [rho] = 1. The comparison of individuals is then simply based on their objective function values. In fact, this criterion is the special case of formula (6) in which [bar.z] is the best feasible solution and the weighting vector is [omega] = (1, 0).
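The three-stage choice of the weighting vector described above can be summarized in a small helper. This is a sketch of the rules as stated in this section; the function name is illustrative:

```python
def dynamic_weights(rho, w1_max=0.5):
    """Weighting vector (w1, w2) as a function of the feasible proportion rho:
    (0.1, 0.9) when no solution is feasible, (1, 0) when all are feasible,
    and w1 = min(rho, w1_max), w2 = 1 - w1 in between."""
    if rho == 0.0:          # stage 1: only infeasible solutions
        return (0.1, 0.9)
    if rho == 1.0:          # stage 3: only feasible solutions
        return (1.0, 0.0)
    w1 = min(rho, w1_max)   # stage 2: preference follows the feasible proportion
    return (w1, 1.0 - w1)
```

The cap `w1_max = 0.5` keeps the weight on the objective from ever exceeding the weight on the constraint violation during the mixed stage, reflecting the primacy of constraint satisfaction.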

In comparisons among individuals, F(x) is used as the fitness function: the smaller the value of F(x), the better the individual x. The fitness F(x) is the weighted distance from an individual to the reference point, which is the best individual in the population. Obviously, F(x) is nonnegative and attains its minimum value 0 at the best individual; the elitist strategy is therefore adopted to prevent the population from degenerating. To avoid biases in F(x) caused by the different magnitudes of [f.sub.1](x) and [f.sub.2](x), both are normalized first. For simplicity, we still use [f.sub.1](x) and [f.sub.2](x) to denote the normalized objective functions:

[f.sub.1](x) = [f(x) - [f.sub.min]]/([f.sub.max] - [f.sub.min]),

[f.sub.2](x) = [G(x) - [G.sub.min]]/([G.sub.max] - [G.sub.min]), (7)

where [f.sub.max], [f.sub.min], [G.sub.max], and [G.sub.min] are the maximum and minimum values of f(x) and G(x), respectively. The minimum of G(x) is known to be 0, while the other three values are unknown; therefore, [f.sub.max], [f.sub.min], and [G.sub.max] are updated with the maximum and minimum values of f(x) and G(x) in the population. We do not use the minimum value of G(x) in the population for normalization because, if there were no feasible solution, the infeasible solution with the smallest constraint violation would be normalized to a feasible one.
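The normalization of Eq. (7), with [G.sub.min] fixed at 0 as just explained, can be sketched as follows (the function name is illustrative; degenerate populations where all values coincide are mapped to 0 as a simple guard):

```python
def normalize(fs, Gs):
    """Normalize objective values fs and violation values Gs over a population.

    G is normalized against G_min = 0, so an infeasible solution with the
    smallest violation in the population is never mapped to a violation of 0.
    """
    f_min, f_max = min(fs), max(fs)
    G_max = max(Gs)
    f_n = [(f - f_min) / (f_max - f_min) if f_max > f_min else 0.0 for f in fs]
    G_n = [G / G_max if G_max > 0 else 0.0 for G in Gs]
    return f_n, G_n
```

With this convention, an individual keeps a strictly positive normalized violation whenever its raw violation is positive, regardless of the rest of the population.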

4.2. Local Search Based on SPX. In this paper, DE/rand/1/bin is used as the global search engine; it has good exploration ability in the early phases and can explore the search space fully. However, its exploitation ability is weak in the early stage, which results in slow convergence. To balance exploration and exploitation during optimization, a local search (LS) based on the simplex crossover (SPX) is employed. For many practical COPs, especially those with equality constraints, the optimal solution is situated on or near the boundary of the feasible region, so searching from both sides of that boundary is effective. From Figure 1, one can conclude that the solutions in the lower-left part of the objective space, which are feasible solutions with small objective values or nondominated infeasible ones with small constraint violations, are the promising ones. The neighborhood formed by these promising solutions therefore deserves more computing resources, and the LS will be efficient when it focuses on the promising region. Besides the neighborhood on which the LS concentrates, the other issues influencing the LS are which solutions the LS engine works on and how many solutions participate in the LS. As concluded in , SPX with a large number of parents m has sampling biases that reflect the population distribution too strongly, so a medium number of parents is a good choice for overcoming the oversampling biases. Furthermore,  concluded that SPX works well with m = 3 on low dimensional functions and m = 4 on high dimensional functions. Based on these conclusions, in the proposed SPX based local search, the number of parents m is set to 3. The three parents selected to perform SPX in the LS are determined as illustrated in Figure 4.

In the first stage, there is no feasible solution in the population; that is, [rho] = 0. The nondominated solutions with small constraint violations are located in the lower-left part of the objective space; therefore, the three nondominated solutions with the smallest constraint violations are selected as the parents [x.sub.1], [x.sub.2], and [x.sub.3] to perform SPX (Figure 4(a)).

In the second stage, the population contains both feasible and infeasible solutions; that is, 0 < [rho] < 1. If there is only one feasible solution, it is selected as the first parent [x.sub.1], and the solutions nondominated with respect to it are then examined. If there is no such nondominated solution, the two solutions with the smallest constraint violations are selected as the second and third parents [x.sub.2] and [x.sub.3] (Figure 4(b)); otherwise, the nondominated solution with the smallest constraint violation is selected as the second parent [x.sub.2], and the infeasible solution with the smallest constraint violation in the population excluding [x.sub.2] is selected as the third parent [x.sub.3] (Figure 4(c)). If there is more than one feasible solution, the two feasible solutions with the smallest objective values are selected as the parents [x.sub.1] and [x.sub.2]. If no solution is nondominated with respect to the best feasible solution, the infeasible solution with the smallest constraint violation is selected as the third parent [x.sub.3] (Figure 4(d)); otherwise, the nondominated one with the smallest constraint violation becomes the third parent [x.sub.3] (Figure 4(e)).

In the last stage, all solutions in the population are feasible; that is, [rho] = 1. The three solutions with the smallest objective values are then selected as the parents [x.sub.1], [x.sub.2], and [x.sub.3].
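The common thread of the three stages is "best feasible solutions first, then least-violating infeasible ones." The following is a deliberately simplified sketch of the parent selection that collapses the nondominance checks of Figures 4(b)-4(e) into that single ordering; the function name is illustrative and it is not a faithful implementation of every case above:

```python
def select_spx_parents(pop, f, G):
    """Pick three parents for the SPX local search (simplified sketch).

    Feasible solutions are ranked by objective value f, infeasible ones by
    violation G; the top three of the concatenated ranking are returned.
    """
    feas = sorted((x for x in pop if G(x) == 0), key=f)
    infeas = sorted((x for x in pop if G(x) > 0), key=G)
    return (feas + infeas)[:3]
```

With fewer than three feasible solutions, the returned simplex mixes feasible and infeasible parents, so SPX samples offspring from both sides of the feasibility boundary, which is the intent of the full procedure.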

In the above LS procedure, the best feasible and infeasible solutions are selected to form a simplex that straddles the boundary between the feasible and infeasible regions of the search space. Furthermore, the neighborhood on which the LS engine works is adjusted dynamically with the evolution so that the search focuses on the promising region, and the strategy adaptively encourages exploitation from both sides of the boundary of the feasible region.

4.3. Memetic DE Based on Dynamic Preference for COPs. For biobjective problem (3) with preference, the proposed EA, denoted by MDEDP, is described as follows.

Algorithm 1. Consider the following.

Step 1 (initialization). Randomly generate the initial population P(0) with size NP in the search space S, and set k = 0.

Step 2 (global search). Apply DE/rand/1/bin as the global search engine to evolve the population P(k); denote the offspring set by [O.sub.1].

Step 3 (local search). Perform the local search on the promising region according to the details in Section 4.2. The SPX based LS generates [mu] offspring, which are randomly sampled from the expanded simplex. These offspring constitute the set [O.sub.2].

Step 4 (selection). Select the next population from P(k) [union] [O.sub.1] [union] [O.sub.2] according to the fitness function F(x) of Section 4.1.

Step 5 (stopping criterion). If stopping criterion is met (usually the maximum number of function evaluations is used), stop; else let k = k + 1, and go to Step 2.
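The overall loop of Algorithm 1 can be sketched as a generic skeleton. The component signatures below (evolve_de, local_search, fitness) are illustrative stand-ins for the operators of Sections 4.1-4.2, not the paper's exact interfaces:

```python
def mdedp(init_pop, evolve_de, local_search, fitness, max_evals):
    """Skeleton of Algorithm 1 (Steps 1-5).

    evolve_de(pop)    -> offspring set O1 (Step 2, global DE search)
    local_search(pop) -> offspring set O2 (Step 3, SPX based LS)
    fitness(x)        -> scalar F(x); smaller is better (Step 4)
    """
    pop, evals = list(init_pop), 0          # Step 1: initialization
    while evals < max_evals:                # Step 5: stopping criterion
        O1 = evolve_de(pop)                 # Step 2
        O2 = local_search(pop)              # Step 3
        candidates = pop + O1 + O2
        candidates.sort(key=fitness)        # Step 4: keep the NP fittest
        pop = candidates[:len(pop)]
        evals += len(O1) + len(O2)
    return pop
```

Because selection draws from the union P(k) [union] [O.sub.1] [union] [O.sub.2], the best individual is always retained, which realizes the elitist strategy mentioned in Section 4.1.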

5. Simulation Results

5.1. Test Functions and Parameter Settings. In this section, we apply the proposed MDEDP to 22 standard test functions from [1, 2, 24]. The parameters used in the simulations are given below. The population size is NP = 200; the scaling factor F in the DE mutation and the crossover probability CR are uniform random numbers in [0.8, 0.9] and [0.9, 0.95], respectively. In the SPX based LS, the expansion ratio is [epsilon] = 3, and the number of offspring is [mu] = 10. The maximum number of function evaluations (FEs) is set to 240000. In the constraint-handling technique, the equality constraints are converted into inequality constraints with a tolerance value [delta]. In the experiments, the tolerance value [delta] is set dynamically as proposed in  and used in [9, 12]:

[delta](k + 1) = max {[beta] x [delta](k), [[delta].sub.min]}, (8)

where [beta] [member of] (0, 1) is the proportional decreasing factor and [[delta].sub.min] is the final tolerance value.

The initial value [delta](0) is set to 3. The initial value and the proportional decreasing factor are the same as those in . For each test function, 30 independent trials are executed, and the statistical results over the 30 trials are recorded, including the "best," "median," "mean," and "worst" of the obtained optimal solutions and the standard deviations (Std.).

5.2. Experimental Results and Comparison. The experimental results are compared with those of the state-of-the-art algorithms Var2 , SAMSDE-2 , SR , ATMES , and CBBO-DM . Among the comparison algorithms, Var2  uses DE/rand/3/bin as the search engine and the feasibility rules  as the comparison criterion; SAMSDE-2 adaptively combines two DE mutation schemes as the search engine and also uses the feasibility rules  as the comparison criterion; SR employs ES as the search engine and uses stochastic ranking for comparisons; ATMES uses ES as the search engine and an adaptive tradeoff fitness as the selection operator; and CBBO-DM combines BBO with DE mutation operators. Note that our MDEDP, as well as ATMES, Var2 , and SAMSDE-2 , used 240000 FEs, while SR and CBBO-DM used 350000 FEs. The tolerance [delta] for the equality constraints is 1.0E - 04 for Var2  and SAMSDE-2  and 1.0E - 03 for CBBO-DM, while for MDEDP and ATMES the tolerance is decreased dynamically: for MDEDP, from the initial value 3 to 1.0E - 04 eventually, and for ATMES, from the initial value 3 to 5.0E - 06 in the end.

Table 1 gives the results of the six algorithms (MDEDP, Var2, SAMSDE-2, SR, ATMES, and CBBO-DM) for the first 13 test instances, since SR, ATMES, and CBBO-DM solved only the first 13 problems. The results and the comparisons with Var2 and SAMSDE-2 for the remaining nine test instances are given in Table 2.

In order to illustrate the significance of the differences, approximate two-sample t-tests between the compared algorithms and our MDEDP have been performed:

t_0 = \frac{\bar{y}_1 - \bar{y}_2}{\sqrt{S_1^2/n_1 + S_2^2/n_2}}, (9)

where [[bar.y].sub.1] and [[bar.y].sub.2] denote the mean values, [S.sub.1] and [S.sub.2] the standard deviations of the results obtained by the two algorithms, and [n.sub.1] and [n.sub.2] the numbers of independent runs of the two algorithms, respectively. The degrees of freedom are calculated as follows:

f = \frac{\left(S_1^2/n_1 + S_2^2/n_2\right)^2}{\dfrac{(S_1^2/n_1)^2}{n_1 - 1} + \dfrac{(S_2^2/n_2)^2}{n_2 - 1}}. (10)
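Equations (9) and (10) can be computed directly from the reported means, standard deviations, and run counts; the function name `welch_t` below is ours, not the paper's.

```python
import math

def welch_t(mean1, std1, n1, mean2, std2, n2):
    """Approximate two-sample t-test: returns (t0, f).

    t0 is the statistic of Eq. (9); f is the Welch-Satterthwaite
    degrees of freedom of Eq. (10).
    """
    v1, v2 = std1**2 / n1, std2**2 / n2          # per-sample variance terms
    t0 = (mean1 - mean2) / math.sqrt(v1 + v2)    # Eq. (9)
    f = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))  # Eq. (10)
    return t0, f
```

For example, feeding in the "mean" and "Std." rows of Table 1 for a pair of algorithms (with n1 = n2 = 30 runs) reproduces statistics of the kind listed in Tables 3 and 4, with the degrees of freedom typically rounded to the nearest integer.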

For the first 13 test instances, the t-tests are performed between our MDEDP and the three algorithms SR, ATMES, and CBBO-DM on six instances (g02, g05, g07, g09, g10, and g13). The t-test is not performed on the other instances since the optimal solutions are found by at least three algorithms in all runs. The data used for the t-tests are taken from the related references, and the results are shown in Table 3. In the table, "NA" means no data are available in the related references, "--" means both approaches obtained the optimal solutions in all runs on the given function, and "NF" means the algorithm found no feasible solutions. Superscript "a" means the difference between MDEDP and the compared algorithm, with the corresponding degrees of freedom, is significant at [alpha] = 0.05 with MDEDP being better than the compared algorithm, while superscript "b" means that MDEDP is worse than the compared algorithm.

As Table 3 shows, most of the differences are significant, except the difference between ATMES and MDEDP on problem g13, which illustrates that our MDEDP is competitive with the other algorithms on the most widely used 13 benchmark problems.

For all 22 test instances, the t-tests are performed between our MDEDP and Var2 and SAMSDE-2 on 12 instances (g02, g05, g07, g09, g10, g13, g14, g17, g18, g19, g21, and g23). The t-test is not performed on the other instances since the optimal solutions are found by at least two algorithms in all runs. The results are shown in Table 4, from which it is clear that most of the differences are significant, except the differences between either of the compared algorithms and MDEDP on problem g05. Therefore, it is concluded that our MDEDP is competitive with these two state-of-the-art algorithms on the 22 test instances.

In summary, compared with the other five algorithms for constrained optimization problems, MDEDP is very competitive.

In the proposed MDEDP, SPX based local search is incorporated to speed up the convergence of the algorithm. To test the efficiency of the SPX based local search, comparisons are performed on the most widely used 13 test instances between the proposed algorithm with local search and without local search (the variant without local search is denoted DEDP). The parameter settings are the same as those of MDEDP except that the SPX based local search is discarded. It should be mentioned that DEDP also performed 240000 FEs, the same as MDEDP. The results for DEDP are shown in the last column of Table 1. From Table 1, it is clear that DEDP finds the optimal solutions of nine of the 13 test instances (g03, g04, g05, g06, g08, g09, g11, g12, and g13), which shows that the DE search engine is powerful and that the proposed fitness function based on ASF achieves a good balance between the objective function and the constraint violation. However, DEDP performs poorly on the other four test instances, which involve difficulties such as high-dimensional variables and many local optima. MDEDP performs clearly better than DEDP, which illustrates that the SPX based local search obviously improves the efficiency of DEDP.
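The SPX operator used in the local search (expanding ratio [epsilon] = 3, [mu] = 10 offspring) can be sketched as below. This follows the standard simplex crossover of Tsutsui et al.: the parent simplex is expanded about its centroid by [epsilon], and each offspring is a random affine combination of the expanded vertices. It is a sketch of the operator, not the paper's exact implementation.

```python
import random

def spx_offspring(parents, epsilon=3.0, mu=10, rng=random):
    """Simplex crossover (SPX): generate mu offspring from m parent
    vectors (lists of equal length), with expanding ratio epsilon."""
    m, n = len(parents), len(parents[0])
    centroid = [sum(p[j] for p in parents) / m for j in range(n)]
    # expand the simplex about its centroid by the ratio epsilon
    y = [[centroid[j] + epsilon * (p[j] - centroid[j]) for j in range(n)]
         for p in parents]
    offspring = []
    for _ in range(mu):
        c = [0.0] * n
        # recursive random combination of the expanded vertices
        for i in range(1, m):
            r = rng.random() ** (1.0 / i)
            c = [r * (y[i - 1][j] - y[i][j] + c[j]) for j in range(n)]
        offspring.append([y[m - 1][j] + c[j] for j in range(n)])
    return offspring
```

Each offspring is an affine combination of the expanded vertices, so when all parents coincide every offspring equals that common point; otherwise the offspring sample the [epsilon]-expanded simplex around the parents.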

5.3. Effect of the Weighting Vector. In this subsection, the effect of the weighting vector, which reflects the preference, is investigated through experiments on the most widely used 13 test instances.

In stage one, that is, [rho] = 0, there is no feasible solution in the population, and the weighting vector [omega] is set to (0.1, 0.9). The large component [[omega].sub.2] indicates that, in this stage, the search is biased much more toward constraint satisfaction. Meanwhile, [[omega].sub.1] is not set to zero, which means that the objective function is not ignored entirely.

In further experiments, the weighting vector is set to (0.2, 0.8), (0.3, 0.7), (0.4, 0.6), and (0.5, 0.5), and the results are shown in Table 5. The results cover only 10 of the most widely used 13 test functions: g02, g04, and g12 are excluded since their feasible regions are relatively large, so the population seldom (or never) experiences stage one and the value of [omega] hardly affects the evolution. For convenient comparison, the results with [omega] = (0.1, 0.9) are also shown in Table 5. From Table 5, it can be seen that, with the four different weighting vectors, the proposed algorithm consistently converges to the global optimum in 30 independent runs for the 10 test functions, all of which experience stage one at least once. Therefore, the proposed algorithm is not sensitive to the preset parameter vector [omega] in stage one. The results and the above analysis demonstrate that MDEDP is robust.

In stage two, that is, 0 < [rho] < 1, the population contains feasible and infeasible solutions at the same time. As the evolution proceeds, the feasible solution proportion increases accordingly. In this stage, the parameter [[omega].sup.max.sub.1], the upper bound of the first component of [omega], is used to limit the bias toward the objective function during evolution. In the numerical experiments, [[omega].sup.max.sub.1] = 0.5. This upper bound means that the preference for the objective function is at most equal to that for constraint satisfaction, which is the primary goal. Further experiments are performed on all test functions with different upper bound values: 0.4, 0.45, 0.6, and 0.7. The upper bounds 0.4 and 0.45 mean that the preference for the objective function is slightly less than that for constraint satisfaction when there are more feasible solutions in the population, while the upper bounds 0.6 and 0.7 mean that it is slightly greater. The experimental results are shown in Table 6. It can be seen from Table 6 that MDEDP obtains similar results under the different upper bounds on the preference for the objective function. From the final results for g02, the algorithm is most robust with the upper bound 0.7; this is because almost the whole search space is feasible, so a search biased toward the objective function helps to find the optimal solution, which coincides with a larger first component of the weighting vector. The obtained results indicate that the different upper bounds have little effect on the efficiency and robustness of the proposed algorithm.
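The two-stage preference rule described in this subsection can be summarized in code. The stage-one vector (0.1, 0.9) and the cap [[omega].sup.max.sub.1] = 0.5 are taken from the text, while the linear growth of [[omega].sub.1] with [rho] in stage two is an illustrative assumption; the paper's exact stage-two update is not reproduced here.

```python
def weighting_vector(rho, w1_max=0.5):
    """Return (w1, w2) for the feasible proportion rho in [0, 1].

    Stage one (rho == 0): fixed bias toward constraint satisfaction,
    omega = (0.1, 0.9). Stage two (rho > 0): w1 grows with rho but is
    capped at w1_max; the linear growth here is an illustrative
    assumption, not the paper's exact rule.
    """
    if rho == 0.0:
        w1 = 0.1
    else:
        w1 = min(0.1 + rho * (w1_max - 0.1), w1_max)
    return w1, 1.0 - w1
```

With the default cap, the preference for the objective function never exceeds that for constraint satisfaction, matching the setting [[omega].sup.max.sub.1] = 0.5 used in the main experiments.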

From the above experimental results, we conclude that the proposed MDEDP robustly finds the optimal solutions of the test functions under the different preset parameters.

6. Conclusion

The constrained optimization problem is first converted into a biobjective optimization problem and then solved by a newly designed memetic differential evolution algorithm with dynamic preference. DE is used as the search engine in the global search, guided by a novel fitness function based on the ASF used in multiobjective optimization. The ASF-based fitness function dynamically adjusts the preference between the two objectives through an adaptive reference point and weighting vector. In the local search, SPX is used as the search mechanism and concentrates on the neighborhood formed by the best feasible and infeasible solutions, which drives the algorithm toward the promising region in objective space and accelerates convergence. Experimental results and comparisons with state-of-the-art algorithms demonstrate that the proposed algorithm is efficient on the standard test problems. Further experiments with different preset parameters also show the robustness of the proposed algorithm.

http://dx.doi.org/10.1155/2014/606019

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (no. 61272119).

References

[1] A. W. Mohamed and H. Z. Sabry, "Constrained optimization based on modified differential evolution algorithm," Information Sciences, vol. 194, pp. 171-208, 2012.

[2] T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284-294, 2000.

[3] K. Deb, "An efficient constraint handling method for genetic algorithms," Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2-4, pp. 311-338, 2000.

[4] R. Farmani and J. A. Wright, "Self-adaptive fitness formulation for constrained optimization," IEEE Transactions on Evolutionary Computation, vol. 7, no. 5, pp. 445-455, 2003.

[5] B. Tessema and G. G. Yen, "An adaptive penalty formulation for constrained evolutionary optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 39, no. 3, pp. 565-578, 2009.

[6] D. Powell and M. M. Skolnick, "Using genetic algorithms in engineering design optimization with nonlinear constraints," in Proceedings of the 5th International Conference on Genetic Algorithms (ICGA '93), pp. 424-431, San Mateo, Calif, USA, 1993.

[7] Y.-R. Zhou, Y.-X. Li, Y. Wang, and L.-S. Kang, "Pareto strength evolutionary algorithm for constrained optimization," Journal of Software, vol. 14, no. 7, pp. 1243-1249, 2003.

[8] Z. Cai and Y. Wang, "A multiobjective optimization-based evolutionary algorithm for constrained optimization," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 658-675, 2006.

[9] Y. Wang, Z. Cai, Y. Zhou, and W. Zeng, "An adaptive tradeoff model for constrained evolutionary optimization," IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 80-92, 2008.

[10] T. Takahama and S. Sakai, "Constrained optimization by the [epsilon] constrained differential evolution with gradient-based mutation and feasible elites," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 1-8, July 2006.

[11] T. P. Runarsson and X. Yao, "Search biases in constrained evolutionary optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 35, no. 2, pp. 233-243, 2005.

[12] E. Mezura-Montes and C. A. Coello Coello, "A simple multimembered evolution strategy to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1-17, 2005.

[13] C.-L. Sun, J.-C. Zeng, and J.-S. Pan, "An improved vector particle swarm optimization for constrained optimization problems," Information Sciences, vol. 181, no. 6, pp. 1153-1163, 2011.

[14] A. Ghosh, S. Das, A. Chowdhury, and R. Giri, "An improved differential evolution algorithm with fitness-based adaptation of the control parameters," Information Sciences, vol. 181, no. 18, pp. 3749-3765, 2011.

[15] W. Gong, A. Fialho, Z. Cai, and H. Li, "Adaptive strategy selection in differential evolution for numerical optimization: an empirical study," Information Sciences, vol. 181, no. 24, pp. 5364-5386, 2011.

[16] M. Weber, F. Neri, and V. Tirronen, "A study on scale factor in distributed differential evolution," Information Sciences, vol. 181, no. 12, pp. 2488-2511, 2011.

[17] J. Lampinen, "A constraint handling approach for the differential evolution algorithm," in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 1468-1473, IEEE Press, 2002.

[18] C. Zhang, X. Li, L. Gao, and Q. Wu, "An improved electromagnetism-like mechanism algorithm for constrained optimization," Expert Systems with Applications, vol. 40, no. 14, pp. 5621-5634, 2013.

[19] Y. Wang and Z. Cai, "Combining multiobjective optimization with differential evolution to solve constrained optimization problems," IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 117-134, 2012.

[20] M. Zhang, W. Luo, and X. Wang, "Differential evolution with dynamic stochastic selection for constrained optimization," Information Sciences, vol. 178, no. 15, pp. 3043-3074, 2008.

[21] E. Mezura-Montes, J. Velazquez-Reyes, and C. A. Coello Coello, "Promising infeasibility and multiple offspring incorporated to differential evolution for constrained optimization," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 225-232, June 2005.

[22] G. Jia, Y. Wang, Z. Cai, and Y. Jin, "An improved ([mu] + [lambda])-constrained differential evolution for constrained optimization," Information Sciences, vol. 222, pp. 302-322, 2013.

[23] V. L. Huang, A. K. Qin, and P. N. Suganthan, "Self-adaptive differential evolution algorithm for constrained real-parameter optimization," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 17-24, July 2006.

[24] S. M. Elsayed, R. A. Sarker, and D. L. Essam, "On an evolutionary approach for constrained optimization problem solving," Applied Soft Computing, vol. 12, no. 10, pp. 3208-3227, 2012.

[25] I. Boussaid, A. Chatterjee, P. Siarry, and M. Ahmed-Nacer, "Biogeography-based optimization for constrained optimization problems," Computers & Operations Research, vol. 39, no. 12, pp. 3293-3304, 2012.

[26] R. Storn and K. Price, "Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces," Tech. Rep. TR-95-12, International Computer Science Institute, Berkeley, Calif, USA, 1995.

[27] S. Tsutsui, M. Yamamura, and T. Higuchi, "Multi-parent recombination with simplex crossover in real-coded genetic algorithms," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '99), pp. 657-664, Morgan Kaufmann, 1999.

[28] K. Miettinen, Nonlinear Multiobjective Optimization, vol. 12 of International Series in Operations Research & Management Science, Kluwer Academic Publishers, Boston, Mass, USA, 1999.

[29] S. Ben Hamida and M. Schoenauer, "ASCHEA: new results using adaptive segregational constraint handling," in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 884-889, IEEE Press, 2002.

Ning Dong (1) and Yuping Wang (2)

(1) School of Mathematics and Statistics, Xidian University, Xi'an 710071, China

(2) School of Computer Science and Technology, Xidian University, Xi'an 710071, China

Correspondence should be addressed to Ning Dong; dongning@snnu.edu.cn

Received 9 February 2014; Revised 8 May 2014; Accepted 14 May 2014; Published 4 June 2014

```
TABLE 1: Statistical results obtained by MDEDP over 30 independent
runs and the comparison with Var2, SAMSDE-2, SR, ATMES, and CBBO-DM.
# marks a value shown in boldface in the original table, indicating
that the global optimum was obtained. NA means not available in the
related reference.

fcn/opt.           Status       MDEDP          Var2 

Best       -15.000#        -15.0000#
Median      -15.000#        -15.0000#
g01/-15.0000        Mean       -15.000#        -15.0000#
Worst       -15.000#        -15.0000#
Std.          0                0
Best      -0.803616        -0.803616
Median     -0.763716        -0.785259
g02/-0.803619       Mean      -0.757713        -0.779414
Worst      -0.616138        -0.729905
Std.      1.9E - 02        1.9E - 02
Best       -1.0005#         -1.0005#
Median      -1.0005#         -1.0005#
g03/-1.0005         Mean       -1.0005#         -1.0005#
Worst       -1.0005#         -1.0005#
Std.      2.9E - 07        1.3E - 05
Best    -30665.53867#    -30665.53867#
g04/-30665.53867   Median   -30665.53867#    -30665.53867#
Mean    -30665.53867#     -30662.69876
Worst    -30665.53867#     -30634.93399
Std.      1.4E - 11        8.7E + 00
Best     5126.49671#       5126.49674
Median    5126.49671#       5126.98662
g05/5126.49671      Mean     5126.49671#       5133.70682
Worst     5126.49671#       5217.94796
Std.      3.0E - 12        2.0E + 01
Best      -6961.814#      -6961.83188#
Median     -6961.814#      -6961.80882
g06/-6961.814       Mean      -6961.814#      -6937.41732
Worst      -6961.814#      -6804.36991
Std.      1.8E - 12        4.8E + 01
Best       24.306#          24.3069
Median      24.306#          24.3205
g07/24.306          Mean       24.306#          24.3289
Worst       24.306#          24.4085
Std.      4.6E - 06        2.5E - 02
Best      -0.095825#       -0.095825#
Median     -0.095825#       -0.095825#
g08/-0.095825       Mean      -0.095825#       -0.095825#
Worst      -0.095825#       -0.095825#
Std.      8.4E - 17            0
Best       680.630#         680.630#
Median      680.630#         680.633
g09/680.630         Mean       680.630#         680.634
Worst       680.630#         680.646
Std.      3.6E - 10        4.0E - 03
Best       7049.249         7049.570
Median      7049.257         7071.060
g10/7049.248        Mean       7049.258         7105.844
Worst       7049.311         7347.264
Std.      1.1E - 02        7.3E + 01
Best       0.7499#          0.7499#
Median      0.7499#          0.7499#
g11/0.7499          Mean       0.7499#          0.7499#
Worst       0.7499#          0.7499#
Std.      3.4E - 16            0
Best         -1#            -1.0000#
Median        -1#            -1.0000#
g12/-1.000          Mean         -1#            -1.0000#
Worst         -1#            -1.0000#
Std.          0                0
Best      0.0539415#       0.0539452
Median     0.0539415#       0.0546893
g13/0.0539415       Mean      0.0539415#       0.0602843
Worst      0.0539415#       0.0828200
Std.      1.3E - 12        9.9E - 03

fcn/opt.           Status   SAMSDE-2        SR 

Best      -15.0000#        -15.000#
Median     -15.0000#        -15.000#
g01/-15.0000        Mean      -15.0000#        -15.000#
Worst      -15.0000#        -15.000#
Std.          0           0.0E + 00
Best      -0.803619#      -0.803515
Median     -0.783053       -0.785800
g02/-0.803619       Mean      -0.774917       -0.781975
Worst      -0.729398       -0.726288
Std.       2.5E-02        2.0E - 02
Best       -1.0005#         -1.000
Median      -1.0005#         -1.000
g03/-1.0005         Mean       -1.0005#         -1.000
Worst       -1.0005#         -1.000
Std.      8.2E - 06       1.9E - 04
Best    -30665.53867#    -30665.539#
g04/-30665.53867   Median   -30665.53867#    -30665.539#
Mean    -30665.53867#    -30665.539#
Worst    -30665.53867#    -30665.539#
Std.          0           2.0E - 05
Best     5126.49671#       5126.497
Median     5126.49674       5127.372
g05/5126.49671      Mean      5126.63543       5128.881
Worst      5128.80623       5142.472
Std.      5.0E - 01       3.5E + 00
Best     -6961.83188#     -6961.814#
Median    -6961.83188#     -6961.814#
g06/-6961.814       Mean     -6961.83188#     -6875.940
Worst     -6961.83188#     -6350.262
Std.          0           1.6E + 02
Best       24.3069          24.307
Median      24.3287          24.357
g07/24.306          Mean       24.3330          24.374
Worst       24.3881          24.642
Std.      2.5E - 02       6.6E - 02
Best      -0.095825#      -0.095825#
Median     -0.095825#      -0.095825#
g08/-0.095825       Mean      -0.095825#      -0.095825#
Worst      -0.095825#      -0.095825#
Std.          0           2.6E - 17
Best       680.630#        680.630#
Median      680.631         680.641
g09/680.630         Mean       680.631         680.656
Worst       680.632         680.763
Std.      6.8E - 04       3.4E - 02
Best       7049.378        7054.316
Median      7105.992        7372.613
g10/7049.248        Mean       7112.012        7559.192
Worst       7191.295        8835.655
Std.      7.0E + 01       5.3E + 02
Best       0.7499#          0.750#
Median      0.7499#          0.750#
g11/0.7499          Mean       0.7499#          0.750#
Worst       0.7499#          0.750#
Std.          0           8.0E - 05
Best       -1.0000#       -1.000000#
Median      -1.0000#       -1.000000#
g12/-1.000          Mean       -1.0000#       -1.000000#
Worst       -1.0000#       -1.000000#
Std.          0           0.0E + 00
Best      0.0539415#       0.053957
Median     0.0539415#       0.057006
g13/0.0539415       Mean      0.0539417        0.067543
Worst      0.0539428        0.216915
Std.      3.0E - 07       3.1E - 02

fcn/opt.           Status   ATMES                CBBO-DM

Best      -15.000#                 -15#
Median     -15.000#                  NA
g01/-15.0000        Mean      -15.000#                 -15#
Worst      -15.000#                 -15#
Std.     1.6E - 14               8.2E - 14
Best     -0.803388               -0.803557
Median    -0.792420                  NA
g02/-0.803619       Mean     -0.790148               -0.802774
Worst     -0.756986               -0.792576
Std.     1.3E - 02               2.7E - 03
Best       -1.000                 -1.000
Median      -1.000                   NA
g03/-1.0005         Mean       -1.000                 -1.000
Worst       -1.000                 -1.000
Std.     5.9E - 05               6.0E - 16
Best    -30665.539#             -30665.539#
g04/-30665.53867   Median   -30665.539#                 NA
Mean    -30665.539#             -30665.539#
Worst    -30665.539#             -30665.539#
Std.     4.6E - 13               1.7E - 11
Best      5126.498               5126.498
Median     5126.776                  NA
g05/5126.49671      Mean      5127.648               5126.498
Worst      5135.256               5126.498
Std.     1.8E + 00               2.2E - 04
Best     -6961.814#             -6961.814#
Median    -6961.814#                 NA
g06/-6961.814       Mean     -6961.814#             -6961.814#
Worst     -6961.814#             -6961.814#
Std.     4.6E - 12               4.6E - 12
Best      24.306#                 24.326
Median      24.313                   NA
g07/24.306          Mean       24.316                 24.345
Worst       24.359                 24.378
Std.     1.1E - 02               1.3E - 02
Best     -0.095825#             -0.095825#
Median    -0.095825#                 NA
g08/-0.095825       Mean     -0.095825#             -0.095825#
Worst     -0.095825#             -0.095825#
Std.     2.8E - 17               2.8E - 17
Best      680.630#               680.630#
Median     680.633                   NA
g09/680.630         Mean      680.639                680.630#
Worst      680.673                680.630#
Std.     1.0E - 02               4.3E - 13
Best      7052.253               7059.802
Median     7215.357                  NA
g10/7049.248        Mean      7250.437               7075.832
Worst      7560.224               7098.254
Std.     1.2E + 02               8.5E + 00
Best       0.75#                  0.750#
Median      0.75#                    NA
g11/0.7499          Mean       0.75#                  0.750#
Worst       0.75#                  0.750#
Std.     3.4E - 04                   0
Best      -1.000#               -1.000000#
Median     -1.000#                   NA
g12/-1.000          Mean      -1.000#               -1.000000#
Worst      -0.994                -1.000000#
Std.     1.0E - 03                   0
Best      0.053950                  NA
Median     0.053952                  NA
g13/0.0539415       Mean      0.053959                  NA
Worst      0.053999                  NA
Std.     1.3E - 05                  NA

fcn/opt.           Status        DEDP

Best       -14.9999
Median      -14.9998
g01/-15.0000        Mean       -14.9999
Worst       -14.9998
Std.      3.9E - 05
Best      -0.604182
Median     -0.554816
g02/-0.803619       Mean      -0.553601
Worst      -0.511070
Std.      2.0E - 02
Best      -1.0005#
Median     -1.0005#
g03/-1.0005         Mean      -1.0005#
Worst      -1.0005#
Std.      9.4E - 14
Best    -30665.53867#
g04/-30665.53867   Median   -30665.53867#
Mean    -30665.53867#
Worst    -30665.53867#
Std.      1.1E - 11
Best     5126.49671#
Median    5126.49671#
g05/5126.49671      Mean     5126.49671#
Worst     5126.49671#
Std.      9.3E - 13
Best     -6961.814#
Median    -6961.814#
g06/-6961.814       Mean     -6961.814#
Worst     -6961.814#
Std.      1.9E - 02
Best        24.330
Median       24.340
g07/24.306          Mean        24.342
Worst        24.360
Std.      8.7E - 03
Best     -0.095825#
Median    -0.095825#
g08/-0.095825       Mean     -0.095825#
Worst     -0.095825#
Std.      2.8E - 17
Best      680.630#
Median     680.630#
g09/680.630         Mean      680.630#
Worst      680.630#
Std.      1.5E - 11
Best       7052.473
Median      7054.639
g10/7049.248        Mean       7054.623
Worst       7058.081
Std.      1.5E + 00
Best       0.7499#
Median      0.7499#
g11/0.7499          Mean       0.7499#
Worst       0.7499#
Std.      1.0E - 16
Best         -1#
Median        -1#
g12/-1.000          Mean         -1#
Worst         -1#
Std.          0
Best      0.0539415#
Median     0.0588605
g13/0.0539415       Mean      0.1586557
Worst      0.4461180
Std.      1.6E - 01

# indicates a value shown in boldface in the original table.

TABLE 2: Statistical results obtained by MDEDP over 30 independent
runs and the comparison with Var2  and SAMSDE-2  for the
remaining 9 test instances. "NF" means no feasible solution is found.

fcn/opt.       Status     MDEDP       Var2       SAMSDE-2


g14/           Best      -47.76489#    -47.76206      -46.66291
-47.76489    Median     -47.76489#    -47.52454      -46.63655
Mean      -47.76487     -47.46548      -46.43386
Worst     -47.76478     -46.53409      -46.00211
Std.      2.9E - 05     3.0E-01        3.7E-01
g15/           Best     961.715022#   961.715022#    961.715022#
961.715022   Median    961.715022#   961.715022#    961.715022#
Mean     961.715022#   961.716603     961.715022#
Worst    961.715022#   961.757371     961.715022#
Std.      6.9E - 13    7.9E - 03       4.1E-08
g16/           Best     -1.9051553#   -1.9051553#    -1.9051553#
-1.9051553   Median    -1.9051553#   -1.9051553#    -1.9051553#
Mean     -1.9051553#   -1.9051553#    -1.9051553#
Worst    -1.9051553#   -1.9051553#    -1.9051553#
Std.      6.6E - 16    3.2E - 10           0
g17/           Best      8853.5339#   8853.5398       8853.5397
8853.5339    Median     8853.5339#   8949.0383       8927.5977
Mean      8858.4711    8937.0179       8914.0841
Worst     8927.5917    8956.7443       8941.2845
Std.      1.9E + 01    2.7E + 01       3.1E + 01
g18/           Best     -0.8660254#   -0.866025#      -0.866025#
-0.8660254   Median    -0.8660254#   -0.866024       -0.866024
Mean     -0.8405529    -0.866020       -0.866018
Worst    -0.6749814    -0.865983       -0.865959
Std.      6.6E - 02    9.8E - 06       1.4E - 05
g19/           Best      32.65559#    32.69440        32.67473
32.65559     Median     32.65559#    32.85972        32.88680
Mean      32.65559#    32.93035        32.92857
Worst     32.65567     33.46296        33.37313
Std.      1.6E - 05    2.1E-01         2.0E-01
g21/           Best      193.72451#      NF           308.90899
193.72451    Median     193.72451       NF           324.71234
Mean      220.35875       NF           324.10440
Worst     330.18271       NF           329.54888
Std.      5.4E + 01       NF           6.7E + 01
g23/           Best     -400.05510#    -382.522771   -374.201398
-400.05510   Median    -400.05510#    -191.134573   -262.684823
Mean     -400.05509    -199.903075    -265.241202
Worst    -400.05505     -6.556980     -176.606485
Std.      1.3E-05       9.7E + 01      5.3E + 01
g24/           Best     -5.5080133#   -5.5080133#    -5.5080133#
-5.5080133   Median    -5.5080133#   -5.5080133#    -5.5080133#
Mean     -5.5080133#   -5.5080133#    -5.5080133#
Worst    -5.5080133#   -5.5080133#    -5.5080133#
Std.      1.8E - 15        0              0

# indicates a value shown in boldface in the original table.

TABLE 3: Results of the approximate two-sample t-test between the
other three algorithms and our MDEDP for six instances.

Algorithms    t-test        g02              g05

SR-MDEDP      [t.sub.o]   -[3.09#.sup.b]   [3.73#.sup.a]
f             45               30
ATMES-MDEDP   [t.sub.o]   -[4.42#.sup.b]   [3.50#.sup.a]
f             36               30
CBBO-MDEDP    [t.sub.o]   -[5.01#.sup.b]   [32.12#.sup.a]
f             30               30

Algorithms    t-test        g07              g09

SR-MDEDP      [t.sub.o]   [5.64#.sup.a]    [4.19#.sup.a]
f             30               30
ATMES-MDEDP   [t.sub.o]   [4.98#.sup.a]    [5.26#.sup.a]
f             30               30
CBBO-MDEDP    [t.sub.o]   [16.43#.sup.a]         --
f             30               --

Algorithms    t-test        g10              g13

SR-MDEDP      [t.sub.o]   [5.27#.sup.a]    [2.40#.sup.a]
f             30               30
ATMES-MDEDP   [t.sub.o]   [9.18#.sup.a]    2.04
f             30               30
CBBO-MDEDP    [t.sub.o]   [17.12#.sup.a]         NA
f             30               NA

# indicates a value shown in boldface in the original table.

TABLE 4: Results of the approximate two-sample t-test between the
other two algorithms and our MDEDP for 12 instances.

Algorithms     t-test           g02         g05         g07

Var2-MDEDP     [t.sub.o]   -[2.80#.sup.b]   1.97   [5.02#.sup.a]
f             44          30         30
SAMSDE-MDEDP   [t.sub.o]   -[2.07#.sup.b]   1.52   [5.92#.sup.a]
f             51          30         30

Algorithms     t-test           g09             g10

Var2-MDEDP     [t.sub.o]   [5.48#.sup.a]   [4.25#.sup.a]
f            30              30
SAMSDE-MDEDP   [t.sub.o]   [8.05#.sup.a]   [4.91#.sup.a]
f            30              30

Algorithms     t-test           g13             g14

Var2-MDEDP     [t.sub.o]   [3.51#.sup.a]   [5.47#.sup.a]
f            30              30
SAMSDE-MDEDP   [t.sub.o]   [3.65#.sup.a]   [19.70#.sup.a]
f            30              30

Algorithms     t-test           g17             g18

Var2-MDEDP     [t.sub.o]   [13.03#.sup.a]  -[2.11#.sup.a]
f             53             30
SAMSDE-MDEDP   [t.sub.o]   [8.38#.sup.a]   -[2.11#.sup.a]
f             49             30

Algorithms     t-test           g19             g21

Var2-MDEDP     [t.sub.o]   [7.17#.sup.a]        --
f            30              --
SAMSDE-MDEDP   [t.sub.o]   [7.48#.sup.a]   [6.60#.sup.a]
f            30              57

Algorithms     t-test           g23

Var2-MDEDP     [t.sub.o]   [11.30#.sup.a]
f            30
SAMSDE-MDEDP   [t.sub.o]   [13.93#.sup.a]
f            30

# indicates a value shown in boldface in the original table.

TABLE 5: Statistical results obtained by MDEDP for 10 among 13
test functions over 30 independent runs with different weighting
vectors in stage one.

fcn/opt.         Status      [omega] = (0.1, 0.9)   [omega] = (0.2, 0.8)

g01/-15.0000      Best               -15               -15
Median              -15               -15
Mean               -15               -15
Worst              -15               -15
Std.               0             2.1E - 14
g03/-1.0005       Best             -1.0005           -1.0005
Median            -1.0005           -1.0005
Mean             -1.0005           -1.0005
Worst            -1.0005           -1.0005
Std.           2.9E - 07         2.4E - 06
g05/5126.49671    Best           5126.49671         5126.49671
Median          5126.49671         5126.49671
Mean           5126.49671         5126.49671
Worst          5126.49671         5126.49671
Std.           3.0E - 12         4.8E - 12
g06/-6961.814     Best            -6961.814         -6961.814
Median           -6961.814         -6961.814
Mean            -6961.814         -6961.814
Worst           -6961.814         -6952.814
Std.           1.8E - 12         1.8E - 12
g07/24.306        Best             24.306             24.306
Median            24.306             24.306
Mean             24.306             24.306
Worst            24.306             24.306
Std.           4.6E - 06         3.5E - 06
g08/-0.095825     Best            -0.095825         -0.095825
Median           -0.095825         -0.095825
Mean            -0.095825         -0.095825
Worst           -0.095825         -0.095825
Std.           8.4E - 17         8.4E - 17
g09/680.630       Best             680.630           680.630
Median            680.630           680.630
Mean             680.630           680.630
Worst            680.630           680.630
Std.           5.2E - 10         1.1E - 11
g10/7049.248      Best            7049.249           7049.258
Median           7049.257           7049.357
Mean            7049.258           7049.382
Worst           7049.311           7049.901
Std.           1.1E - 02         1.3E - 01
g11/0.7499        Best             0.7499             0.7499
Median            0.7499             0.7499
Mean             0.7499             0.7499
Worst            0.7499             0.7499
Std.           3.4E - 16         3.4E - 16
g13/0.0539415     Best           0.05395415         0.05395415
Median          0.05395415         0.05395415
Mean           0.05395415         0.05395415
Worst          0.05395415         0.05395415
Std.           1.3E - 12         6.1E - 11

fcn/opt.         [omega] = (0.3, 0.7)    [omega] =     [omega] =
(0.4, 0.6)     (0.2, 0.8)

g01/-15.0000             -15                -15            -15
-15                -15            -15
-15                -15            -15
-15                -15            -15
1.4E - 14          1.2E - 14      1.6E - 14
g03/-1.0005            -1.0005            -1.0005        -1.0005
-1.0005            -1.0005        -1.0005
-1.0005            -1.0005        -1.0005
-1.0005            -1.0005        -1.0005
3.3E - 06          4.0E - 06      1.3E - 06
g05/5126.49671        5126.49671         5126.49671     5126.49671
5126.49671         5126.49671     5126.49671
5126.49671         5126.49671     5126.49671
5126.49671         5126.49671     5126.49671
3.9E - 12          4.0E - 12      3.0E - 12
g06/-6961.814         -6961.814          -6961.814      -6961.814
-6961.814          -6961.814      -6961.814
-6961.814          -6961.814      -6961.814
-6961.814          -6961.814      -6961.814
1.8E - 12          1.8E - 12      1.9E - 12
g07/24.306              24.306             24.306         24.306
24.306             24.306         24.306
24.306             24.306         24.306
24.306             24.306         24.306
7.6E - 06          3.2E - 06      5.3E - 06
g08/-0.095825         -0.095825          -0.095825      -0.095825
-0.095825          -0.095825      -0.095825
-0.095825          -0.095825      -0.095825
-0.095825          -0.095825      -0.095825
8.4E - 17          8.4E - 17      8.4E - 17
g09/680.630            680.630            680.630        680.630
680.630            680.630        680.630
680.630             680.63        680.630
680.630            680.630        680.630
5.5E - 08          3.2E - 07      5.2E - 10
g10/7049.248           7049.252           7049.258       7049.249
7049.372           7049.344       7049.250
7049.405           7049.370       7049.258
7049.773           7049.785       7049.311
1.3E - 01          1.0E - 01      2.8E - 02
g11/0.7499              0.7499             0.7499         0.7499
0.7499             0.7499         0.7499
0.7499             0.7499         0.7499
0.7499             0.7499         0.7499
3.4E - 16          3.4E - 16      3.4E - 16
g13/0.0539415         0.05395415         0.05395415     0.05395415
0.05395415         0.05395415     0.05395415
0.05395415         0.05395415     0.05395415
0.05395415         0.05395415     0.05395415
4.4E - 12          3.8E - 10      1.3E - 12
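Each row block in the table above condenses 30 independent run results into Best/Median/Mean/Worst/Std statistics. A minimal sketch of that summarization for a minimization problem, using hypothetical run values and the unbiased sample standard deviation (the helper name `summarize` is illustrative):

```python
import statistics

def summarize(results, minimize=True):
    """Condense a list of per-run objective values into the five summary statistics."""
    ordered = sorted(results)
    # For minimization the smallest value is Best; for maximization, the largest.
    best, worst = (ordered[0], ordered[-1]) if minimize else (ordered[-1], ordered[0])
    return {
        "Best": best,
        "Median": statistics.median(results),
        "Mean": statistics.fmean(results),
        "Worst": worst,
        "Std": statistics.stdev(results),   # unbiased sample standard deviation
    }
```

A small Std alongside Best = Worst (as in most rows of the table) indicates the algorithm converged to the same solution in every run.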

TABLE 6: Statistical results obtained by MDE-DP over 30 independent
runs with different upper bounds [[omega].sup.max.sub.1] on the first
component of the weighting vector [omega] in stage two.

                          [[omega].sup.max.sub.1] =
fcn/opt.          Status      0.4          0.45         0.5          0.6          0.7

g01/-15.0000      Best        -15          -15          -15          -15          -15
                  Median      -15          -15          -15          -15          -15
                  Mean        -15          -15          -15          -15          -15
                  Worst       -15          -15          -15          -15          -15
                  Std.     1.3E - 14    2.4E - 14        0        1.8E - 14    1.8E - 14
g02/-0.803619     Best     -0.794882    -0.803617    -0.803616    -0.803615    -0.803613
                  Median   -0.757723    -0.771748    -0.763716    -0.771840    -0.773241
                  Mean     -0.750477    -0.758338    -0.757713    -0.761655    -0.764268
                  Worst    -0.640313    -0.590475    -0.616138    -0.601025    -0.647718
                  Std.     4.0E - 02    4.4E - 02    1.9E - 02    3.8E - 02    3.5E - 02
g03/-1.0005       Best      -1.0005      -1.0005      -1.0005      -1.0005      -1.0005
                  Median    -1.0005      -1.0005      -1.0005      -1.0005      -1.0005
                  Mean      -1.0005      -1.0005      -1.0005      -1.0005      -1.0005
                  Worst     -1.0005      -1.0005      -1.0005      -1.0005      -1.0005
                  Std.     2.9E - 06    1.8E - 06    2.9E - 07    2.7E - 06    2.9E - 06
g04/30665.539     Best     30665.539    30665.539    30665.539    30665.539    30665.539
                  Median   30665.539    30665.539    30665.539    30665.539    30665.539
                  Mean     30665.539    30665.539    30665.539    30665.539    30665.539
                  Worst    30665.539    30665.539    30665.539    30665.539    30665.539
                  Std.     1.5E - 11    1.5E - 11    1.4E - 11    1.5E - 11    1.5E - 11
g05/5126.49671    Best     5126.49671   5126.49671   5126.49671   5126.49671   5126.49671
                  Median   5126.49671   5126.49671   5126.49671   5126.49671   5126.49671
                  Mean     5126.49671   5126.49671   5126.49671   5126.49671   5126.49671
                  Worst    5126.49671   5126.49671   5126.49671   5126.49671   5126.49671
                  Std.     4.7E - 12    4.0E - 12    3.0E - 12    4.0E - 12    4.8E - 12
g06/-6961.814     Best     -6961.814    -6961.814    -6961.814    -6961.814    -6961.814
                  Median   -6961.814    -6961.814    -6961.814    -6961.814    -6961.814
                  Mean     -6961.814    -6961.814    -6961.814    -6961.814    -6961.814
                  Worst    -6961.814    -6961.814    -6961.814    -6961.814    -6961.814
                  Std.     1.8E - 12    1.8E - 12    1.8E - 12    1.8E - 12    1.8E - 12
g07/24.306        Best       24.306       24.306       24.306       24.306       24.306
                  Median     24.306       24.306       24.306       24.306       24.306
                  Mean       24.306       24.306       24.306       24.306       24.306
                  Worst      24.307       24.306       24.306       24.307       24.307
                  Std.     1.2E - 04    8.5E - 06    4.6E - 06    6.1E - 05    1.4E - 04
g08/-0.095825     Best     -0.095825    -0.095825    -0.095825    -0.095825    -0.095825
                  Median   -0.095825    -0.095825    -0.095825    -0.095825    -0.095825
                  Mean     -0.095825    -0.095825    -0.095825    -0.095825    -0.095825
                  Worst    -0.095825    -0.095825    -0.095825    -0.095825    -0.095825
                  Std.     8.5E - 17    8.4E - 17    8.4E - 17    8.4E - 17    8.4E - 17
g09/680.630       Best      680.630      680.630      680.630      680.630      680.630
                  Median    680.630      680.630      680.630      680.630      680.630
                  Mean      680.630      680.630      680.630      680.630      680.630
                  Worst     680.630      680.630      680.630      680.630      680.630
                  Std.     1.7E - 12    1.2E - 12    3.6E - 10    6.2E - 12    1.1E - 08
g10/7049.248      Best     7049.260     7049.270     7049.249     7049.259     7049.257
                  Median   7049.350     7049.364     7049.257     7049.328     7049.356
                  Mean     7049.397     7049.410     7049.258     7049.360     7049.370
                  Worst    7049.864     7049.954     7049.311     7049.703     7049.696
                  Std.     1.3E - 02    1.4E - 01    1.1E - 02    9.6E - 02    7.4E - 02
g11/0.7499        Best       0.7499       0.7499       0.7499       0.7499       0.7499
                  Median     0.7499       0.7499       0.7499       0.7499       0.7499
                  Mean       0.7499       0.7499       0.7499       0.7499       0.7499
                  Worst      0.7499       0.7499       0.7499       0.7499       0.7499
                  Std.     3.4E - 16    3.4E - 16    3.4E - 16    3.4E - 16    3.4E - 16
g12/-1            Best        -1           -1           -1           -1           -1
                  Median      -1           -1           -1           -1           -1
                  Mean        -1           -1           -1           -1           -1
                  Worst       -1           -1           -1           -1           -1
                  Std.         0            0            0            0            0
g13/0.0539415     Best     0.05395415   0.05395415   0.05395415   0.05395415   0.05395415
                  Median   0.05395415   0.05395415   0.05395415   0.05395415   0.05395415
                  Mean     0.05395415   0.05395415   0.05395415   0.05395415   0.06163874
                  Worst    0.05395415   0.05395415   0.05395415   0.05395415   0.43880261
                  Std.     2.7E - 13    5.6E - 10    1.3E - 12    5.6E - 10    5.4E - 02
```
Research Article: Dong, Ning; Wang, Yuping. Journal of Applied Mathematics, Jan 1, 2014.
Keywords: evolutionary algorithms, mathematical optimization, memetics, optimization theory.