
A Hybrid Pathfinder Optimizer for Unconstrained and Constrained Optimization Problems

1. Introduction

Metaheuristic algorithms are characterized by having few parameters and operators, which makes them easy to apply to practical problems. Every metaheuristic algorithm has its advantages and disadvantages. For instance, the artificial bee colony (ABC) algorithm is a relatively new metaheuristic inspired by the foraging behavior of honey bee colonies [1]. Because ABC is easy to implement, has few control parameters, and shows good optimizing performance [2], it has been successfully applied to many optimization problems [3, 4]. However, as the dimensionality of the search space increases, the convergence behavior of ABC deteriorates. The reason is that ABC relies on the exchange of information between individuals, but each individual exchanges information on only one dimension with a random neighbor in each search step. Yang carried out a critical analysis of the ABC algorithm by examining how it mimics evolutionary operators [5]; in essence, the operators in ABC are mutation operators, so ABC shows a slow convergence speed. Like ABC, the artificial butterfly optimization (ABO) algorithm also simulates a biological phenomenon: it mimics the mate-finding strategy of some butterfly species and has been tested on various benchmarks [6]. However, the "no free lunch" theorems [7] suggest that no single algorithm can show the best performance on all problems. Many strategies, including improving existing algorithms and designing new ones, can yield better optimization results; they include opposition-based learning, chaotic theory, topological structure-based methods, and hybridization. Hybridizing heterogeneous biologically inspired algorithms is a good way to balance exploration and exploitation [8]. For example, to increase the diversity of the bat swarm, a hybrid HS/BA method was proposed that adds the pitch adjustment operation of HS to BA [9]. The hybridization of nature-inspired algorithms evolved as a solution to overcome certain shortcomings observed during the use of classical algorithms [10]. Good convergence requires clever exploitation at the right time and in the right place, which is still an open problem [11].

The pathfinder algorithm (PFA) is a relatively new metaheuristic inspired by the collective movement of animal groups; it mimics the leadership hierarchy of swarms to find the best food area or prey [12]. PFA provides superior performance on some optimization problems. However, as the dimension of a problem grows, its performance degrades because PFA relies mainly on its mathematical update formulas. Hybridizing heterogeneous biologically inspired algorithms can avoid the shortcomings of a single algorithm because it increases the exchange of information among individuals. The differential evolution (DE) algorithm, proposed by Storn and Price [13], performs very well in convergence [14]; in particular, DE performs well in searching local optima and shows good robustness [15]. In view of the fast convergence speed of DE, its mutation operator is incorporated into PFA, and a hybrid pathfinder algorithm is proposed in this paper.

The rest of the paper is organized as follows. Section 2 will introduce the canonical PFA. Section 3 will present HPFA in detail. Section 4 will give the details of the experiment for unconstrained problems. The experiment results are also presented and discussed in this section. Section 5 introduces the data clustering problem and how the HPFA is used for clustering. Section 6 will give the details of the experiment for constrained problems. Section 7 will give the details of the experiment for engineering design problems. Section 8 gives the conclusions.

2. The Canonical Pathfinder Algorithm

The canonical PFA mimics the leadership hierarchy of an animal group searching for the best prey. In the PFA, the individual with the best fitness is called the pathfinder; the remaining members of the swarm, called followers in this paper, follow the pathfinder. The PFA consists of three phases: the initialization phase, the pathfinder's phase, and the followers' phase.

In the initialization phase, the algorithm randomly produces a number of positions in the search range according to equation (1). After that, the fitness values of the positions are calculated, and the individual with the best fitness is selected as the pathfinder:

$$x_{i,j} = x_j^{\min} + \mathrm{rand}[0,1]\,\bigl(x_j^{\max} - x_j^{\min}\bigr), \qquad (1)$$

where $i \in \{1, 2, \ldots, SN\}$ and $j \in \{1, 2, \ldots, D\}$. $SN$ is the swarm size, $D$ is the number of parameters of the optimization problem, and $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper boundary values of the $j$th parameter.

In the pathfinder's phase, the position of the pathfinder is updated using equation (2). A greedy selection strategy is employed by comparing the fitness value of the new position of the pathfinder and the old one:

$$x_p^{k+1} = x_p^{k} + 2 r_3 \bigl(x_p^{k} - x_p^{k-1}\bigr) + A, \qquad (2)$$

where $x_p$ is the position vector of the pathfinder, $k$ is the current iteration, and $r_3$ is a random vector uniformly generated in the range $[0, 1]$. $A$ is generated using the following equation:

$$A = u_2\, e^{-2k/E}, \qquad (3)$$

where $u_2$ is a random vector in the range $[-1, 1]$, $k$ is the current iteration, and $E$ is the maximum number of iterations.

In the followers' phase, the position of each follower is updated using equation (4). A greedy selection strategy is employed by comparing the fitness value of the new position of the follower and the old one. If the fitness of the follower with the best fitness is better than that of the pathfinder, the pathfinder is replaced with the follower:

$$x_i^{k+1} = x_i^{k} + R_1 \bigl(x_j^{k} - x_i^{k}\bigr) + R_2 \bigl(x_p^{k} - x_i^{k}\bigr) + \varepsilon, \qquad (4)$$

$$R_1 = \alpha r_1, \qquad R_2 = \beta r_2, \qquad (5)$$

$$\varepsilon = \Bigl(1 - \frac{k}{E}\Bigr) u_1 D_{ij}, \qquad (6)$$

$$D_{ij} = x_i - x_j, \qquad (7)$$

where $x_i$ is the position vector of the $i$th follower, $x_j$ is the position vector of the $j$th follower, $k$ is the current iteration, and $E$ is the maximum number of iterations. $r_1$ and $r_2$ are random values generated in the range $[0, 1]$, $\alpha$ and $\beta$ are random values generated in the range $[1, 2]$, and $D_{ij}$ is the distance between the $i$th and the $j$th follower.

The termination condition of the PFA may be a maximum number of iterations or a maximum number of function evaluations.
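To make equations (1)-(7) concrete, the following minimal Python sketch implements one PFA iteration for a minimization problem. It is our own illustration rather than the authors' Matlab implementation; in particular, the boundary clipping, the random choice of the neighbouring follower $x_j$, and the bookkeeping of the pathfinder's previous position are assumptions:

import numpy as np

def pfa_iteration(X, fit, x_p_prev, fitness, k, E, lb, ub, rng):
    # One iteration of the canonical PFA (minimization).
    # X: (SN, D) swarm positions; fit: fitness of each member;
    # x_p_prev: the pathfinder's position from the previous iteration.
    SN, D = X.shape
    p = np.argmin(fit)  # pathfinder = member with the best fitness
    # Pathfinder's phase, equations (2)-(3)
    A = rng.uniform(-1.0, 1.0, D) * np.exp(-2.0 * k / E)
    r3 = rng.uniform(0.0, 1.0, D)
    cand = np.clip(X[p] + 2.0 * r3 * (X[p] - x_p_prev) + A, lb, ub)
    fc = fitness(cand)
    if fc < fit[p]:  # greedy selection
        X[p], fit[p] = cand, fc
    # Followers' phase, equations (4)-(7)
    for i in range(SN):
        if i == p:
            continue
        j = rng.integers(SN)  # neighbouring follower (random choice is our assumption)
        alpha, beta = rng.uniform(1.0, 2.0, 2)
        r1, r2 = rng.uniform(0.0, 1.0, 2)
        u1 = rng.uniform(-1.0, 1.0, D)
        eps = (1.0 - k / E) * u1 * (X[i] - X[j])  # equations (6)-(7)
        cand = np.clip(X[i] + alpha * r1 * (X[j] - X[i])
                       + beta * r2 * (X[p] - X[i]) + eps, lb, ub)
        fc = fitness(cand)
        if fc < fit[i]:  # greedy selection
            X[i], fit[i] = cand, fc
    return X, fit, X[p].copy()  # X[p] serves as x_p_prev for the next iteration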

3. The Hybrid Pathfinder Algorithm

The commonly used hybridization schemes are mainly serial and parallel. In the serial scheme, the added optimization operation is applied to all members of the swarm in each generation; this scheme is used in the proposed hybrid algorithm. In DE, the differential mutation operator is the main operation. In view of the fast convergence speed of DE, this mutation operator is incorporated into PFA to form a new hybrid pathfinder algorithm (HPFA). HPFA is identical to the canonical PFA except that a mutation phase is added after the followers' phase. The pseudocode of HPFA is listed in Algorithm 1. The steps of the mutation phase are given below, followed by an illustrative code sketch. For each follower $x_i$ in the swarm, do the following steps:

Step 1: select three different followers $x_r$, $x_p$, and $x_q$ from the followers, where the indices $r$, $p$, and $q$ are all different from $i$.

Step 2: for each dimension $j \in [1, D]$, change that dimension of $x_i$ with probability CR, where CR is a crossover probability in the range $[0, 1]$; the changed components of the trial vector $v_i$ are produced according to equation (8). Any component exceeding its predetermined boundaries is set to the boundary value:

$$v_{i,j} = x_{r,j} + F \bigl(x_{p,j} - x_{q,j}\bigr), \qquad (8)$$

where $i$, $r$, $p$, and $q$ are four different integers generated by random permutation, $F$ is the differential weight in the range $[0, 2]$, and $j \in [1, D]$ is the index of the dimension being changed.

Step 3: calculate the fitness of the new position vector.

Step 4: a greedy selection strategy is employed by comparing the fitness of the trial vector with that of the original one. If the trial vector is better, it replaces the original; otherwise, the original remains unchanged.
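A minimal Python sketch of this mutation phase is given below (our illustration, not the authors' code; following common DE practice, one randomly chosen dimension j_rand is always changed, an assumption consistent with the randomly selected dimension index mentioned above):

import numpy as np

def mutation_phase(X, fit, fitness, CR, F, lb, ub, rng):
    # DE-style mutation phase of HPFA over the (SN, D) follower swarm.
    SN, D = X.shape
    for i in range(SN):
        # Step 1: three mutually different indices r, p, q, all different from i
        r, p, q = rng.choice([m for m in range(SN) if m != i], size=3, replace=False)
        v = X[i].copy()
        j_rand = rng.integers(D)  # at least one dimension is always changed
        for j in range(D):
            # Step 2: change dimension j with probability CR, equation (8)
            if rng.random() < CR or j == j_rand:
                v[j] = X[r, j] + F * (X[p, j] - X[q, j])
        v = np.clip(v, lb, ub)  # reset to the boundary value if out of range
        fv = fitness(v)  # Step 3
        if fv < fit[i]:  # Step 4: greedy selection
            X[i], fit[i] = v, fv
    return X, fit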

4. Unconstrained Benchmark Problems

All algorithms were implemented in Matlab for the various test functions. In order to compare the different algorithms fairly, the number of function evaluations (FEs) is used as the measure criterion in this paper.

4.1. Benchmark Functions. Evolutionary algorithms are usually assessed experimentally on various test problems because an analytical assessment of their behavior is very complex [16]. The twenty-four benchmark functions used here are widely adopted by other researchers in many works [2, 17, 18]. In this paper, all functions use their standard ranges. These twenty-four diverse and difficult minimization problems comprise unimodal continuous functions ($f_1$-$f_8$), multimodal continuous functions ($f_9$-$f_{16}$), and composition functions ($f_{17}$-$f_{24}$). The formulas of these functions are presented in Table 1. Functions $f_{13}$-$f_{16}$ are four rotated functions employed in Liang's work [19]. In the rotated functions, the fitness is calculated from a rotated variable $y$, produced by left-multiplying the original variable $x$ by an orthogonal matrix (instead of using $x$ directly). The orthogonal matrix is generated according to Salomon's method [20]. Functions $f_{17}$-$f_{24}$ are eight composition functions.

These composition functions were specifically designed for the competition, and each comprises the sum of three or five unimodal and/or multimodal functions, leading to very challenging properties: multimodal, nonseparable, asymmetrical, and with different properties around different local optima.

4.2. Parameter Study. The choice of parameters can affect the performance of an algorithm. QPSO, for example, has demonstrated high potential for setting the parameters of optimization methods [21], and although computational intelligence methods have demonstrated their ability to monitor complex large-scale systems, the selection of optimal parameters for efficient operation remains very challenging [22]. CR and F are the two parameters of HPFA. In order to analyze their impact, we carried out the following experiments. In all experiments, the population size of all algorithms was 100, and the maximum number of evaluations for 20-dimensional problems was 100,000.

Four continuous benchmark functions, Sphere 20D, Zakharov 20D, Sumsquares 20D, and Quadric 20D, were employed to investigate the impact of CR and F. CR and F were set to different values, and each combination was run on every function independently. It is worth noting that CR and F take values in continuous intervals with infinitely many possible settings; here, three different values of each parameter are used. The experimental results, in terms of the mean and standard deviation of the optimal solutions over 30 runs, are listed in Tables 2-5. From the results, we find that HPFA with CR equal to 0.9 and F equal to 0.1 performs best on all four functions. Accordingly, CR = 0.9 and F = 0.1 are used in the following experiments.

4.3. Comparison with Other Algorithms. In order to evaluate the performance of HPFA, PFA [12], differential evolution (DE) [13], canonical PSO with constriction factor (PSO) [23], and cooperative PSO (CPSO) [24] were employed for comparison. PSO is a classical population-based paradigm simulating the foraging behavior of social animals. CPSO is a cooperative PSO model that coevolves multiple PSO subpopulations. The set of twenty-four well-known benchmark functions described above was used in this experiment.

4.3.1. Experiment Sets. The population size of all algorithms was 100, and the maximum number of evaluations for 30-dimensional problems was 100,000. In order to allow meaningful statistical analysis, each algorithm was run 30 times, and the mean and standard deviation are taken as the final result. The specific parameters of the comparison algorithms follow the settings in the original literature. For CPSO and PSO, the learning rates C1 and C2 were both set to 2.05, and the constriction factor X = 0.729; the split factor of CPSO equals the number of dimensions. In DE, single-point crossover is employed, the crossover rate is 0.95, and F is 0.1.

All algorithms were implemented in Matlab R2010a on a computer with an Intel Core i5-2450M CPU at 2.5 GHz and 2 GB RAM, running Windows 7.

4.3.2. Experimental Results and Analysis. The experimental results, including the mean and the standard deviation of the function values obtained by the five algorithms with 30 dimensions, are listed in Table 6. The best values obtained on each function are marked in bold. Rank represents the performance order of the five algorithms on each benchmark function. HPFA clearly performed best on most functions. The mean best function value profiles of the five algorithms with 30 dimensions are shown in Figure 1.

(1) Continuous Unimodal Functions. On the Sphere function, the performance order of the five algorithms is HPFA > PFA > DE > CPSO > PSO. As seen from Figure 1(a), the result achieved by HPFA improved continually and reached the best value, while the performance of DE, PSO, and CPSO deteriorated on this function. HPFA has very strong solving performance on the Sphere function: its solution quality differs from that of these three algorithms by about 30 orders of magnitude.
Algorithm 1: Pseudocode of HPFA.

(1) Initialization phase:
      Randomly initialize the swarm according to equation (1);
      Calculate the fitness of each member of the swarm;
      Pathfinder = best member;

(2) repeat

(3)   Pathfinder's phase:
        Produce a new position vector according to equation (2);
        Calculate the fitness of the new position vector;
        Apply greedy selection;

(4)   Followers' phase:
        For each follower
          Produce a new position vector according to equation (4);
          Calculate the fitness of the new position vector;
          Apply greedy selection;
        End for

(5)   Mutation phase:
        For each follower
          Select three different followers x_r, x_p, x_q;
          For each dimension j in [1, D]
            If (rand() < CR)
              Change dimension j according to equation (8);
            End if
          End for
          Calculate the fitness of the new position vector;
          Apply greedy selection;
        End for

(6)   Memorize the best solution found so far;
      Pathfinder = best member;

(7) until the termination condition is met


On the Sinproblem function, the performance order of the five algorithms is HPFA > DE > CPSO > PFA > PSO. The behavior of HPFA is much like that on Sphere: its result improved continually. PFA and PSO converged very fast at the beginning and then became trapped in a local minimum; CPSO and DE converged continually, but slowly, as seen from Figure 1(d). The performance of PFA, PSO, CPSO, and DE deteriorated on this function.

On the Sumsquares function, the performance order of the five algorithms is HPFA > PFA > DE > CPSO > PSO. The behavior of HPFA is much like that on Sphere and Sinproblem: its result improved continually. DE converged slowly, as seen from Figure 1(e). The performance of PSO and CPSO deteriorated on this function.

On the Schwefel2.22 function, the performance order of the five algorithms is HPFA > PFA > DE > CPSO > PSO. The performance of PSO and CPSO deteriorated on this function. The results achieved by PFA and HPFA improved continually, with HPFA finally obtaining better results than PFA, as seen from Figure 1(h).

(2) Continuous Multimodal Functions. On the Ackley function, the performance order of the five algorithms is HPFA > DE > CPSO > PFA > PSO. The Ackley function is widely used for testing optimization algorithms because its many local minima trap many of them. The performance of DE, PSO, CPSO, and PFA deteriorated on this function, whereas HPFA shows a much stronger global searching ability, as seen from Figure 1(k): its solution is about 11 orders of magnitude better than that of DE.

The multimodal functions $f_{13}$-$f_{16}$ are regarded as the most difficult functions to optimize since their number of local minima increases exponentially with the function dimension.

On the Rot_rastrigin function, the performance order of the five algorithms is HPFA > PFA > PSO > DE > CPSO. The result achieved by HPFA improved continually and reached the best value, as seen from Figure 1(m). PFA and PSO converged to a local minimum at about 40,000 FEs. CPSO and DE performed worse than PSO and PFA.

On the Rot_schwefel function, the performance order of the five algorithms is HPFA > PFA > DE > CPSO > PSO. CPSO and PSO converged fast at first but became trapped in a local minimum very soon. DE converged continually, but slowly. HPFA finally obtained better results than PFA, as seen from Figure 1(n).

On the Rot_ackley function, the performance order of the five algorithms is HPFA > DE > PFA > PSO > CPSO. The solution of HPFA is about 14 orders of magnitude better than that of DE. At the very beginning, PFA, CPSO, DE, and PSO converged very fast and then became trapped in a local minimum, while the result achieved by HPFA improved continually and reached the best value, as seen from Figure 1(o).

On the Rot_griewank function, the performance order of the five algorithms is HPFA > PFA > DE > PSO > CPSO. CPSO, PSO, and DE converged very slowly. HPFA and PFA converged continually and then became trapped in a local minimum, but HPFA performed better than PFA, as seen from Figure 1(p).

(3) Composition Functions. On Composition Functions 4, 5, 7, and 8, HPFA obtained the best result.

From the above analysis, we can observe that HPFA has a very strong ability to exploit the optimum. HPFA showed continual improvement especially on Sphere, Sinproblem, Sumsquares, Schwefel2.22, Ackley, Rotated Rastrigin, Rotated Schwefel, Rotated Ackley, and Rotated Griewank.

4.4. Statistical Analysis. HPFA obtained the best ranking with a dimension of 30. Statistical evaluation of experimental results is considered an essential part of validating new intelligent methods. The Iman-Davenport and Holm tests are nonparametric statistical methods used to analyze the behavior of evolutionary algorithms in many recent works; details of the two methods are given in reference [25]. The results of the Iman-Davenport test are shown in Table 7. The values are distributed according to the F-distribution with 4 and 92 degrees of freedom, and the critical values are looked up in the F-distribution table at a level of 0.05. As can be seen in Table 7, the Iman-Davenport values are larger than their critical values, which means that significant differences exist among the rankings of the algorithms.

The Holm test was employed as a post hoc procedure, with HPFA chosen as the control algorithm. The results of the Holm tests are given in Table 8. The $\alpha/i$ values listed in the tables are at a level of 0.05.

HPFA obtained the best ranking and is the control algorithm. As seen in Table 8, the p values of PSO, CPSO, DE, and PFA are smaller than their $\alpha/i$ values, which means that the equality hypotheses are rejected and significant differences exist between these four algorithms and the control algorithm.
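For reference, the Iman-Davenport statistic can be computed from the Friedman statistic as in the sketch below (our illustration; ranks is an N x k matrix of per-function ranks, here with N = 24 functions and k = 5 algorithms, which yields the 4 and 92 degrees of freedom quoted above):

import numpy as np

def iman_davenport(ranks):
    # Iman-Davenport F statistic from an (N, k) matrix of per-problem ranks.
    N, k = ranks.shape
    R = ranks.mean(axis=0)  # average rank of each algorithm
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4.0)  # Friedman
    return (N - 1) * chi2 / (N * (k - 1) - chi2)  # compare to F((k - 1), (k - 1)(N - 1))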

4.5. Algorithm Complexity Analysis. In many heuristic algorithms, most of the computation is spent on fitness evaluation in each generation, and the cost of evaluating one individual depends on the complexity of the test function, so it is difficult to give a concise analytical comparison of running time for all algorithms. Instead, the algorithmic time response was evaluated directly on the benchmark functions ($f_1$-$f_{24}$); the total computing time for 30 sample runs of all algorithms is given in Figure 2. From the results, it is observed that CPSO takes the most computing time among the compared algorithms and PSO the least. In summary, compared with the other algorithms, HPFA requires less computing time to achieve better results.
Algorithm 2: Pseudocode of fitness calculation.

Decode $X_i$ into $w$ cluster centers following equation (12)
Calculate the Euclidean distance between all data objects and the cluster centers following equation (10)
Assign each data object to the nearest cluster center
Calculate the total within-cluster variance following equation (9)
Return fitness


5. Application to Data Clustering

Data clustering is a typical unsupervised learning task used to divide samples of unknown categories. Clustering algorithms are widely used in banking, retail, insurance, medicine, the military, and other fields. Many clustering algorithms have been proposed, including hierarchical, partitioning, and density-based methods. In this paper, we focus on partitioning clustering. Given a set of n data objects and the number of clusters w to be formed, a partitioning method divides the set of objects into w parts, each representing a cluster. The final clustering optimizes a partition criterion so that the objects within a cluster are similar, while objects in different clusters are not. Generally, the total mean square quantization error (MSE) is used as the standard measure function for partitioning. Let $X = (x_1, x_2, \ldots, x_n)$ be a set of $n$ data objects and $C = (c_1, c_2, \ldots, c_w)$ be a set of $w$ clusters. The following equation defines the MSE; minimizing this objective function is known to be NP-hard (even for $w = 2$) [26]:

$$\mathrm{Perf}(X, C) = \sum_{i=1}^{n} \min\bigl\{ \lVert x_i - c_j \rVert^2 \mid j = 1, 2, \ldots, w \bigr\}, \qquad (9)$$

$$\lVert x_i - c_j \rVert = \sqrt{\sum_{m=1}^{p} \bigl( x_{i,m} - c_{j,m} \bigr)^2}, \qquad (10)$$

where $w$ is the number of clusters, $i \in [1, n]$, $c_j$ denotes a cluster center, and $n$ denotes the size of the dataset. Each data object $x_i$ in the dataset has $p$ features, and $\lVert x_i - c_j \rVert$ denotes the Euclidean distance between $x_i$ and $c_j$.

5.1. The HPFA Algorithm for Data Clustering. In HPFA for clustering, each individual encodes a set of cluster centers according to equation (11); according to equation (12), each individual can be decoded into its $w$ cluster centers:

$$X_i = \{x_1, x_2, \ldots, x_p, x_{p+1}, \ldots, x_{w \cdot p}\}, \qquad (11)$$

where w is the number of clusters and p is the number of features of the data clustering problem:

$$Cx_m = \{x_{(m-1) \cdot p + 1},\ x_{(m-1) \cdot p + 2},\ \ldots,\ x_{m \cdot p}\}, \qquad m = 1, 2, \ldots, w. \qquad (12)$$

According to equation (9), the fitness of each individual can be calculated. Algorithm 2 gives the main steps of the fitness function.
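A Python sketch of this fitness function is given below (our illustration of Algorithm 2; the variable names are ours):

import numpy as np

def clustering_fitness(X_i, data, w):
    # Total within-cluster variance, equation (9), for one individual X_i.
    # X_i:  flat vector of length w*p encoding w cluster centers, equation (11)
    # data: (n, p) array of n data objects with p features
    n, p = data.shape
    centers = X_i.reshape(w, p)  # decode the individual, equation (12)
    # Euclidean distance between every object and every center, equation (10)
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    # assign each object to its nearest center and sum the squared distances
    return float(np.sum(np.min(dists, axis=1) ** 2))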

5.2. Experiment Sets. To verify the performance of HPFA for data clustering, PFA, CPSO, and PSO are compared with it on several datasets, including Glass, Wine, Iris, and LD, selected from the UCI machine learning repository [27].

In order to provide meaningful statistical analyses, each algorithm was run 30 times independently; the experimental results include the mean and standard deviation. The population size of the four algorithms was set to 20, and the maximum number of evaluations was 10,000. The parameters of HPFA, PFA, CPSO, and PSO are the same as those in Section 4.

5.3. Results and Analysis. Table 9 gives the results obtained by HPFA, PFA, CPSO, and PSO. Figure 3 shows the mean minimum total within-cluster variance profiles of HPFA, PFA, CPSO, and PSO.

The Glass dataset consists of 214 instances characterized by nine attributes; there are two categories in the data. As seen from Figure 3(a), CPSO converged quickly from the beginning and became trapped in a local minimum, while HPFA and PFA converged continually.

The Wine dataset consists of 178 objects characterized by thirteen features; there are three categories in the data. As seen from Figure 3(b), CPSO and PSO converged quickly from the beginning and became trapped in a local minimum, while PFA and HPFA converged continually until about 1000 FEs.

The Iris dataset consists of 150 objects characterized by four features; there are three categories in the data. As seen from Figure 3(c), CPSO converged more quickly but clearly became trapped in a local minimum, while PSO converged slowly.

The LD dataset consists of 345 objects characterized by six features; there are two categories. On the LD dataset, CPSO clearly became trapped in a local minimum at the very beginning; PSO converged slowly but obtained better results than CPSO, as seen from Figure 3(d).

The performance of HPFA and PFA is very similar on Wine, Iris, and LD, but HPFA obtained the best result. The experimental results given in Table 9 show that HPFA outperforms the other clustering algorithms in terms of solution quality on all four datasets (Glass, Wine, Iris, and LD).

6. Constrained Benchmark Problems

The experimental settings are as follows: the population size was 100 for HPFA. In order to allow meaningful statistical analyses, each algorithm was run 25 times, and the mean and standard deviation are taken as the final result. "FEs," "SD," and "NA" stand for the number of function evaluations, standard deviation, and not available, respectively. The mathematical formulations of the constrained benchmark functions (problems 1-4) are given in Appendixes A-D.

6.1. Constrained Problem 1. In order to compare the performance of HPFA on constrained problem 1 (see Appendix A), WCA [28], IGA [29], PSO [30], CPSO-GD [31], and CDE [32] were employed for comparison. Table 10 gives the best results obtained by HPFA, WCA, and IGA. Table 11 gives the comparison of statistical results obtained from various algorithms for constrained problem 1. As shown in Table 11, HPFA is superior to the other algorithms in terms of the number of function evaluations.

6.2. Constrained Problem 2. In order to compare the performance of HPFA on constrained problem 2 (see Appendix B), WCA [28], PSO [30], PSO-DE [30], GA1 [33], HPSO [34], and DE [35] were employed for comparison. Table 12 gives the best results obtained by GA1, WCA, and HPFA. Table 13 gives the comparison of statistical results obtained from various algorithms for constrained problem 2. As shown in Table 13, HPFA offered the best solution quality with the smallest number of function evaluations for this problem: the proposed HPFA reached the best solution (-30665.5386) in 15,000 function evaluations.

6.3. Constrained Problem 3. In order to compare the performance of HPFA on constrained problem 3 (see Appendix C), WCA [28], PSO [30], PSO-DE [30], DE [35], and CULDE [36] were employed for comparison. Table 14 gives the best results obtained by GA1, WCA, and HPFA. Table 15 gives the comparison of statistical results obtained from various algorithms for constrained problem 3. As shown in Table 15, HPFA reached the best solution (-0.999989) in 100,000 function evaluations.

6.4. Constrained Problem 4. In order to compare the performance of HPFA on constrained problem 4 (see Appendix D), WCA [28], HPSO [34], PESO [37], and TLBO [38] were employed for comparison. Table 16 gives the best results obtained by WCA and HPFA. Table 17 gives the comparison of statistical results obtained from various algorithms for constrained problem 4. As shown in Table 17, HPFA reached the best solution (-1) in 5,000 function evaluations, considerably fewer than the other compared algorithms.

7. Engineering Design Problems

7.1. Three-Bar Truss Design Problem. In order to compare the performance of HPFA on the three-bar truss design problem (see Appendix E), WCA [28] and PSO-DE [30] were employed for comparison. Table 18 gives the best results obtained by PSO-DE, WCA, and HPFA. The comparison of statistical results for HPFA with previous studies including WCA and PSO-DE is presented in Table 19. As shown in Table 19, HPFA obtained the best mean value in 10,000 function evaluations, which is superior to PSO-DE.

7.2. Speed Reducer Design Problem. In order to compare the performance of HPFA on the speed reducer design problem (see Appendix F), WCA [28], PSO-DE [30], and HEAA [39] were employed for comparison. Table 20 gives the best results obtained by HPFA, PSO-DE, WCA, and HEAA. The comparison of statistical results for HPFA with previous studies including WCA, PSO-DE, and HEAA is presented in Table 21. As shown in Table 21, HPFA obtained the best mean value in 11,000 function evaluations, which is superior to the other considered algorithms.

7.3. Pressure Vessel Design Problem. In order to compare the performance of HPFA on the pressure vessel design problem (see Appendix G), WCA [28], PSO [31], CPSO [31], and GA3 [40] were employed for comparison. Table 22 gives the best results obtained by WCA, HPFA, CPSO, and GA3. The comparison of statistical results for HPFA with previous studies including WCA, CPSO, and PSO is presented in Table 23. As shown in Table 23, HPFA obtained a better mean value than PSO in 25,000 function evaluations.

7.4. Tension/Compression Spring Design Problem. In order to compare the performance of HPFA on the tension/compression spring design problem (see Appendix H), WCA [28], CPSO [31], and GA3 [40] were employed for comparison. Table 24 gives the best results obtained by WCA, HPFA, CPSO, and GA3. The comparison of statistical results for HPFA with previous studies including WCA, CPSO, and GA3 is presented in Table 25. As shown in Table 25, HPFA obtained the best mean value in 22,000 function evaluations, which is superior to WCA, CPSO, and GA3.

7.5. Welded Beam Design Problem. In order to compare the performance of HPFA on the welded beam design problem (see Appendix I), WCA [28], CPSO [31], and GA3 [40] were employed for comparison. Table 26 gives the best results obtained by WCA, HPFA, CPSO, and GA3. The comparison of statistical results for HPFA with previous studies including WCA, CPSO, and GA3 is presented in Table 27. As shown in Table 27, HPFA obtained the best mean value in 22,000 function evaluations, which is superior to WCA, CPSO, and GA3.

8. Conclusion

The strategy of hybridizing heterogeneous biologically inspired algorithms can avoid the shortcomings of a single algorithm because it increases the exchange of information among individuals. This paper proposed a hybrid pathfinder algorithm (HPFA) in which the mutation operator of DE is introduced into PFA. To validate the performance of HPFA, extensive experiments on twenty-four unconstrained benchmark functions were carried out against PFA, CPSO, PSO, and DE. The numerical results show that HPFA has good optimizing ability on most benchmark functions and outperforms the original PFA and the other comparison algorithms. HPFA was then used for data clustering on real datasets selected from the UCI machine learning repository; the experimental results show that HPFA obtained better results than the other comparison algorithms on all four datasets.

HPFA was then employed to solve four constrained benchmark problems and five engineering design problems. The experimental results show that HPFA obtained better solutions than the other comparison algorithms with fewer function evaluations on most problems, which indicates that HPFA is an effective method for solving constrained problems. However, HPFA can still become trapped in local minima on a few functions, as seen from the benchmark results. Identifying the features of the functions on which HPFA performs poorly and improving the algorithm on them is future work.

Appendix

A. Constrained Problem 1

$$f(x) = (x_1 - 10)^2 + 5 (x_2 - 12)^2 + x_3^4 + 3 (x_4 - 11)^2 + 10 x_5^6 + 7 x_6^2 + x_7^4 - 4 x_6 x_7 - 10 x_6 - 8 x_7, \qquad (\mathrm{A.1})$$

subject to

[mathematical expression not reproducible]. (A.2)

B. Constrained Problem 2

$$f(x) = 5.3578547\, x_3^2 + 0.8356891\, x_1 x_5 + 37.293239\, x_1 - 40792.141, \qquad (\mathrm{B.1})$$

subject to

[mathematical expression not reproducible]. (B.2)

C. Constrained Problem 3

[mathematical expression not reproducible], (C.1)

subject to

$$h(x) = \sum_{i=1}^{n} x_i^2 = 1, \qquad 0 \le x_i \le 1, \quad i = 1, \ldots, n. \qquad (\mathrm{C.2})$$

D. Constrained Problem 4

$$f(x) = -\frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}, \qquad (\mathrm{D.1})$$

subject to

$$g(x) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0, \qquad 0 \le x_i \le 10,\ i = 1, 2, 3, \qquad p, q, r \in \{1, 2, \ldots, 9\}. \qquad (\mathrm{D.2})$$

E. Three-Bar Truss Design Problem

$$f(x) = \bigl(2\sqrt{2}\, x_1 + x_2\bigr) \cdot l, \qquad (\mathrm{E.1})$$

subject to

[mathematical expression not reproducible] (E.2)

F. Speed Reducer Design Problem

[mathematical expression not reproducible], (F.1)

subject to

[mathematical expression not reproducible], (F.2)

where

[mathematical expression not reproducible]. (F.3)

G. Pressure Vessel Design Problem

$$f(x) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3, \qquad (\mathrm{G.1})$$

subject to

$$\begin{aligned} g_1(x) &= -x_1 + 0.0193\, x_3 \le 0, \\ g_2(x) &= -x_2 + 0.00954\, x_3 \le 0, \\ g_3(x) &= -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0, \\ g_4(x) &= x_4 - 240 \le 0, \end{aligned} \qquad 0 \le x_i \le 100,\ i = 1, 2, \qquad 10 \le x_i \le 200,\ i = 3, 4. \qquad (\mathrm{G.2})$$
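As an illustration of how such a problem can be evaluated inside HPFA, the sketch below implements the pressure vessel objective (G.1) with the constraints (G.2) folded in through a simple static penalty. The penalty scheme and its weight are our assumptions; the paper does not state its constraint-handling method:

import numpy as np

def pressure_vessel(x, penalty=1e6):
    # Penalized pressure vessel objective, equations (G.1)-(G.2).
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
         + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                                  # g1
        -x2 + 0.00954 * x3,                                                 # g2
        -np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3 + 1296000.0,  # g3
        x4 - 240.0,                                                         # g4
    ]
    return f + penalty * sum(max(0.0, gi) for gi in g)  # penalize violated constraints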

H. Tension/Compression Spring Design Problem

$$f(x) = (x_3 + 2)\, x_2 x_1^2, \qquad (\mathrm{H.1})$$

subject to

[mathematical expression not reproducible]. (H.2)

I. Welded Beam Design Problem

$$f(x) = 1.10471\, x_1^2 x_2 + 0.04811\, x_3 x_4\, (14 + x_2), \qquad (\mathrm{I.1})$$

subject to

[mathematical expression not reproducible]. (I.2)

where

[mathematical expression not reproducible]. (I.3)

https://doi.org/10.1155/2020/5787642

Data Availability

Data for clustering in this study were taken from the UCI machine learning repository (http://archive.ics.uci.edu/ml/index.php). Data are provided freely for academic research purposes only.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Key R&D Program of China (2018YFB1700103).

References

[1] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Tech. Rep. TR06, Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey, 2005.

[2] D. Karaboga and B. Akay, "A comparative study of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 214, no. 1, pp. 108-132, 2009.

[3] C. Zhang, D. Ouyang, and J. Ning, "An artificial bee colony approach for clustering," Expert Systems with Applications, vol. 37, no. 7, pp. 4761-4767, 2010.

[4] H. Badem, A. Basturk, A. Caliskan, and M. E. Yuksel, "A new efficient training strategy for deep neural networks by hybridization of artificial bee colony and limited-memory BFGS optimization algorithms," Neurocomputing, vol. 266, pp. 506-526, 2017.

[5] X.-S. Yang, "Swarm intelligence based algorithms: a critical analysis," Evolutionary Intelligence, vol. 7, no. 1, pp. 17-28, 2014.

[6] X. Qi, Y. Zhu, and H. Zhang, "A new meta-heuristic butterfly-inspired algorithm," Journal of Computational Science, vol. 23, pp. 226-239, 2017.

[7] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.

[8] W. Xiang, S. Ma, and M. An, "hABCDE: a hybrid evolutionary algorithm based on artificial bee colony algorithm and differential evolution," Applied Mathematics and Computation, vol. 238, pp. 370-386, 2014.

[9] G. Wang and L. Guo, "A novel hybrid bat algorithm with harmony search for global numerical optimization," Journal of Applied Mathematics, vol. 2013, Article ID 696491, 21 pages, 2013.

[10] R.-E. Precup and R.-C. David, Nature-Inspired Optimization Algorithms for Fuzzy Controlled Servo Systems, Butterworth-Heinemann, Oxford, UK, 2019.

[11] T. O. Ting, X. S. Yang, S. Cheng, and K. Z. Huang, "Hybrid metaheuristic algorithms: past, present, and future," Recent Advances in Swarm Intelligence and Evolutionary Computation, pp. 71-83, Springer International Publishing, Cham, Switzerland, 2015.

[12] H. Yapici and N. Cetinkaya, "A new meta-heuristic optimizer: pathfinder algorithm," Applied Soft Computing, vol. 78, pp. 545-568, 2019.

[13] R. Storn and K. Price, "Differential evolution--a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.

[14] W. Y. Gong, Z. H. Cai, and C. X. Ling, "DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization," Soft Computing, vol. 15, no. 4, pp. 645-665, 2010.

[15] R. Mallipeddi, P. N. Suganthan, Q. K. Pan, and M. F. Tasgetiren, "Differential evolution algorithm with ensemble of parameters and mutation strategies," Applied Soft Computing, vol. 11, no. 2, pp. 1679-1696, 2011.

[16] G. Beruvides, R. Quiza, and R. E. Haber, "Multi-objective optimization based on an improved cross-entropy method. A case study of a micro-scale manufacturing process," Information Sciences, vol. 334-335, pp. 161-173, 2016.

[17] B. Niu, Y. Zhu, X. He, and H. Wu, "MCPSO: a multi-swarm cooperative particle swarm optimizer," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1050-1062, 2007.

[18] J. J. Liang, B. Y. Qu, P. N. Suganthan, and A. G. Hernandez-Diaz, "Problem definitions and evaluation criteria for the CEC 2013 special session and competition on real-parameter optimization," Technical Report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, 2012.

[19] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, 2006.

[20] R. Salomon, "Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms," Biosystems, vol. 39, no. 3, pp. 263-278, 1996.

[21] A. Tharwat and A. E. Hassanien, "Quantum-behaved particle swarm optimization for parameter optimization of support vector machine," Journal of Classification, vol. 36, no. 3, pp. 576-598, 2019.

[22] I. la Fe-Perdomo, G. Beruvides, R. Quiza, R. Haber, and M. Rivas, "Automatic selection of optimal parameters based on simple soft-computing methods: a case study of micro-milling processes," IEEE Transactions on Industrial Informatics, vol. 15, no. 2, pp. 800-811, 2019.

[23] M. Clerc and J. Kennedy, "The particle swarm--explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.

[24] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225-239, 2004.

[25] S. Garcia, A. Fernandez, J. Luengo, and F. Herrera, "Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power," Information Sciences, vol. 180, no. 10, pp. 2044-2064, 2010.

[26] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay, "Clustering large graphs via the singular value decomposition," Machine Learning, vol. 56, no. 1-3, pp. 9-33, 2004.

[27] UCI Repository of Machine Learning Databases, 2019, http://archive.ics.uci.edu/ml/index.php.

[28] H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, "Water cycle algorithm--a novel metaheuristic optimization method for solving constrained engineering optimization problems," Computers & Structures, vol. 110-111, pp. 151-166, 2012.

[29] K.-Z. Tang, T.-K. Sun, and J.-Y. Yang, "An improved genetic algorithm based on a novel selection strategy for nonlinear programming problems," Computers & Chemical Engineering, vol. 35, no. 4, pp. 615-621, 2011.

[30] H. Liu, Z. Cai, and Y. Wang, "Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization," Applied Soft Computing, vol. 10, no. 2, pp. 629-640, 2010.

[31] R. A. Krohling and L. dos Santos Coelho, "Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems," IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol. 36, no. 6, pp. 1407-1416, 2006.

[32] F.-Z. Huang, L. Wang, and Q. He, "An effective co-evolutionary differential evolution for constrained optimization," Applied Mathematics and Computation, vol. 186, no. 1, pp. 340-356, 2007.

[33] Z. Michalewicz, "Genetic algorithms, numerical optimization, and constraints," in Proceedings of the Sixth International Conference on Genetic Algorithms, Morgan Kaufmann, Pittsburgh, PA, USA, pp. 151-158, 1995.

[34] Q. He and L. Wang, "A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization," Applied Mathematics and Computation, vol. 186, no. 2, pp. 1407-1422, 2007.

[35] J. Lampinen, "A constraint handling approach for the differential evolution algorithm," in Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No. 02TH8600), vol. 2, pp. 1468-1473, Honolulu, HI, USA, May 2002.

[36] R. Becerra and C. A. C. Coello, "Cultured differential evolution for constrained optimization," Computer Methods in Applied Mechanics and Engineering, vol. 195, no. 33-36, pp. 4303-4322, 2006.

[37] A. E. M. Zavala, A. H. Aguirre, and E. R. V. Diharce, "Constrained optimization via evolutionary swarm optimization algorithm (PESO)," in Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, pp. 209-216, New York, NY, USA, 2005.

[38] R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems," Computer-Aided Design, vol. 43, no. 3, pp. 303-315, 2011.

[39] Y. Wang, Z. Cai, Y. Zhou, and Z. Fan, "Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique," Structural and Multidisciplinary Optimization, vol. 37, no. 4, pp. 395-413, 2009.

[40] C. A. C. Coello and E. M. Montes, "Constraint-handling in genetic algorithms through the use of dominance-based tournament selection," Advanced Engineering Informatics, vol. 16, no. 3, pp. 193-203, 2002.

Xiangbo Qi, (1) Zhonghu Yuan, (1) and Yan Song (2)

(1) School of Mechanical Engineering, Shenyang University, Shenyang 110044, China

(2) School of Physics, Liaoning University, Shenyang 110036, China

Correspondence should be addressed to Xiangbo Qi; ustcdragon@syu.edu.cn and Yan Song; profsongyan@126.com

Received 22 January 2020; Revised 21 April 2020; Accepted 12 May 2020; Published 29 May 2020

Academic Editor: Rodolfo E. Haber

Caption: Figure 1: The mean best function value profiles of HPFA, PFA, DE, PSO, and CPSO; panels (a)-(x) correspond to $f_1$-$f_{24}$ in order.

Caption: Figure 2: Computing time of all algorithms on different problems.

Caption: Figure 3: The mean minimum total within-cluster variance profiles of HPFA, PFA, CPSO, and PSO. (a) Glass data, (b) Wine data, (c) Iris data, and (d) LD data.
Table 1: Benchmark functions.

Name | Function | Limits
Sphere ($f_1$) | $f_1 = \sum_{i=1}^{D} x_i^2$ | $[-5.12, 5.12]^D$
Rosenbrock ($f_2$) | $f_2 = \sum_{i=1}^{D-1} \bigl[ 100 (x_i^2 - x_{i+1})^2 + (1 - x_i)^2 \bigr]$ | $[-15, 15]^D$
Quadric ($f_3$) | $f_3 = \sum_{i=1}^{D} \bigl( \sum_{j=1}^{i} x_j \bigr)^2$ | $[-10, 10]^D$
Sinproblem ($f_4$) | $f_4 = \frac{\pi}{D} \bigl\{ f_m + (x_D - 1)^2 \bigr\}$, with $f_m = 10 \sin^2(\pi x_1) + \sum_{i=1}^{D-1} (x_i - 1)^2 \bigl(1 + 10 \sin^2(\pi x_{i+1})\bigr)$ | $[-10, 10]^D$
Sumsquares ($f_5$) | $f_5 = \sum_{i=1}^{D} i\, x_i^2$ | $[-10, 10]^D$
Zakharov ($f_6$) | $f_6 = \sum_{i=1}^{D} x_i^2 + \bigl( \sum_{i=1}^{D} 0.5\, i\, x_i \bigr)^2 + \bigl( \sum_{i=1}^{D} 0.5\, i\, x_i \bigr)^4$ | $[-5, 10]^D$
Powers ($f_7$) | $f_7 = \sum_{i=1}^{D} |x_i|^{i+1}$ | $[-1, 1]^D$
Schwefel2.22 ($f_8$) | $f_8 = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | $[-10, 10]^D$
Rastrigin ($f_9$) | $f_9 = \sum_{i=1}^{D} \bigl( x_i^2 - 10 \cos(2 \pi x_i) + 10 \bigr)$ | $[-15, 15]^D$
Schwefel ($f_{10}$) | $f_{10} = 418.9829\, D - \sum_{i=1}^{D} x_i \sin\bigl( \sqrt{|x_i|} \bigr)$ | $[-500, 500]^D$
Ackley ($f_{11}$) | $f_{11} = 20 + e - 20 \exp\Bigl( -0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2} \Bigr) - \exp\Bigl( \tfrac{1}{D} \sum_{i=1}^{D} \cos(2 \pi x_i) \Bigr)$ | $[-32.768, 32.768]^D$
Griewank ($f_{12}$) | $f_{12} = \frac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\bigl( x_i / \sqrt{i} \bigr) + 1$ | $[-600, 600]^D$
Rot_rastrigin ($f_{13}$) | $f_{13} = f_9(y)$, $y = M x$ | $[-15, 15]^D$
Rot_schwefel ($f_{14}$) | $f_{14} = f_{10}(y)$, $y = M x$ | $[-500, 500]^D$
Rot_ackley ($f_{15}$) | $f_{15} = f_{11}(y)$, $y = M x$ | $[-32.768, 32.768]^D$
Rot_griewank ($f_{16}$) | $f_{16} = f_{12}(y)$, $y = M x$ | $[-600, 600]^D$
Composition Function 1 (n = 5, rotated) ($f_{17}$) | see [18] | $[-100, 100]^D$
Composition Function 2 (n = 3, unrotated) ($f_{18}$) | see [18] | $[-100, 100]^D$
Composition Function 3 (n = 3, rotated) ($f_{19}$) | see [18] | $[-100, 100]^D$
Composition Function 4 (n = 3, rotated) ($f_{20}$) | see [18] | $[-100, 100]^D$
Composition Function 5 (n = 3, rotated) ($f_{21}$) | see [18] | $[-100, 100]^D$
Composition Function 6 (n = 5, rotated) ($f_{22}$) | see [18] | $[-100, 100]^D$
Composition Function 7 (n = 5, rotated) ($f_{23}$) | see [18] | $[-100, 100]^D$
Composition Function 8 (n = 5, rotated) ($f_{24}$) | see [18] | $[-100, 100]^D$

Table 2: Results of HPFA on Sphere with different CR and F.

CR   | F    | Mean           | Std            | Best           | Worst
0.1  | 0.1  | 1.804934E - 41 | 4.306427E - 41 | 2.744826E - 45 | 1.913165E - 40
0.5  | 0.1  | 6.469129E - 52 | 9.396465E - 52 | 3.918891E - 54 | 3.663762E - 51
0.9# | 0.1# | 2.675215E - 56 | 7.790529E - 56 | 1.508425E - 58 | 3.537206E - 55
0.1  | 0.5  | 4.133484E - 37 | 9.188120E - 37 | 2.575105E - 39 | 3.202340E - 36
0.5  | 0.5  | 4.453667E - 39 | 7.464029E - 39 | 6.254230E - 42 | 2.303252E - 38
0.9  | 0.5  | 3.829022E - 38 | 6.050527E - 38 | 4.375602E - 41 | 2.285076E - 37
0.1  | 0.7  | 1.657355E - 35 | 3.454283E - 35 | 8.047479E - 39 | 1.298767E - 34
0.5  | 0.7  | 1.317298E - 32 | 4.014582E - 32 | 2.297207E - 36 | 1.744416E - 31
0.9  | 0.7  | 3.390563E - 31 | 1.259625E - 30 | 3.333444E - 34 | 5.672141E - 30

Note: # marks the best results (CR = 0.9, F = 0.1).

Table 3: Results of HPFA on Zakharov with different CR and F.

CR   | F    | Mean           | Std            | Best           | Worst
0.1  | 0.1  | 8.997799E - 05 | 1.962703E - 04 | 8.098559E - 06 | 8.982455E - 04
0.5  | 0.1  | 4.735121E - 06 | 6.379299E - 06 | 2.419542E - 08 | 2.150716E - 05
0.9# | 0.1# | 1.525978E - 07 | 2.914531E - 07 | 1.794753E - 09 | 1.294071E - 06
0.1  | 0.5  | 1.221220E - 04 | 1.087968E - 04 | 6.458847E - 06 | 3.542003E - 04
0.5  | 0.5  | 6.305502E - 04 | 6.049323E - 04 | 5.763775E - 05 | 2.276017E - 03
0.9  | 0.5  | 5.562315E - 04 | 4.954291E - 04 | 4.126934E - 05 | 1.797681E - 03
0.1  | 0.7  | 8.458769E - 04 | 8.987164E - 04 | 5.839103E - 05 | 3.582266E - 03
0.5  | 0.7  | 2.174830E - 03 | 2.124124E - 03 | 2.583813E - 04 | 7.311997E - 03
0.9  | 0.7  | 2.343529E - 03 | 3.139729E - 03 | 2.687660E - 04 | 1.161894E - 02

Note: # marks the best results (CR = 0.9, F = 0.1).

Table 4: Results of HPFA on Sumsquares with different CR and F.

CR   | F    | Mean           | Std            | Best           | Worst
0.1  | 0.1  | 2.730851E - 39 | 5.758742E - 39 | 2.924781E - 43 | 2.605756E - 38
0.5  | 0.1  | 2.770970E - 49 | 9.552035E - 49 | 1.926855E - 52 | 4.282311E - 48
0.9# | 0.1# | 3.816930E - 53 | 1.447266E - 52 | 1.874149E - 56 | 6.516979E - 52
0.1  | 0.5  | 5.870955E - 35 | 9.623223E - 35 | 2.036393E - 39 | 3.519882E - 34
0.5  | 0.5  | 7.252112E - 37 | 1.091354E - 36 | 6.302606E - 40 | 3.513496E - 36
0.9  | 0.5  | 1.835751E - 35 | 5.192652E - 35 | 6.229899E - 39 | 2.334590E - 34
0.1  | 0.7  | 3.693683E - 32 | 1.376861E - 31 | 7.899420E - 37 | 6.173252E - 31
0.5  | 0.7  | 1.028391E - 30 | 1.761916E - 30 | 8.740890E - 34 | 6.752688E - 30
0.9  | 0.7  | 6.655358E - 30 | 1.166885E - 29 | 1.417114E - 33 | 4.895754E - 29

Note: # marks the best results (CR = 0.9, F = 0.1).

Table 5: Results of HPFA on Quadric with different CR and F.

CR   | F    | Mean           | Std            | Best           | Worst
0.1  | 0.1  | 2.843327E - 06 | 3.049119E - 06 | 1.156593E - 07 | 1.152089E - 05
0.5  | 0.1  | 4.829416E - 07 | 7.158196E - 07 | 1.046789E - 08 | 3.167360E - 06
0.9# | 0.1# | 1.872879E - 08 | 2.807535E - 08 | 4.203582E - 10 | 1.287137E - 07
0.1  | 0.5  | 8.270814E - 06 | 8.651775E - 06 | 1.341972E - 06 | 3.780385E - 05
0.5  | 0.5  | 2.117334E - 05 | 3.313434E - 05 | 3.112510E - 07 | 1.115489E - 04
0.9  | 0.5  | 4.016124E - 06 | 6.668875E - 06 | 1.364465E - 07 | 3.043102E - 05
0.1  | 0.7  | 1.837195E - 05 | 3.350088E - 05 | 2.420329E - 07 | 1.536438E - 04
0.5  | 0.7  | 3.035135E - 05 | 4.375764E - 05 | 3.853599E - 07 | 1.721490E - 04
0.9  | 0.7  | 3.848983E - 05 | 4.027965E - 05 | 1.977449E - 06 | 1.598706E - 04

Note: # marks the best results (CR = 0.9, F = 0.1).

Table 6: Result comparison of different optimal algorithms with a
dimension of 30.

Function                   HPFA             PFA              CPSO

              Mean    2.87789E - 39    9.96648E - 35    1.03012E - 05
              Std     4.69565E - 39    2.46385E - 34    1.23861E - 05
[f.sub.1]     Best    6.43021E - 42    1.33831E - 38    2.30640E - 06
             Worst    2.06766E - 38    1.17598E - 33    6.50238E - 05
              Rank          1                2                4
              Mean    2.91086E + 01    1.80802E + 01    1.39096E + 01
              Std     2.07275E + 01    5.62894E + 00    2.53192E + 01
[f.sub.2]     Best    1.55315E + 01    3.11018E - 01    2.62037E - 01
             Worst    7.62099E + 01    2.68566E + 01     7.22219E +01
              Rank          3                2                1
              Mean    1.75728E - 02    6.04205E - 04    7.34680E + 01
              Std     1.15332E - 02    4.59306E - 04    3.63038E + 01
[f.sub.3]     Best    3.93048E - 03    9.02740E - 05    2.52930E + 01
             Worst    4.89242E - 02    1.80294E - 03    1.78108E + 02
              Rank          2                1                4
              Mean    1.59636E - 32    8.29261E - 02    5.94602E - 04
              Std     6.25035E - 34    1.73018E - 01    4.23066E - 04
[f.sub.4]     Best    1.57055E - 32    6.08824E - 32    1.81251E - 04
             Worst    1.82870E - 32    6.21900E - 01    1.61115E - 03
              Rank          1                4                3
              Mean    2.15329E - 36    3.79305E - 31    6.94271E - 04
              Std     6.51598E - 36    1.29899E - 30    4.38126E - 04
[f.sub.5]     Best    1.20023E - 38    3.34744E - 35    1.99516E - 04
             Worst    3.44935E - 35    6.72983E - 30    1.99485E - 03
              Rank          1                2                4
              Mean    1.16026E + 00    1.84604E - 01    3.04128E + 02
              Std     5.79662E - 01    1.98144E - 01    7.54896E + 01
[f.sub.6]     Best    1.65827E - 01    6.50787E - 03    1.79068E + 02
             Worst    2.50218E + 00    8.10651E - 01    4.89106E + 02
              Rank          2                1                5
              Mean    1.14655E - 11    3.49179E - 10    3.65726E - 09
              Std     2.19632E - 11    3.45822E - 10    6.05869E - 09
[f.sub.7]     Best    2.82583E - 18    3.13738E - 11    1.06366E - 12
             Worst    8.56564E - 11    1.35814E - 09    2.62745E - 08
              Rank          3                4                5
              Mean    2.81761E - 22    1.00686E - 17    2.11152E - 02
              Std     3.17278E - 22    2.17703E - 17    5.45981E - 03
[f.sub.8]     Best    1.44549E - 23    3.75280E - 19    1.30870E - 02
             Worst    1.34912E - 21    1.18525E - 16    3.44021E - 02
              Rank          1                2                4
              Mean    3.11760E + 00    7.85020E + 01    1.67718E - 02
              Std     1.80524E + 00    2.52468E + 01    8.68494E - 03
[f.sub.9]     Best    8.90905E - 05    2.98487E + 01    8.97848E - 03
             Worst    6.96471E + 00    1.51233E + 02    4.38635E - 02
              Rank          3                5                1
              Mean    1.30333E + 02    3.70867E + 03    4.19844E - 03
              Std     1.65718E + 02    7.73401E + 02    1.59645E - 03
[f.sub.10]    Best    3.81827E - 04    2.27312E + 03    1.74911E - 03
             Worst    5.92192E + 02    5.17192E + 03    8.19982E - 03
              Rank          3                4                1
              Mean    7.87518E - 15    3.66167E - 01    1.85383E - 02
              Std     6.48634E - 16    7.03922E - 01    4.72574E - 03
[f.sub.11]    Best    4.44089E - 15    7.99361E - 15    1.02247E - 02
             Worst    7.99361E - 15    2.40831E + 00    2.85422E - 02
              Rank          1                4                3
              Mean    7.39604E - 04    6.89544E - 03    2.54495E - 02
              Std     2.25674E - 03    7.89942E - 03    4.01058E - 02
[f.sub.12]    Best    0.00000E + 00    0.00000E + 00    2.25373E - 03
             Worst    7.39604E - 03    2.70370E - 02    1.59378E - 01
              Rank          2                3                4
              Mean    2.95051E + 01    8.19844E + 01    1.48734E + 02
              Std     1.17090E + 01    3.17715E + 01    3.99671E + 01
[f.sub.13]    Best    1.29345E + 01    4.07933E + 01    8.25970E + 01
             Worst    5.73477E + 01    1.90036E + 02    2.31836E + 02
              Rank          1                2                5
              Mean    2.34246E + 03    2.66915E + 03    4.34728E + 03
              Std     6.89972E + 02    7.51855E + 02    7.16422E + 02
[f.sub.14]    Best    1.08765E + 03    1.48280E + 03    3.13281E + 03
             Worst    3.75773E + 03    4.16221E + 03    5.66909E + 03
              Rank          1                2                4
              Mean    7.75676E - 15    1.08189E + 00    4.92103E + 00
              Std     9.01352E - 16    8.67751E - 01    6.37440E + 00
[f.sub.15]    Best    4.44089E - 15    7.99361E - 15    4.19255E - 02
             Worst    7.99361E - 15    2.57954E + 00    1.93777E + 01
              Rank          1                3                5
              Mean    1.31463E - 03    7.79994E - 03    5.19982E - 02
              Std     3.46976E - 03    8.35854E - 03    4.49126E - 02
[f.sub.16]    Best    0.00000E + 00    0.00000E + 00    5.47885E - 03
             Worst    1.23210E - 02    2.70517E - 02    1.76100E - 01
              Rank          1                2                5
              Mean    9.94354E + 02    9.97257E + 02    9.87240E + 02
              Std     6.45565E + 01    7.93940E + 01    8.18461E + 01
[f.sub.17]    Best    9.00000E + 02    9.00000E + 02    8.02234E + 02
             Worst    1.14354E + 03    1.14354E + 03    1.14355E + 03
              Rank          2                3                1
              Mean    2.16969E + 03    4.27549E + 03    9.23935E + 02
              Std     5.31124E + 02    6.06241E + 02    6.72922E + 01
[f.sub.18]    Best    1.52934E + 03    3.04609E + 03    8.00563E + 02
             Worst    3.71231E + 03    5.73560E + 03    1.03826E + 03
              Rank          3                4                1
              Mean    6.49831E + 03    5.86482E + 03    7.17359E + 03
              Std     8.70510E + 02    5.57598E + 02    9.53474E + 02
[f.sub.19]    Best    4.52076E + 03    4.60223E + 03    5.58118E + 03
             Worst    7.85971E + 03    6.85501E + 03    8.76360E + 03
              Rank          3                1                4
              Mean    1.24508E + 03    1.26066E + 03    1.31885E + 03
              Std     8.94977E + 00    1.29159E + 01    1.40407E + 01
[f.sub.20]    Best    1.22474E + 03    1.23950E + 03    1.28952E + 03
             Worst    1.26015E + 03    1.29628E + 03    1.35574E + 03
              Rank          1                2                5
              Mean    1.35640E + 03    1.37460E + 03    1.43436E + 03
              Std     7.04293E + 00    1.08266E + 01    1.53275E + 01
[f.sub.21]    Best    1.33796E + 03    1.35580E + 03    1.40858E + 03
             Worst    1.36691E + 03    1.39966E + 03    1.47166E + 03
              Rank          1                2                5
              Mean    1.47852E + 03    1.49776E + 03    1.55974E + 03
              Std     6.98737E + 01    7.59405E + 01    8.14332E + 01
[f.sub.22]    Best    1.40006E + 03    1.40007E + 03    1.40033E + 03
             Worst    1.54609E + 03    1.57282E + 03    1.61942E + 03
              Rank          2                3                5
              Mean    2.02871E + 03    2.17259E + 03    2.65148E + 03
              Std     7.62593E + 01    1.12942E + 02    1.02039E + 02
[f.sub.23]    Best    1.88030E + 03    1.97638E + 03    2.34650E + 03
             Worst    2.16629E + 03    2.41923E + 03    2.79679E + 03
              Rank          1                2                5
              Mean    1.70000E + 03    1.78188E + 03    3.65173E + 03
              Std     3.32458E - 13    3.11801E + 02    1.46943E + 03
[f.sub.24]    Best    1.70000E + 03    1.70000E + 03    1.70448E + 03
             Worst    1.70000E + 03    2.97117E + 03    7.92326E + 03
              Rank          1                3                5

Function                   PSO               DE

              Mean    1.10504E - 05    2.63694E - 09
              Std     7.87840E - 06    7.52717E - 10
[f.sub.1]     Best    1.11744E - 06    1.49876E - 09
             Worst    2.92489E - 05    4.95148E - 09
              Rank          5                3
              Mean    2.97779E + 01    4.07529E + 01
              Std     1.24243E + 01    1.34505E + 01
[f.sub.2]     Best    2.26400E + 01    2.89945E + 01
             Worst    9.48405E + 01    9.28212E + 01
              Rank          4                5
              Mean    3.49305E - 02    1.62253E + 02
              Std     2.55524E - 02    2.26821E + 01
[f.sub.3]     Best    8.92958E - 03    1.06934E + 02
             Worst    1.21832E - 01    1.98620E + 02
              Rank          3                5
              Mean    6.83859E - 01    4.51854E - 08
              Std     7.90339E - 01    1.41403E - 08
[f.sub.4]     Best    5.30289E - 04    2.13220E - 08
             Worst    2.64677E + 00    8.21182E - 08
              Rank          5                2
              Mean    2.70777E - 03    1.41395E - 07
              Std     1.78871E - 03    3.74697E - 08
[f.sub.5]     Best    4.14323E - 04    6.89551E - 08
             Worst    7.01576E - 03    2.30300E - 07
              Rank          5                3
              Mean    4.64371E + 01    1.49977E + 02
              Std     7.99384E + 01    2.08215E + 01
[f.sub.6]     Best    1.29523E - 02    1.18058E + 02
             Worst    3.09935E + 02    1.91642E + 02
              Rank          3                4
              Mean    1.26347E - 16    1.47712E - 35
              Std     2.33999E - 16    2.28754E - 35
[f.sub.7]     Best    6.12039E - 19    3.24107E - 37
             Worst    1.05378E - 15    1.04388E - 34
              Rank          2                1
              Mean    2.62070E - 01    9.04675E - 05
              Std     2.26377E - 01    1.35825E - 05
[f.sub.8]     Best    6.06860E - 02    6.44440E - 05
             Worst    1.12460E + 00    1.17674E - 04
              Rank          5                3
              Mean    6.84768E + 01    1.17019E - 01
              Std     1.56707E + 01    7.47997E - 02
[f.sub.9]     Best    3.18446E + 01    1.33573E - 02
             Worst    9.68780E + 01    3.11367E - 01
              Rank          4                2
              Mean    6.11047E + 03    7.89627E + 00
              Std     7.25163E + 02    3.00488E + 01
[f.sub.10]    Best    4.38253E + 03    3.82429E - 04
             Worst    7.32447E + 03    1.18439E + 02
              Rank          5                2
              Mean    2.10399E + 00    2.98635E - 04
              Std     4.45303E - 01    4.19896E - 05
[f.sub.11]    Best    1.15607E + 00    2.35379E - 04
             Worst    3.46435E + 00    4.22172E - 04
              Rank          5                2
              Mean    3.42781E - 02    1.79665E - 05
              Std     1.67595E - 02    1.46270E - 05
[f.sub.12]    Best    1.14047E - 02    4.39881E - 06
             Worst    9.13585E - 02    6.22646E - 05
              Rank          5                1
              Mean    8.36924E + 01    1.33984E + 02
              Std     1.83198E + 01    8.84993E + 00
[f.sub.13]    Best    5.27381E + 01    1.11286E + 02
             Worst    1.38309E + 02    1.47944E + 02
              Rank          3                4
              Mean    5.88264E + 03    3.56508E + 03
              Std     8.71985E + 02    3.36531E + 02
[f.sub.14]    Best    4.61162E + 03    2.74852E + 03
             Worst    7.97547E + 03    4.12042E + 03
              Rank          5                3
              Mean    2.63458E + 00    1.30715E - 01
              Std     6.36446E - 01    8.99981E - 02
[f.sub.15]    Best    1.34065E + 00    3.94788E - 02
             Worst    4.46577E + 00    4.41820E - 01
              Rank          4                2
              Mean    3.83314E - 02    1.03911E - 02
              Std     2.35712E - 02    6.99527E - 03
[f.sub.16]    Best    6.25370E - 03    1.97137E - 03
             Worst    1.15044E - 01    2.79564E - 02
              Rank          4                3
              Mean    9.98219E + 02    1.02307E + 03
              Std     8.71585E + 01    9.06629E + 01
[f.sub.17]    Best    8.02117E + 02    9.00224E + 02
             Worst    1.14355E + 03    1.15475E + 03
              Rank          4                5
              Mean    5.68722E + 03    1.75920E + 03
              Std     8.46528E + 02    2.19741E + 02
[f.sub.18]    Best    3.78428E + 03    1.38322E + 03
             Worst    7.46332E + 03    2.24557E + 03
              Rank          5                2
              Mean    6.28350E + 03    7.96975E + 03
              Std     7.93415E + 02    4.27488E + 02
[f.sub.19]    Best    4.29488E + 03    7.13518E + 03
             Worst    7.70667E + 03    8.76090E + 03
              Rank          2                5
              Mean    1.29748E + 03    1.28362E + 03
              Std     1.35512E + 01    4.72764E + 00
[f.sub.20]    Best    1.27569E + 03    1.27268E + 03
             Worst    1.31989E + 03    1.29134E + 03
              Rank          4                3
              Mean    1.42631E + 03    1.39210E + 03
              Std     1.47903E + 01    4.02294E + 00
[f.sub.21]    Best    1.40163E + 03    1.38252E + 03
             Worst    1.45863E + 03    1.39765E + 03
              Rank          4                3
              Mean    1.50353E + 03    1.40407E + 03
              Std     8.63327E + 01    9.84591E - 01
[f.sub.22]    Best    1.40002E + 03    1.40219E + 03
             Worst    1.58723E + 03    1.40652E + 03
              Rank          4                1
              Mean    2.39185E + 03    2.41610E + 03
              Std     1.10303E + 02    5.57424E + 01
[f.sub.23]    Best    2.16373E + 03    2.20448E + 03
             Worst    2.60090E + 03    2.49332E + 03
              Rank          3                4
              Mean    3.10618E + 03    1.70163E + 03
              Std     1.25346E + 03    3.11041E + 00
[f.sub.24]    Best    1.50128E + 03    1.70046E + 03
             Worst    4.85875E + 03    1.71777E + 03
              Rank          4                2

Table 7: Results of the Iman-Davenport test.

Dimension    Iman-Davenport    Critical value ([alpha] = 0.05)    Significant differences?

30               11.863                 2.45-2.52                           Yes
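The Iman-Davenport statistic in Table 7 is the usual correction of Friedman's chi-square computed over the per-function ranks (such as those listed in Table 6): F_ID = (N - 1) chi2_F / (N(k - 1) - chi2_F), with N = 24 functions and k = 5 algorithms. A minimal sketch of that computation, assuming `ranks` is an N x k matrix of ranks (names are illustrative):

import numpy as np

def iman_davenport(ranks):
    # Friedman chi-square from the average rank of each algorithm,
    # then the Iman-Davenport F statistic.
    N, k = ranks.shape
    mean_ranks = ranks.mean(axis=0)
    chi2 = 12 * N / (k * (k + 1)) * (np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)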

Table 8: Comparison (Holm's test) of HPFA with the remaining
algorithms.

Algorithm       z         p value      [alpha]/i    Significant differences?

PSO          5.1121    3.1863E - 7      0.0125               Yes
CPSO         4.3817    1.177E - 5       0.0167               Yes
DE           2.6473      0.00811        0.025                Yes
PFA          2.0083      0.0446          0.05                Yes
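Holm's step-down procedure sorts the p values and compares the i-th smallest against alpha/(k - i + 1), which is the [alpha]/i column in Table 8. A short sketch that reproduces those thresholds and decisions (p values copied from the table):

alpha = 0.05
pvals = {"PSO": 3.1863e-7, "CPSO": 1.177e-5, "DE": 0.00811, "PFA": 0.0446}
for j, (alg, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
    threshold = alpha / (len(pvals) - j)   # 0.0125, 0.0167, 0.025, 0.05
    print(alg, p, threshold, p < threshold)
# Strictly, Holm stops at the first non-rejection; here every hypothesis is rejected,
# so all four comparisons are significant, as the table reports.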

Table 9: Mean total within-cluster variances of HPFA, PFA, CPSO, and
PSO algorithms.

Datasets                HPFA         PFA          PSO          CPSO

             Mean      235.67       238.84       510.58      2438.13
Glass        Std       15.06        13.35        58.75        86.37
             Rank        1            2            3            4
             Mean     16293.27     16293.44     18345.01     17754.60
Wine         Std        0.86         0.83       1017.12      1011.00
             Rank        1            2            4            3
             Mean      97.64        97.69        159.33       161.88
Iris         Std        4.36         5.66        18.25        17.21
             Rank        1            2            3            4
             Mean     9851.84      9851.87      13760.02     15357.00
LD           Std        0.19         0.17       1457.21      1667.36
             Rank        1            2            3            4
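The fitness reported in Table 9 is the total within-cluster variance, commonly defined as the summed squared Euclidean distance from each data point to its nearest cluster centre, the centres being the decision variables evolved by each algorithm (the paper's exact normalization may differ). A minimal sketch of this objective, assuming a data matrix X of shape (n, dim) and a candidate centre matrix of shape (clusters, dim):

import numpy as np

def within_cluster_variance(X, centers):
    # squared distance from every point to every centre, shape (n, clusters)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # each point is charged the squared distance to its nearest centre
    return d2.min(axis=1).sum()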

Table 10: Comparison of the best solution obtained from the previous
algorithms for constrained problem 1.

DV                IGA              WCA              HPFA         Optimal solution

[x.sub.1]       2.330499         2.334238         2.330704         2.330499
[x.sub.2]       1.951372         1.950249         1.953073         1.951372
[x.sub.3]      -0.477541        -0.474707        -0.476937        -0.477541
[x.sub.4]       4.365726         4.366854         4.361683         4.365726
[x.sub.5]      -0.624487        -0.619911        -0.627155        -0.624487
[x.sub.6]       1.038131       1.030E + 00        1.037494         1.038131
[x.sub.7]       1.594227       1.595E + 00        1.593695         1.594227
g1(x)        4.46000E - 05    1.00000E - 13    1.00000E - 04    4.46000E - 05
g2(x)         -252.561723       252.569346       -252.5623       252.561723
g3(x)         -144.878190       144.897817       -144.8705       -144.87819
g4(x)        7.63000E - 06      2.2E - 12         -0.0011       7.63000E - 06
f(x)           680.630060       680.631178       680.631176       680.630057

Table 11: Comparison of statistical results obtained from various
algorithms for constrained problem 1.

Methods     Mean            SD            Best        Worst       FEs

HPFA      680.6338    1.83561E - 03     680.6312    680.6376    100000
WCA       680.6443    1.140000E - 02    680.6311    680.6738    110050
PSO       680.9710    5.100000E - 01    680.6345    684.5289    140100
CPSO-GD   680.7810    1.484000E - 01    680.6780    681.3710      NA
CDE       681.5030          NA          680.7710    685.1440    248000

Table 12: Comparison of the best solution obtained from the previous
algorithms for constrained problem 2.

DV                  GA1                WCA               HPFA         Optimal solution

[x.sub.1]        78.049500          78.000000         78.000000         78.000000
[x.sub.2]        33.007000          33.000000         33.000000         33.000000
[x.sub.3]        27.081000          29.995256         29.995256         29.995260
[x.sub.4]        45.000000          45.000000         45.000000         45.000000
[x.sub.5]        44.940000          36.775812         36.775813         36.775810
g1(x)           1.284E + 00       -1.960E - 12            0          -9.7100E - 04
g2(x)            -93.283813        -91.999999        -92.000000        -92.000000
g3(x)            -9.592143         -11.159499        -11.159500        -11.100000
g4(x)            -10.407856        -8.840500         -8.840500         -8.870000
g5(x)            -4.998088         -5.000000         -5.000000         -5.000000
g6(x)        1.910000000E - 03         0                  0            9.27E - 09
f(x)           -31020.859000     -30665.538600     -30665.538600     -30665.539000

Table 13: Comparison of statistical results obtained from various
algorithms for constrained problem 2.

Methods        Mean              SD              Best           Worst        FEs

HPFA       -30665.5380    6.365971E - 04    -30665.5386    -30665.5354     15000
WCA        -30665.5270    2.180000E - 02    -30665.5386    -30665.4570     18850
PSO        -30570.9286    8.100000E + 01    -30663.8563    -30252.3258     70100
HPSO       -30665.5390    1.700000E - 06    -30665.5390    -30665.5390     81000
PSO-DE     -30665.5387    8.300000E - 10    -30665.5387    -30665.5387     70100
DE         -30665.5360    5.067000E - 03    -30665.5390    -30665.5090    240000

Table 14: Comparison of the best solution obtained from the previous
algorithms for constrained problem 3.

DV                 CULDE             WCA          HPFA        Optimal
                                                             solution

[x.sub.1]         0.304887        0.316011      0.316209     0.316227
[x.sub.2]         0.329917        0.316409      0.315936     0.316227
[x.sub.3]         0.319260        0.315392      0.315973     0.316227
[x.sub.4]         0.328069        0.315872      0.316330     0.316227
[x.sub.5]         0.326023        0.316570      0.316688     0.316227
[x.sub.6]         0.302707        0.316209      0.315952     0.316227
[x.sub.7]         0.305104        0.316137      0.315959     0.316227
[x.sub.8]         0.315312        0.316723      0.315921     0.316227
[x.sub.9]         0.322047        0.316924      0.316752     0.316227
[x.sub.10]        0.309009        0.316022      0.316555     0.316227
h(x)         9.910000000E - 04        0       -6.86E - 07        0
f(x)             -0.995413        -0.999981    -0.999989     -1.000000

Table 15: Comparison of statistical results obtained from various
algorithms for constrained problem 3.

Methods     Mean            SD           Best        Worst       FEs

HPFA      -0.999956   1.827886E - 05   -0.999989   -0.999918    100000
WCA       -0.999806   1.910000E - 04   -0.999981   -0.999171    103900
PSO       -1.004879   1.000000E + 00   -1.004986   -1.004269    140100
PSO-DE    -1.005010   3.800000E - 12   -1.005010   -1.005010    140100
DE        -1.025200         NA         -1.025200   -1.025200   8000000

Table 16: Comparison of the best solution obtained from the previous
algorithms for constrained problem 4.

DV                  WCA              HPFA        Optimal
                                                 solution

[x.sub.1]        5.000000          5.000000      5.000000
[x.sub.2]        5.000000          5.000000      5.000000
[x.sub.3]        5.000000          5.000000      5.000000
g1(x)            47.937496        47.937501     47.937500
g2(x)            26.937497        26.937501     26.937500
g3(x)            11.937498        11.937501     11.937500
g4(x)            2.937499          2.937500      2.937500
g5(x)            -0.062500        -0.062500     -0.062500
g6(x)            2.937501          2.937500      2.937500
g7(x)            11.937502        11.937499     11.937500
g8(x)            26.937503        26.937499     26.937500
g9(x)            47.937504        47.937499     47.937500
f(x)          -9.999990E - 01     -1.0000000    -1.000000

Table 17: Comparison of statistical results obtained from various
algorithms for constrained problem 4.

Methods     Mean            SD           Best        Worst       FEs

HPFA      -1.000000   7.358561E - 15   -1.000000   -1.000000    5000
WCA       -0.999999   2.510000E - 07   -0.999999   -0.999998    6100
HPSO      -1.000000   1.600000E - 15   -1.000000   -1.000000    81000
PESO      -0.998875         NA         -1.000000   -0.994000   350000
TLBO      -1.000000   0.000000E + 00   -1.000000   -1.000000    50000

Table 18: Comparison of the best solution obtained from the previous
algorithms for the three-bar truss design problem.

DV                PSO-DE            WCA            HPFA

[x.sub.1]        0.788675        0.788651        0.788674
[x.sub.2]        0.408248        0.408316        0.408251
g1(x)        -5.290000E - 11         0        3.981300E - 11
g2(x)           -1.463747        -1.464024      -1.464098
g3(x)           -0.536252        -0.535975      -0.535902
f(x)            263.895843      263.895843      263.895843

Table 19: Comparison of statistical results obtained from various
algorithms for the three-bar truss design problem.

Methods        Mean              SD              Best           Worst       FEs

HPFA        263.895942     2.300924E - 04     263.895843     263.896833    10000
WCA         263.895903     8.710000E - 05     263.895843     263.896201     5250
PSO-DE      263.895843     4.500000E - 10     263.895843     263.895843    17600
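For reference, the three-bar truss problem of Tables 18 and 19 is usually stated as minimizing the structure's volume f(x) = (2*sqrt(2)*x1 + x2)*l subject to three stress constraints, with l = 100 cm and P = sigma = 2 kN/cm^2. A sketch of the evaluation under that standard formulation, which reproduces f ~ 263.8958 and the g values of the HPFA column in Table 18:

import math

def three_bar_truss(x1, x2, l=100.0, P=2.0, sigma=2.0):
    # a solution is feasible when every g <= 0
    f = (2 * math.sqrt(2) * x1 + x2) * l
    denom = math.sqrt(2) * x1 ** 2 + 2 * x1 * x2
    g1 = (math.sqrt(2) * x1 + x2) / denom * P - sigma
    g2 = x2 / denom * P - sigma
    g3 = P / (math.sqrt(2) * x2 + x1) - sigma
    return f, (g1, g2, g3)

print(three_bar_truss(0.788674, 0.408251))   # HPFA solution from Table 18
# -> (~263.895843, (~0.0, ~-1.464098, ~-0.535902))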

Table 20: Comparison of the best solution obtained from the previous
algorithms for the speed reducer problem.

DV               WCA            HPFA          PSO-DE          HEAA

[x.sub.1]      3.500000       3.500000       3.500000       3.500022
[x.sub.2]      0.700000       0.700000       0.700000       0.700000
[x.sub.3]     17.000000      17.000001      17.000000      17.000012
[x.sub.4]      7.300000       7.300002       7.300000       7.300427
[x.sub.5]      7.715319       7.715321       7.800000       7.715377
[x.sub.6]      3.350214       3.350215       3.350214       3.350230
[x.sub.7]      5.286654       5.286655       5.286683       5.286663
f(x)         2994.471066    2994.471705    2996.348167    2994.499107

Table 21: Comparison of statistical results obtained from various
algorithms for the speed reducer problem.

Methods        Mean               SD               Best           Worst        FEs

HPFA        2994.473059     1.005309E - 03     2994.471705    2994.475855    11000
WCA         2994.474392     7.400000E - 03     2994.471066    2994.505578    15150
PSO-DE      2996.348174     6.400000E - 06     2996.348167    2996.348204    54350
HEAA        2994.613368     7.000000E - 02     2994.499107    2994.752311    40000

Table 22: Comparison of the best solution obtained from the previous
algorithms for the pressure vessel problem.

DV                  WCA                HPFA              CPSO               GA3

[x.sub.1]        0.778100           0.778547          0.812500           0.812500
[x.sub.2]        0.384600           0.384828          0.437500           0.437500
[x.sub.3]        40.319600         40.338244          42.091300          42.097400
[x.sub.4]       -200.000000        199.759935        176.746500         176.654000
g1(x)         -2.950000E - 11     -1.89E - 05     -1.370000E - 06    -2.010000E - 03
g2(x)         -7.150000E - 11     -1.15E - 06     -3.590000E - 04    -3.580000E - 02
g3(x)         -1.350000E - 06      -97.382360       -118.768700        -24.759300
g4(x)           -40.000000         -40.240060        -63.253500         -63.346000
f(x)            5885.332700       5886.495946       6061.077700        6059.946300

Table 23: Comparison of statistical results obtained from various
algorithms for the pressure vessel problem.

Methods        Mean               SD               Best           Worst         FEs

HPFA        6321.480545     3.565060E + 02     5886.495946    7106.967827     25000
WCA         6198.617200     2.130490E + 02     5885.332700    6590.212900     27500
CPSO        6147.133200     8.645000E + 01     6061.077700    6363.804100    240000
PSO         8756.680300     1.492567E + 03     6693.721200   14076.324000      8000

Table 24: Comparison of the best solution obtained from the previous
algorithms for the tension compression spring problem.

DV                  WCA                HPFA                CPSO               GA3

[x.sub.1]        0.051680            0.051536            0.051728           0.051989
[x.sub.2]        0.356522            0.353035            0.357644           0.363965
[x.sub.3]        11.300410           11.508944           11.244543          10.890522
g1(x)         -1.650000E - 13     -1.23442E - 05      -8.250000E - 04    -1.260000E - 03
g2(x)         -7.900000E - 14     -3.65699E - 05      -2.520000E - 05    -2.540000E - 05
g3(x)            -4.053399           -4.046190           -4.051306          -4.061337
g4(x)            -0.727864           -0.730286           -0.727085          -0.722697
f(x)             0.012665            0.012667            0.012674           0.012681

Table 25: Comparison of statistical results obtained from various
algorithms for the tension compression spring problem.

Methods     Mean            SD            Best        Worst       FEs

HPFA      0.012727    7.707958E - 05    0.012667    0.013018     22000
WCA       0.012746    8.060000E - 05    0.012665    0.012952     11750
CPSO      0.012730    5.200000E - 04    0.012674    0.012924    240000
GA3       0.012742    5.900000E - 05    0.012681    0.012973     80000

Table 26: Comparison of the best solution obtained from the previous
algorithms for the welded beam problem.

DV               WCA          HPFA          CPSO           GA3

[x.sub.1]     0.205728      0.205730      0.202369      0.205986
[x.sub.2]     3.470522      3.470490      3.544214      3.471328
[x.sub.3]     9.036620      9.036623      9.048210      9.020224
[x.sub.4]     0.205729      0.205730      0.205723      0.206480
g1(x)         -0.034128     -0.004198    -13.655547     -0.103049
g2(x)         -0.000035     -0.004067    -78.814077     -0.231747
g3(x)         -0.000001     0.000000      -0.003500     -0.000050
g4(x)         -3.432980     -3.432983     -3.424572     -3.430044
g5(x)         -0.080728     -0.080730     -0.077369     -0.080986
g6(x)         -0.235540     -0.235540     -0.235595     -0.235514
g7(x)         -0.013503     -0.003946     -4.472858    -58.646888
f(x)          1.724856      1.724853      1.728024      1.728226

Table 27: Comparison of statistical results obtained from various
algorithms for the welded beam problem.

Methods     Mean            SD             Best       Worst      FEs

HPFA      1.724889    1.027784E - 04     1.724853   1.725354    22000
WCA       1.726427    4.290000E - 03     1.724856   1.744697    46450
CPSO      1.748831    1.290000E - 02     1.728024   1.782143    240000
GA3       1.792654    7.470000E - 02     1.728226   1.993408    80000