# A Novel Selection Approach for Genetic Algorithms for Global Optimization of Multimodal Continuous Functions

1. Introduction

The basic idea of genetic algorithms (GAs) originated with John Holland in the 1960s and was further developed in his book "Adaptation in Natural and Artificial Systems," published in 1975 [1]. GAs are efficient procedures for understanding and solving problems about which only limited information is available. These algorithms can effectively handle both unconstrained and constrained optimization problems by mimicking the process of natural selection in biological evolution. The working mechanism of GAs is linked with a search space that contains all possible solutions. Each point in the search space represents one candidate solution, and each candidate solution is assigned a fitness value; a set of such solutions is called a population. A set of good solutions is carried over to the next generation, while weak solutions die out, following the "survival of the fittest" principle of Darwin's theory of evolution [2].

There are two significant points in the GA process: one is the initialization of a starting point in the search space, and the other is the definition of the fitness function [3]. A GA starts with the initialization of a population of potential solutions to the problem. This initialization is represented by chromosomes (individuals), which are sets of genes, with each gene encoding a feature of the problem. Each chromosome has its own fitness value depending on the objective function, so it is very important to define a tractable objective function.

Unlike other statistical techniques, a GA works with a set of solutions rather than with the decision variables themselves [4-6]. After creating the solutions, which are represented by chromosomes, each chromosome is evaluated for its fitness using the fitness function. Chromosomes with the fittest values survive into the next generation. The fitness function depends on the objective of the problem statement; in most cases, the fitness is set equal to the objective function value. If the problem is to minimize the cost of some product, then the optimization goal is to find the lowest fitness value [7]. Specifying the fitness function is one of the crucial steps in a GA because it determines which chromosomes survive into the next generation and which are eliminated from the population.

During the GA process, feasible solutions in the search space cannot be obtained without reproduction and recombination. The reproduction phase of a GA is initiated by the selection of better individuals, which produce new offspring for the next generation in the hope that the next generation will be improved. The core idea of the selection procedure is to enhance the quality of solutions by giving preference to the most suitable individuals while avoiding bad ones. By recombining the current population's solutions, the new population will hopefully contain better solutions while avoiding loss of genetic material. Furthermore, to make the process more robust, some of the features in the solutions are mutated with a small probability. The purpose of crossover and mutation is to generate a better population than the old one [2, 3].

As Figure 1 shows, a GA is a stochastic heuristic search procedure that requires setting problem-based parameters and making decisions about the following:

(i) Generate initial population

(ii) The process of parents' selection for reproduction of offspring

(iii) Crossover and mutation of individuals

(iv) Predefined stopping criteria

The function of elitism is to make sure that good, strong chromosomes are carried into the next generation by storing them outside the current population. Elitism helps preserve the best solution through the crossover and mutation operations [8, 9]. It can be applied in many ways; one way is to combine parents and children into a new population in which all individuals compete to survive into the next generation. The use of elitism can help the algorithm converge to a globally optimal solution [10].

To converge to the global optimum and avoid local stagnation, a systematic tradeoff mechanism between exploration and exploitation is essential. Most stochastic heuristic search algorithms try to balance two contradictory measures of their performance: exploration (population diversity) and exploitation (selection pressure). Exploration is the capability of an algorithm to search every region of the possible search space, while exploitation is the drive to converge to the optimum solution as quickly as possible. A suitable adjustment between exploration and exploitation increases the performance of the GA. In this paper, we address this problem with the help of the proposed selection procedure. The aim of the selection procedure is to exploit the suitable features of fit individuals to produce improved solutions, which guides the GA toward convergence to a feasible solution of the optimization problem [2]. GAs are widely used in various fields of human endeavor, including machine learning [11], scheduling [12], signal processing [13], energy [14], robotics [15], manufacturing [16], mathematics [17], routing [18], and many more.

The rest of the paper is organized as follows: Section 2 briefly discusses some conventional selection schemes. Section 3 presents the mathematical derivation of the proposed selection scheme. Section 4 describes the benchmark functions, while Section 5 reports and discusses simulation results on these well-known benchmarks along with the evaluation tools. Conclusions are presented in the last section of the paper.

2. Review of Genetic Algorithm Selection Process

There are no specific criteria or theoretical justification for choosing an appropriate selection scheme for a given problem. This is a concern because applying an inappropriate selection technique to numerical data can degrade the performance of the GA and the reliability of its results. In this section, we review the reproduction process of individuals and compare the shortcomings and advantages of different selection schemes. Numerous GA selection techniques exist in the literature and are used in comparative performance evaluation studies: roulette wheel selection/fitness proportional selection (RWS), linear rank selection (LRS), tournament selection (TS), stochastic remainder selection (SRS), etc.

Roulette wheel selection is another name for fitness proportional selection. In this technique, each solution occupies an area of the wheel proportional to its selection probability: higher probabilities receive larger areas and lower probabilities smaller ones. A circular wheel with a fixed pointer on its border is used for choosing individuals [2]. The first individual is selected when its sector of the wheel stops in front of the fixed pointer; the second individual is selected through the same procedure, and this is repeated until the last individual has been selected. Clearly, the individual with the highest fitness value acquires the largest portion of the wheel and has the highest chance of arriving in front of the pointer when the wheel is spun. Therefore, the probability $p_i$ of selecting an individual is directly proportional to its fitness value [19]:

$$p_i = \frac{f_i}{\sum_{j=1}^{W} f_j}, \quad i \in \{1, 2, \ldots, W\}, \tag{1}$$

where $f_i$ is the fitness value of the $i$th individual and $W$ denotes the population size.
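For illustration, fitness-proportional selection can be sketched in a few lines of Python (a minimal sketch; the fitness values below are hypothetical):

```python
import random

def roulette_wheel_select(fitness, rng=random):
    """Select one index with probability proportional to fitness, as in equation (1)."""
    total = sum(fitness)
    spin = rng.uniform(0, total)   # spin the wheel: a point on [0, total)
    cumulative = 0.0
    for i, f in enumerate(fitness):
        cumulative += f
        if spin < cumulative:      # the pointer landed in sector i
            return i
    return len(fitness) - 1        # guard against floating-point round-off

fitness = [1.0, 2.0, 3.0, 4.0]                 # hypothetical fitness values
probs = [f / sum(fitness) for f in fitness]    # p_i of equation (1)
```

Fitter individuals occupy larger sectors, so index 3 above is selected four times as often as index 0 on average.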

RWS is a biased selection scheme because the chance of a small sector being selected is very low [2]. The scheme nevertheless has the advantage that weaker solutions retain a small chance of being selected and may survive into the next generation [20, 21].

The literature offers other selection techniques to overcome the above shortcomings. LRS is one of the most popular; it handles the premature convergence issue better than RWS. This scheme is based on a rank-based selection procedure, which gives weaker individuals a better opportunity through uniform scaling. Chromosomes are selected with probability $p_i$ linearly proportional to the rank of the chromosome:

$$p_i = \frac{1}{W}\left(\phi^{-} + \left(\phi^{+} - \phi^{-}\right)\frac{i - 1}{W - 1}\right), \quad i \in \{1, 2, \ldots, W\}, \tag{2}$$

where $i$ is the rank of the individual according to its fitness value and $W$ is the population size. Furthermore, $\phi^{+}$ and $\phi^{-}$ are parameters representing the selection bias toward the best- and worst-ranked individuals, respectively. For equation (2), the constraints are $\phi^{+} = 2 - \phi^{-}$ and $\phi^{-} \ge 0$. The limitation of this scheme is slower convergence to the optimal solution, because the difference between the best-fitted chromosome and the other chromosomes is not significant owing to the closeness of their selection probabilities. Still, LRS is attractive because of its standardized scaling procedure and is useful for overcoming premature convergence [22].
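Equation (2) can be checked numerically with a short Python sketch (the value $\phi^{-} = 0.5$ is an illustrative choice, not taken from the paper):

```python
def linear_rank_probs(W, phi_minus=0.5):
    """Selection probabilities of equation (2); rank i = 1 is the worst
    individual and i = W the best. phi_plus = 2 - phi_minus by constraint."""
    phi_plus = 2.0 - phi_minus
    return [(phi_minus + (phi_plus - phi_minus) * (i - 1) / (W - 1)) / W
            for i in range(1, W + 1)]

p = linear_rank_probs(10)
```

The probabilities grow linearly with rank and always sum to one, since $(\phi^{-} + \phi^{+})/2 = 1$.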

TS is an extensively used selection technique in GAs and is applicable to most applied research problems. It can be implemented efficiently and is amenable to parallelization [21]. In its simplest form, TS randomly selects two individuals and holds a competition to decide which chromosome wins and enters the mating pool. The selection probability $p_i$ of the individual of rank $i$ in a tournament of size $t$ is given by

$$p_i = \frac{1}{W^{t}}\left((W - i + 1)^{t} - (W - i)^{t}\right), \quad i \in \{1, 2, \ldots, W\}, \tag{3}$$

where $W$ is the population size and $t$ is the tournament size. For the binary tournament, $t = 2$; for larger tournaments, $t > 2$. The tournament size provides a convenient means of adjusting the selection pressure, and TS can be extended to involve more than two individuals if desired [22].
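The rank-wise selection probabilities of equation (3) telescope to one, which a short Python sketch can verify (here rank i = 1 denotes the best individual, which receives the largest probability under the formula):

```python
def tournament_probs(W, t=2):
    """Probability that the individual of rank i (1 = best) wins one
    tournament of size t, as in equation (3)."""
    return [((W - i + 1) ** t - (W - i) ** t) / W ** t for i in range(1, W + 1)]

p = tournament_probs(10, t=2)   # binary tournament over 10 individuals
```

Increasing t sharpens the distribution toward the best ranks, which is how the tournament size adjusts selection pressure.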

The basic idea of the SRS technique is based on deterministic sampling [20]. Each chromosome (individual) in the population has a selection probability based on its relative fitness value. SRS removes or copies strings based on the values of their reproduction counts, which are computed for each string. First, the probability of selection $p_i$ is

$$p_i = \frac{f_i}{f_{\text{avg}}}, \quad i \in \{1, 2, \ldots, W\}, \tag{4}$$

where $f_i$ is the fitness value of the $i$th individual and $f_{\text{avg}}$ is the average fitness. Hence, the expected number of copies of individual $i$ in a mating pool of population size $W$ is

$$e_i = p_i \times W. \tag{5}$$

The integer portion of $e_i$ determines the number of copies of an individual chosen deterministically, and RWS or a coin flip deals with the remaining fractional portion to fill the rest of the mating pool. For example, if $e_i = 3.8$, as described in Figure 2, three copies of the chromosome are placed directly in the mating pool because of the integer portion, and the remaining parents are chosen stochastically from the fractional portions.

There are two methods to deal with the remainder portion of $e_i$: SRS with replacement and SRS without replacement. In SRS with replacement, the remainder portions are used to size the sectors of an RWS process, so the resulting probability is proportional to the fractional portion of the scaled value. This mechanism provides the maximum opportunity for the best-fitted individuals of the population to be selected. In SRS without replacement, a coin flip weighted by the fractional portion of the scaled value determines whether the individual receives another copy or not.
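The two-stage procedure can be sketched in Python. This is a minimal sketch of SRS without replacement; following the usual convention, the expected count is taken directly as $e_i = f_i / f_{\text{avg}}$ (which equals $p_i \times W$ when $p_i = f_i / \sum_j f_j$):

```python
import random

def stochastic_remainder_select(fitness, rng=random):
    """Fill a mating pool by stochastic remainder selection without replacement:
    int(e_i) deterministic copies per individual, plus one extra copy with
    probability equal to the fractional part of e_i."""
    W = len(fitness)
    f_avg = sum(fitness) / W
    pool = []
    for i, f in enumerate(fitness):
        e = f / f_avg                     # expected number of copies
        pool.extend([i] * int(e))         # deterministic integer portion
        if rng.random() < e - int(e):     # coin flip for fractional portion
            pool.append(i)
    return pool
```

For fitness values [1, 2, 3, 4], the expected counts are [0.4, 0.8, 1.2, 1.6], so individuals 2 and 3 are each guaranteed at least one copy.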

3. Proposed Selection Scheme

3.1. Defining Problem. In the above context, most of the operators lean toward one extreme, i.e., exploitation or exploration. Therefore, to reach the optimal solution, it is beneficial to adjust the selection pressure so that population diversity is maintained during the selection process. More concretely, consider RWS and LRS, which are the two extremes in the selection of individuals [22]. LRS mainly focuses on maintaining population diversity (more technically, exploration) at the cost of selection pressure, resulting in delayed convergence, while RWS emphasizes selection pressure (exploitation) with the shortcoming of premature convergence.

3.2. Proposed Scheme (Proportionate Selection). To overcome the shortcomings of conventional selection schemes, we propose a balanced selection approach with a suitable tradeoff between exploitation and exploration, which decreases the effect of selection pressure and ensures some genetic diversity within the population. In other words, it is a fine adjustment between selection pressure and loss of population diversity.

The newly proposed selection scheme improves the search through a proportionate probabilistic approach. Assigning probabilistic weights to individuals introduces greater diversity into the population, thus offering better solutions at a sustainable convergence speed, and creates a sustainable adjustment between exploitation and exploration. We call the modified selection scheme stairwise selection (SWS). Its objective is to overcome the disadvantages of other selection schemes by giving weak individuals a comparatively better opportunity, thereby maintaining population diversity. The mechanism is designed so that the resulting generation has a limited chance of deterioration.

The working of SWS proceeds by assigning ranks to all individuals, from worst to best, according to their fitness values. The ranked population of size $W$ is

$$1, 2, 3, \ldots, \frac{W}{2}, \ldots, W. \tag{6}$$

First, we divide the whole ranked population into five equal portions:

$$\underbrace{1, \ldots, \frac{W}{5}}_{\text{group 1}},\quad \underbrace{\frac{W}{5} + 1, \ldots, \frac{2W}{5}}_{\text{group 2}},\quad \ldots,\quad \underbrace{\frac{4W}{5} + 1, \ldots, W}_{\text{group 5}}. \tag{7}$$

Hence, the selection probability of each individual $i$ is given by the following function:

$$p_i = \begin{cases}
q_1 \dfrac{50 i}{W(W + 5)}, & 1 \le i \le \dfrac{W}{5}, \\[4pt]
q_2 \dfrac{50 i}{W(3W + 5)}, & \dfrac{W}{5} < i \le \dfrac{2W}{5}, \\[4pt]
q_3 \dfrac{10 i}{W(W + 1)}, & \dfrac{2W}{5} < i \le \dfrac{3W}{5}, \\[4pt]
q_4 \dfrac{50 i}{W(7W + 5)}, & \dfrac{3W}{5} < i \le \dfrac{4W}{5}, \\[4pt]
q_5 \dfrac{50 i}{W(9W + 5)}, & \dfrac{4W}{5} < i \le W,
\end{cases} \tag{8}$$

where $q_1 + q_2 + q_3 + q_4 + q_5 = 1$, and the suitable probability weights are given in

[mathematical expression not reproducible]. (9)

The pseudocode of SWS is given in Algorithm 1.
```Algorithm 1: The pseudocode of the stairwise selection scheme.

Generate a population of W individuals
Sort the individuals in ascending order of fitness
Assign ranks i = 1, ..., W (worst to best)
for i ← 1 to W do
    if 1 ≤ i ≤ W/5 then
        p(i) ← q1 · (50i / (W(W + 5)))
    else if W/5 < i ≤ 2W/5 then
        p(i) ← q2 · (50i / (W(3W + 5)))
    else if 2W/5 < i ≤ 3W/5 then
        p(i) ← q3 · (10i / (W(W + 1)))
    else if 3W/5 < i ≤ 4W/5 then
        p(i) ← q4 · (50i / (W(7W + 5)))
    else
        p(i) ← q5 · (50i / (W(9W + 5)))    // where q1 + q2 + q3 + q4 + q5 = 1
    end if
end for
```
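For concreteness, the probability assignment of Algorithm 1 can be written as a short Python sketch. Because the published weight values of equation (9) are not reproduced here, equal weights q_g = 0.2 are used as a placeholder, and W is assumed to be divisible by five:

```python
def stairwise_probs(W, q=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """Stairwise selection probabilities of equation (8); rank i = 1 is the
    worst individual and i = W the best. Within each fifth of the ranked
    population, the probabilities sum exactly to the group weight q_g."""
    assert W % 5 == 0 and abs(sum(q) - 1.0) < 1e-12
    p = []
    for i in range(1, W + 1):
        if i <= W // 5:
            p.append(q[0] * 50 * i / (W * (W + 5)))
        elif i <= 2 * W // 5:
            p.append(q[1] * 50 * i / (W * (3 * W + 5)))
        elif i <= 3 * W // 5:
            p.append(q[2] * 10 * i / (W * (W + 1)))
        elif i <= 4 * W // 5:
            p.append(q[3] * 50 * i / (W * (7 * W + 5)))
        else:
            p.append(q[4] * 50 * i / (W * (9 * W + 5)))
    return p

p = stairwise_probs(100)
```

Each normalizing constant makes the ranks of one group sum to its weight, so the whole distribution sums to one regardless of the choice of q.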

The performance of a GA is usually examined through the optimum value and the number of generations required to reach it. For visual understanding and close comparison of the different selection schemes, we considered a population of ten individuals. Figure 3(a) shows that individuals "1" to "3" have a limited chance of being selected because of their small portions of the roulette wheel, in contrast to "7" to "10" with their larger portions. Hence, the distribution of individuals in RWS increases selection pressure and reduces population diversity. Conversely, the distribution of LRS delays convergence because of its uniform scaling. Figure 3(c) shows that TS gives more weight to individuals "1" to "3" than RWS does, which means that TS manages selection pressure and population diversity to some extent. The newly proposed SWS has better control over the two extremes of selection pressure and population diversity: individuals "1" to "3" have a sufficient chance of being selected while "7" to "10" retain adequate representation, giving an adequate balance between exploitation and exploration.

For a more realistic visual comparison, we considered a population of one hundred individuals. Figure 4 shows that the curve of SWS lies between those of the conventional selection schemes, which suggests that the novel scheme has better control over selection pressure and is more beneficial for maintaining population diversity. In other words, it offers a good tradeoff between exploration and exploitation.

3.3. The Sampling Methodology. An efficient sampling procedure is required to select individuals for the mating process through a two-step selection mechanism. The sampling procedure fills the mating pool with copies of individuals from the given population while respecting the selection probabilities $p_i$, so that the observed and expected numbers of individuals agree. Among the widely used sampling procedures, we use the roulette wheel sampling technique (or Monte Carlo sampling) for evaluating the efficiency of the newly proposed SWS operator.

3.3.1. Chi-Square Goodness-of-Fit Measure. The $\chi^2$ statistic is used to measure the difference between the observed and expected numbers of offspring. This measure was first introduced by Schell and Wegenkittl [23] for average accuracy. There are $k$ mutually exclusive classes; $\epsilon_j$ denotes the cumulative expectation and $O_j$ the observed (actual) number of copies of individuals in the mating pool after the sampling process. Preferably, $\epsilon_j$ should be of order $W/k$ for $1 \le j \le k$, so that each class contains, on average, an equal number of individuals; at least 10 classes should be used to attain the required accuracy. Schell and Wegenkittl [23] suggested the Chi-square test as a measure to evaluate the efficiency of the sampling procedure as follows:

$$\chi = \sum_{j=1}^{k} \frac{\left(\epsilon_j - O_j\right)^2}{\epsilon_j}. \tag{10}$$

In the roulette wheel sampling scenario, under the abovementioned constraint $\epsilon_j \ge 10$, $\chi$ should follow a Chi-square distribution with $k - 1$ degrees of freedom. This distribution is the asymptotic distribution of $\chi$ for multinomially distributed $O_j$ as $W \to \infty$. In the present study, the fixed parameters are the population size $W = 100$, the number of classes $k = 10$, and the total number of tests $s = 100$.

The results in Table 1 reveal the probability distribution of SWS along with the corresponding cumulative expectations, which are close to $W/k = 100/10 = 10$. We write $\chi^{SW,R}$ for the resulting values of $\chi$, where $SW$ denotes the proposed operator that assigns selection probabilities to the individuals and $R$ denotes the sampling algorithm. The test estimates the expectation and variance of this statistic: a population with predefined individuals is generated randomly, the distribution $SW$ assigns selection probabilities, and the sampling procedure $R$ is applied to obtain instances of $O_j$ and $\chi^{SW,R}$, respectively. The sample mean and variance are then obtained from the sequence $(\chi_w^{SW,R})$ with $1 \le w \le s$ as follows:

$$\bar{\chi} = \frac{1}{s} \sum_{w=1}^{s} \chi_w^{SW,R}, \qquad \hat{\sigma}^2 = \frac{1}{s - 1} \sum_{w=1}^{s} \left(\chi_w^{SW,R} - \bar{\chi}\right)^2. \tag{11}$$

For evaluation, these estimates are compared with the theoretical $\chi^2_{k-1}$ distribution at the 99% confidence level. The mean and variance of the $\chi^2$ distribution are $k - 1 = 9$ and $2(k - 1) = 18$ for 10 classes, and the corresponding sample estimates are 9.1025 and 19.8583, respectively. These values are very close, indicating good agreement between the probabilities assigned to individuals and the numbers of copies entering the mating pool. The simulated results thus authenticate the overall performance of the sampling procedure with respect to the probability distribution of SWS: the empirical distribution produced by roulette wheel sampling does not differ significantly from the theoretical $\chi^2_{k-1}$ distribution.
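The evaluation loop can be sketched in Python. This is a minimal illustration of equation (10) that uses a uniform selection distribution rather than the SWS probabilities of the paper; `rng.choices` performs the roulette wheel (Monte Carlo) sampling:

```python
import random

def chi_square_statistic(expected, observed):
    """Chi-square measure of equation (10) over k classes."""
    return sum((e - o) ** 2 / e for e, o in zip(expected, observed))

def sample_counts(probs, W, k, rng):
    """Draw W individuals by roulette wheel sampling and tally them into
    k equal rank classes (len(probs) must be divisible by k)."""
    n = len(probs)
    counts = [0] * k
    for _ in range(W):
        i = rng.choices(range(n), weights=probs)[0]
        counts[i * k // n] += 1
    return counts

rng = random.Random(0)
W, k, s = 100, 10, 100
probs = [1.0 / W] * W                      # illustrative uniform distribution
expected = [W / k] * k                     # 10 expected copies per class
chis = [chi_square_statistic(expected, sample_counts(probs, W, k, rng))
        for _ in range(s)]
mean_chi = sum(chis) / s                   # should be near k - 1 = 9
```

Replacing `probs` with the SWS probabilities of equation (8) reproduces the evaluation described above for the proposed operator.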

4. Benchmark Functions

There is no rule of thumb for choosing an appropriate optimization function with which to evaluate the performance of a GA. The difficulty of a problem depends on its nature: the variation rate of the objective function, the number of local optima, etc. [24]. A multimodal function has at least two local optima, and an efficient search procedure must be capable of escaping the region around a local optimum in its search for the global optimum. The scenario becomes more complex when the local optima are randomly distributed in the search space.

The dimensionality of the search space is another significant factor that makes a problem more complicated. A comprehensive study of the dimensionality problem and its characteristics was carried out by Friedman [25]. During the search process, the global optimum needs to be approached efficiently, so the areas close to local minima must be avoided as much as possible. If the local optima are randomly distributed in the search area, the problem is considered among the most difficult: the optimization process may get stuck at a local optimum and mistake it for the global optimum. To evaluate the performance and robustness of the proposed selection operator, we used ten unimodal, multimodal, separable or nonseparable, convex, and continuous benchmark functions. Table 2 lists the benchmark functions [16, 26-42] used to appraise the efficiency of the evaluated evolutionary methods, giving each function's name, limits, properties, and fitness function. These benchmark functions have varying complexities and are commonly applied in comparative studies.

5. Computational Results and Discussions

5.1. Experimental Setup. In this section, we focus on the experimental results of four conventional GA selection schemes and the proposed one. The overall efficiency of these selection schemes can be influenced by the fixed parameters and other experimental conditions; hence, suitable values must be chosen for fixed parameters such as the population size, crossover and mutation probabilities, number of generations, and scaling function. Table 3 shows the values of the fixed parameters used for the optimization problems. The performance of the selection schemes is evaluated on ten benchmark functions using MATLAB version R2015a. The simulated results are reported in terms of mean and standard deviation (S.D). An independent t-test is also executed to examine the significance of differences between selection schemes. The P value along with the mean and S.D of thirty runs is reported in the subsequent tables. The sign "*" indicates a significant difference from the proposed technique and "a" a significant difference from the reference technique.

5.2. Experimental Results. In this experimental study, the parameter values for the GA were obtained through screening experimentation and trial runs. Each algorithm was executed thirty times, and the mean value and standard deviation are taken as the final results. All experiments terminate when the number of generations reaches the maximum number of generations.

The basic objective of this study is to compare the conventional selection schemes with the proposed one on the benchmark functions in terms of the optimal solution found. The overall statistical results in Table 4 clearly show that SWS obtained the lowest mean values and low S.D compared with the other selection techniques from 10 to 100 dimensions, although the difference between SWS and TS is nonsignificant on some benchmark functions. For the Axis Parallel Hyper Ellipsoid function, when the dimension increases from 10 to 100, the average rate of change lies between 706 and 3052 because of the function's complexity; the minimum average rate of change, 706, is achieved by SWS and the maximum, 3052, by RWS. The p values of the t-tests decrease further as the dimension of the experiment increases, tending toward significance. For the Colville function, SWS is the best-performing selection technique, with a mean value of 1.39 at 10 dimensions and highly significant differences; when the dimension increases to 100, the optimum value rises to 5940, so the average rate of change is very high owing to the function's complexity. According to Table 4, the results for the Ellipsoidal family function reveal that the proposed selection scheme (SWS) is again the best-performing approach, with a minimum mean value of 0.0000 at lower dimensions, although at higher dimensions the average rate of change is 187286, which is on the high side. For Rosenbrock, another unimodal function, the statistical results of SWS are close to the theoretical optimum, which means that the proposed selection technique efficiently handles complex problems at higher dimensions. The average rate of change for the Schaffer function is considerably low, showing that SWS performs efficiently at higher dimensions; its optimum value ranges from 4.14 to 45.61 for 10-100 dimensions.

According to the results for the Beale function in Table 5, the optimum value is obtained by TS. Moreover, SWS differs significantly from LRS but not from RWS, TS, or SRS at lower dimensions. As the dimension increases, the p value decreases from 0.9807 to 0.0000, and the average rate of change of TS is 583, which is closer to the theoretical optimum than that of the other selection techniques, including SWS.

SWS also achieves the minimum average rate on the Bohachevsky function, i.e., 98; furthermore, its average rate of change, 84, is the lowest among all schemes from low to high dimensions. The results for the Bohachevsky benchmark function in Table 5 reveal that SWS distinctly outperforms all other selection schemes in terms of the smallest empirical values. Moreover, as the dimension of the experiment increases, SWS differs significantly at higher dimensions while showing a nonsignificant difference at lower dimensions with TS and RS.

According to the results in Table 5, SWS is considerably close to the theoretical optimum for the Drop-wave and Egg-holder benchmark functions, but the average rate of change for the Egg-holder function grows much faster with dimension than for the Drop-wave function. Hence, SWS efficiently handles selection pressure and improves population diversity at larger dimensions owing to its minimal average rate of change. For the Schwefel multimodal function, the empirical value ranges from -2898 to -11872 from low to high dimensions, which is quite far from the theoretical optimum because of the function's complexity. Overall, the statistical results for the multimodal functions show that SWS outperforms the other selection techniques with highly significant differences.

The above discussion demonstrates the substantial effectiveness of the newly proposed selection technique over the standard GA techniques. The SWS selection technique ensures a broader, more comprehensive search and avoids premature convergence on both unimodal and multimodal benchmark functions. It efficiently handles the problem of selection pressure and extends diversity by widening the scope of the search process, while reducing the likelihood of poor solutions at both higher and lower dimensions. In addition, the proportionate selection strategy ensures that the best solutions are always carried forward to the next generation. In effect, SWS enhances the exploration of future generations and reduces the chance of premature convergence at local minima.

5.3. Overall Performance. The empirical results of the conventional selection schemes (RWS, TS, LRS, and SRS) and the proposed SWS are evaluated on ten benchmark functions. The statistical results in Table 6 reveal that SWS outperforms the others on almost all benchmark functions in terms of robustness, stability, and effectiveness of the solutions.

TS is the second-best selection scheme because its optimum values are considerably close to those of SWS, and the difference between the two is sometimes nonsignificant. SWS is equally efficient for unimodal and multimodal functions, although the average rate of change is comparatively high for multimodal functions. Furthermore, SWS continues to perform efficiently as the dimension of the experiment increases from 10 to 100 and establishes a suitable adjustment between exploitation and exploration. The results in Table 6 confirm that SWS has a firm grip on controlling selection pressure and population diversity.

5.4. Performance Index (PI). After descriptively evaluating the performance of the stairwise selection operator against the others, our next goal is to compare the GA selection schemes using the relative performance index (PI) defined by Bharti [43]. This performance index was originally used to analyze the behavior of some controlled stochastic search techniques and is a widely used mechanism for comparing population-based heuristic algorithms [44, 45]. The PI can be written as follows:

$$PI = \frac{1}{W_p} \sum_{i=1}^{W_p} \left( \theta_1 \alpha_1^i + \theta_2 \alpha_2^i + \theta_3 \alpha_3^i \right), \tag{12}$$

where

$$\alpha_1^i = \frac{LM^i}{M^i}, \qquad \alpha_2^i = \frac{LS^i}{S^i}, \qquad \alpha_3^i = \frac{LMAE^i}{MAE^i}, \tag{13}$$

with $M^i$ the mean value of the objective function for the $i$th optimization problem; $LM^i$ the least mean value obtained by any algorithm for the $i$th problem; $S^i$ the standard deviation of the objective function for the $i$th problem; $LS^i$ the least standard deviation obtained by any algorithm for the $i$th problem; $MAE^i$ the mean absolute error of the objective function for the $i$th problem; $LMAE^i$ the least mean absolute error obtained by any algorithm for the $i$th problem; and $W_p$ the total number of problems analyzed.

θ1, θ2, and θ3 (with θ1 + θ2 + θ3 = 1 and 0 < θ1, θ2, θ3 < 1) are the weights assigned to the three statistics, respectively.

In the context of the above definition, PI is a function of θ1, θ2, and θ3. Since θ1 + θ2 + θ3 = 1, one of the θi, i = 1, 2, 3, could be eliminated to reduce the number of independent variables in equation (12). However, it is still difficult to examine the behavior of all the GA selection techniques graphically because their PI surface plots overlap. So, in the subsequent section, we adopt a modified mechanism that assigns equal weights to two of the three terms in equation (12), so that PI becomes a function of a single variable. The resulting cases are given below:

\[
\begin{aligned}
&\text{Case 1: } \theta_1 = w,\ \theta_2 = \theta_3 = \frac{1-w}{2}, \\
&\text{Case 2: } \theta_2 = w,\ \theta_1 = \theta_3 = \frac{1-w}{2}, \\
&\text{Case 3: } \theta_3 = w,\ \theta_1 = \theta_2 = \frac{1-w}{2},
\end{aligned}
\qquad 0 \le w \le 1. \quad (14)
\]

The graphical representations for cases 1-3 in Figures 5-7 plot the weight (wt) on the horizontal axis and the performance index (PI) on the vertical axis. In Figures 5 and 7, the PI curve of the proposed SWS lies above those of the other selection schemes, showing a substantial enhancement towards perfection, while in Figure 6, SWS shows considerable improvement in PI at lower weights. More specifically, the graphical representation of PI endorses the improved performance of SWS.
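As a concrete reading of the PI definition, the sketch below ratios each algorithm's per-problem mean, standard deviation, and mean absolute error against the least values attained by any algorithm, then averages the weighted ratios over the W_p problems. The function name and the two-algorithm, two-problem statistics are illustrative placeholders, not values from the tables.

```python
def performance_index(stats, thetas=(1/3, 1/3, 1/3)):
    """PI per algorithm: (1/W_p) * sum_i (t1*LM_i/M_i + t2*LS_i/S_i + t3*LMAE_i/MAE_i).

    stats maps algorithm name -> (means, sds, maes), each a length-W_p sequence
    of positive per-problem statistics. The least values LM, LS, LMAE are taken
    over all algorithms, so each ratio lies in (0, 1] and a higher PI is better.
    """
    t1, t2, t3 = thetas
    assert abs(t1 + t2 + t3 - 1.0) < 1e-9, "weights must sum to 1"
    names = list(stats)
    W = len(next(iter(stats.values()))[0])
    # Least statistics per problem, over all algorithms.
    LM = [min(stats[n][0][i] for n in names) for i in range(W)]
    LS = [min(stats[n][1][i] for n in names) for i in range(W)]
    LE = [min(stats[n][2][i] for n in names) for i in range(W)]
    pi = {}
    for n in names:
        M, S, E = stats[n]
        total = sum(t1 * LM[i] / M[i] + t2 * LS[i] / S[i] + t3 * LE[i] / E[i]
                    for i in range(W))
        pi[n] = total / W
    return pi

# Two hypothetical algorithms evaluated on W_p = 2 problems.
pi = performance_index({
    "SWS": ([1.0, 2.0], [0.1, 0.2], [0.5, 0.5]),
    "RWS": ([2.0, 4.0], [0.2, 0.4], [1.0, 1.0]),
})
```

An algorithm that attains the least mean, least standard deviation, and least MAE on every problem scores PI = 1, which is why a PI curve lying nearer to 1 across the weight axis indicates the better scheme.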

6. Conclusions

In the current study, we focused on the relative performance of various selection techniques in obtaining the optimal solution for given test problems. A set of selection techniques including roulette wheel selection (RWS), linear rank selection (LRS), tournament selection (TS), stochastic remainder selection (SRS), and stairwise selection (SWS) was considered, and their performance was evaluated on ten well-known benchmark functions with 10 to 100 dimensions. These benchmark functions cover various characteristics, including convex, separable, nonseparable, unimodal, and multimodal. Additionally, the results of the chi-square goodness of fit test show improvement for the proposed selection technique, with an insignificant difference between the expected and actual numbers of offspring. The statistical results of this study also show that the proposed selection technique (SWS) performed best in nine out of ten benchmark functions because of its proportionate selection methodology. Furthermore, the simulated results reveal that the performance of SWS improves significantly on both unimodal and multimodal benchmark functions, and SWS remains efficient as the dimension of the experiments increases. The variability of the results reveals that the proposed scheme has better control over selection pressure and the loss of population diversity; hence, SWS finds a suitable adjustment between exploitation and exploration owing to its split-rank ideology. According to the results, TS is the second best selection technique after SWS, and sometimes the difference between the two is insignificant. Finally, the numerical outcomes of the proposed technique are very close to the theoretical optimum values, which, together with the performance index (PI), is evidence of a best-performing selection technique.
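The chi-square check referred to above can be reproduced from the overall expectations ε_j of Table 1. The sketch below computes the goodness-of-fit statistic; the observed offspring counts are hypothetical placeholders (the actual counts are not listed in this excerpt), while the expected values and the 5% critical value for 9 degrees of freedom (16.919) are standard.

```python
# Expected offspring counts per class j for SWS (epsilon_j from Table 1).
expected = [9.8196, 10.1803, 10.0198, 9.9802, 10.3723,
            10.4255, 10.5833, 10.1519, 8.9917, 9.4751]
# Hypothetical observed counts for one run (sum = population size 100).
observed = [10, 10, 10, 10, 10, 11, 11, 10, 9, 9]

# Chi-square goodness-of-fit statistic: sum over classes of (O - E)^2 / E.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 10 classes there are df = 9 degrees of freedom; the 5% critical value
# is 16.919, so a statistic below it means the difference between expected
# and observed offspring counts is statistically insignificant.
insignificant = chi2 < 16.919
```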

https://doi.org/10.1155/2019/8640218

Data Availability

The data used to support the findings of this manuscript are taken from https://www.sfu.ca/ssurjano/optimization.html.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are very grateful to the Deanship of Scientific Research at King Khalid University, Abha, Saudi Arabia, for the financial support through the General Research Program under project number GRP-32-41.

References

[1] J. H. Holland, Adaptation in Natural and Artificial Systems: an Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, London, UK, 1992.

[2] A. M. Aibinu, H. Bello Salau, N. A. Rahman, M. N. Nwohu, and C. M. Akachukwu, "A novel clustering based genetic algorithm for route optimization," Engineering Science and Technology, an International Journal, vol. 19, no. 4, pp. 2022-20-4, 2016.

[3] S. N. Sivanandam and S. N. Deepa, "Genetic algorithms," in Introduction to Genetic Algorithms, pp. 15-37, Springer, Berlin, Germany, 2008.

[4] T. Deepa and M. Punithavalli, "An analysis for mining imbalanced datasets," International Journal of Computer Science and Information Security, vol. 8, no. 1, pp. 132-1-7, 2010.

[5] A. E. Eiben and S. K. Smit, "Parameter tuning for configuring and analyzing evolutionary algorithms," Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 19-31, 2011.

[6] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, London, UK, 1998.

[7] T. Back, F. Hoffmeister, and H. P. Schwefel, "Extended selection mechanisms in genetic algorithms," in Proceedings of the Fourth International Conference on Genetic Algorithms, R. Belew and L. B. Booker, Eds., pp. 92-99, Morgan Kaufmann Publishers, San Mateo, CA, USA, July 1991.

[8] R. K. Bhattacharjya, Introduction to Genetic Algorithms, Vol. 12, IIT Guwahati, Guwahati, Assam, 2012.

[9] W. K. Mashwani, A. Salhi, M. A. Jan, R. A. Khanum, and M. Sulaiman, "Impact analysis of crossovers in a multi-objective evolutionary algorithm," Science International, vol. 27, no. 6, pp. 4943-4956, 2015.

[10] G. Rudolph, "Convergence analysis of canonical genetic algorithms," IEEE Transactions on Neural Networks, vol. 5, no. 1, pp. 96-101, 1994.

[11] J. Sachdeva, V. Kumar, I. Gupta, N. Khandelwal, and C. K. Ahuja, "Multiclass brain tumor classification using GASVM," in Proceedings of the 2011 Developments in E-Systems Engineering, pp. 182-187, IEEE, Dubai, UAE, December 2011.

[12] C. Cheng, Z. Yang, L. Xing, and Y. Tan, "An improved genetic algorithm with local search for order acceptance and scheduling problems," in Proceedings of the 2013 IEEE Symposium on Computational Intelligence in Production and Logistics Systems (CIPLS), pp. 115-122, IEEE, Singapore, April 2013.

[13] D. Jiang and Z. Fan, "The algorithm for algorithms: an evolutionary algorithm based on automatic designing of genetic operators," Mathematical Problems in Engineering, vol. 2015, Article ID 474805, 15 pages, 2015.

[14] M. Fayaz, H. Shah, A. Aseere, W. Mashwani, and A. Shah, "A framework for prediction of household energy consumption using feed forward back propagation neural network," Technologies, vol. 7, no. 2, p. 30, 2019.

[15] Q. Yang, M. Yu, S. Liu, and Z. M. Chai, "Path planning of robotic fish based on genetic algorithm and modified dynamic programming," in Proceedings of the 2011 International Conference on Advanced Mechatronic Systems, pp. 419-424, IEEE, Zhengzhou, China, August 2011.

[16] F. Yuan, C. Li, X. Gao, M. Yin, and Y. Wang, "A novel hybrid algorithm for minimum total dominating set problem," Mathematics, vol. 7, no. 3, p. 222, 2019.

[17] H. H. Fu, Z. J. Li, G. W. Li, X. T. Jin, and P. H. Zhu, "Modeling and controlling of engineering ship based on genetic algorithm," in Proceedings of the 2012 International Conference on Modelling, Identification and Control, pp. 394-398, Wuhan, Hubei, China, IEEE, June 2012.

[18] X. Jing, Y. Liu, and W. Cao, "A hybrid genetic algorithm for route optimization in multimodal transport," in Proceedings of the 2012 Fifth International Symposium on Computational Intelligence and Design, vol. 1, pp. 261-264, IEEE, Hangzhou, China, October 2012.

[19] D. Beasley, D. R. Bull, and R. R. Martin, "An overview of genetic algorithms: part 1, fundamentals," University Computing, vol. 15, no. 2, pp. 56-69, 1993.

[20] R. Sivaraj and T. Ravichandran, "A review of selection methods in genetic algorithm," International Journal of Engineering Science and Technology, vol. 3, 2011.

[21] R. Khanum, M. Jan, N. Tairan et al., "Global evolution commended by localized search for unconstrained single objective optimization," Processes, vol. 7, no. 6, p. 362, 2019.

[22] M. Mitchell, An Introduction to Genetic Algorithm, Prentice Hall of India, Delhi, India, 1996.

[23] T. Schell and S. Wegenkittl, "Looking beyond selection probabilities: adaptation of the χ² measure for the performance analysis of selection methods in GAs," Evolutionary Computation, vol. 9, no. 2, pp. 243-256, 2001.

[24] M. Srinivas and L. M. Patnaik, "Adaptive probabilities of crossover and mutation in genetic algorithms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 24, no. 4, pp. 656-667, 1994.

[25] J. H. Friedman, "An overview of predictive learning and function approximation," in From Statistics to Neural Networks, pp. 1-61, Springer, Berlin, Germany, 1994.

[26] A. Hussain, Y. S. Muhammad, and M. N. Sajid, "An efficient genetic algorithm for numerical function optimization with two new crossover operators," International Journal of Mathematical Sciences and Computing, vol. 4, no. 4, pp. 41-55, 2018.

[27] X. S. Yang and A. Hossein Gandomi, "Bat algorithm: a novel approach for global engineering optimization," Engineering Computations, vol. 29, no. 5, pp. 464-483, 2012.

[28] W. K. Mashwani and A. Salhi, "A decomposition-based hybrid multiobjective evolutionary algorithm with dynamic resource allocation," Applied Soft Computing, vol. 12, no. 9, pp. 2765-2780, 2012.

[29] X. S. Yang, Z. Cui, R. Xiao, A. H. Gandomi, and M. Karamanoglu, Swarm Intelligence and Bio-Inspired Computation: Theory and Applications, Elsevier, Amsterdam, Netherlands, 2013.

[30] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[31] W. K. Mashwani and A. Salhi, "Multiobjective memetic algorithm based on decomposition," Applied Soft Computing, vol. 21, pp. 221-243, 2014.

[32] J. Parapar, M. M. Vidal, and J. Santos, "Finding the best parameter setting: Particle Swarm Optimization," in Proceedings of the 2nd Spanish Conference on Information Retrieval, pp. 49-60, Barcelona, Spain, October 2012.

[33] T. Stutzle, M. Lopez-Ibanez, P. Pellegrini et al., "Parameter adaptation in ant colony optimization," in Autonomous Search, pp. 191-215, Springer, Berlin, Germany, 2011.

[34] M. Jamil and X. S. Yang, "A literature survey of benchmark functions for global optimization problems," 2013, https://arxiv.org/abs/1308.4008.

[35] P. Civicioglu and E. Besdok, "A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms," Artificial Intelligence Review, vol. 39, no. 4, pp. 315-346, 2013.

[36] W. K. Mashwani, A. Salhi, O. Yeniay, H. Hussian, and M. A. Jan, "Hybrid non-dominated sorting genetic algorithm with adaptive operators selection," Applied Soft Computing, vol. 56, pp. 1-18, 2017.

[37] R. V. Rao and V. Patel, "An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems," Scientia Iranica, vol. 20, no. 3, pp. 710-720, 2013.

[38] S. Ghosh, S. Das, D. Kundu, K. Suresh, and A. Abraham, "Inter-particle communication and search-dynamics of lbest particle swarm optimizers: an analysis," Information Sciences, vol. 182, no. 1, pp. 156-168, 2012.

[39] W. K. Mashwani, A. Salhi, O. Yeniay, M. A. Jan, and R. A. Khanum, "Hybrid adaptive evolutionary algorithm based on decomposition," Applied Soft Computing, vol. 57, pp. 363-378, 2017.

[40] T. Liao, M. A. Montes de Oca, D. Aydin, T. Stutzle, and M. Dorigo, "An incremental ant colony algorithm with local search for continuous optimization," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation--GECCO'11, pp. 125-132, July 2011.

[41] M. Sulaiman, A. Salhi, A. Khan, S. Muhammad, and W. Khan, "On the theoretical analysis of the plant propagation algorithms," Mathematical Problems in Engineering, vol. 2018, Article ID 6357935, 8 pages, 2018.

[42] T. Ma, Q. Yan, W. Xuan, and B. Wang, "A comparative study of quantum evolutionary algorithm and particle swarm optimization for numerical optimization problems," International Journal of Digital Content Technology and Its Applications, vol. 5, no. 7, pp. 182-190, 2011.

[43] Bharti, "Controlled random search techniques and their applications," Ph.D. thesis, Department of Mathematics, University of Roorkee, Roorkee, India, 1994.

[44] C. Mohan and H. T. Nguyen, "A controlled random search technique incorporating the simulated annealing concept for solving integer and mixed integer global optimization problems," Computational Optimization and Applications, vol. 14, no. 1, pp. 103-132, 1999.

[45] M. Thakur, "A new genetic algorithm for global optimization of multimodal continuous functions," Journal of Computational Science, vol. 5, no. 2, pp. 298-311, 2014.

Ehtasham-ul Haq, (1) Ishfaq Ahmad [ID], (1,2,3) Abid Hussain [ID], (4) and Ibrahim M. Almanjahie [ID] (2,3)

(1) Department of Mathematics and Statistics, International Islamic University, Islamabad, Pakistan

(2) Department of Mathematics, King Khalid University, 61413 Abha, Saudi Arabia

(3) Statistical Research and Studies Support Unit, King Khalid University, 61413 Abha, Saudi Arabia

(4) Department of Statistics, Quaid-i-Azam University, Islamabad, Pakistan

Correspondence should be addressed to Abid Hussain; abid0100@gmail.com

Received 21 July 2019; Accepted 26 October 2019; Published 5 December 2019

Caption: Figure 1: Layout of genetic algorithm.

Caption: Figure 2: Stochastic remainder selection scheme.

Caption: Figure 4: Comparative view of selection schemes.

Caption: Figure 5: The working strategy of GA selection operators with proposed SWS for case 1.

Caption: Figure 6: The working strategy of GA selection operators with proposed SWS for case 2.

Caption: Figure 7: The working strategy of GA selection operators with proposed SWS for case 3.
Table 1: Classes C_j and overall expectations ε_j for SWS.

j     C_j       ε_j
1     1-28      9.8196
2     29-40     10.1803
3     41-51     10.0198
4     52-60     9.9802
5     61-69     10.3723
6     70-77     10.4255
7     78-84     10.5833
8     85-90     10.1519
9     91-95     8.9917
10    96-100    9.4751

Table 2: Details of the benchmark functions used for comparison (the fitness function expressions were not reproducible in this extraction).

Benchmark                 Search limits      Optimum value   Properties
Axis parallel ellipsoid   [-5.12, 5.12]      0               Continuous, convex, unimodal
Beale                     [-4.5, 4.5]        0               Multimodal, nonseparable
Bohachevsky               [-100, 100]        0               Multimodal, nonseparable
Colville                  [-10, 10]          0               Unimodal, nonseparable
Drop-wave                 [-5.12, 5.12]      -1              Multimodal, nonseparable
Egg-holder                [-5.12, 5.12]      -959.6407       Nonconvex, multimodal
Ellipsoidal               [-n, n]            0               Unimodal
Rosenbrock                [-2.048, 2.048]    0               Unimodal, nonseparable
Schaffer                  [-100, 100]        0               Unimodal, nonseparable
Schwefel                  [-500, 500]        0               Multimodal, nonseparable

Table 3: Specific parameters for the GAs' working strategy.

Parameter            Value
Population size      100
Fitness scaling      Proportional/rank
Elite count          0.05
Crossover fraction   0.8
Crossover operator   Two point
Migration fraction   0.2
Generations          200
Function tolerance   1.0E-06
Mutation function    Gaussian

Table 4: Statistical results of optimum values for different selection schemes using unimodal benchmark functions. A stray hyphen inside a numeral marks a digit lost in extraction; SWS values confirmed by Table 6 are restored, and t-tests are not reported for SWS.

Benchmark (dim)        Statistic   RWS            TS             SRS            LRS            SWS
Axis parallel hyper    Mean        5.0835E-05     3.2-20E-07     1.5786E-05     3.3563E-05     2.9418E-07
ellipsoid (10)         S.D.        7.9800E-05     2.5215E-07     3.5962E-05     5.8134E-05     2.2951E-07
                       t-test      0.00099        0.64287        0.02170        0.00619        --
Axis parallel hyper    Mean        3.1488E+02     1.9369E+01     3.068-E+02     3.2088E+02     1.7630E+01
ellipsoid (50)         S.D.        1.847-E+02     1.0019E+01     1.9457E+02     1.9967E+02     9.1199E+00
                       t-test      0.00000        0.48486        0.00000        0.00000        --
Axis parallel hyper    Mean        3.0516E+03     7.7562E+02     2.8707E+03     3.16-7E+03     7.0599E+02
ellipsoid (100)        S.D.        8.2320E+02     2.0261E+02     7.9006E+02     1.0092E+03     1.8442E+02
                       t-test      0.00000        0.16921        0.00000        0.00000        --
Colville (10)          Mean        14.1036        1.4075         5.2867         10.0411        1.3926
                       S.D.        64.1057        1.9450         24.4547        44.6261        1.9245
                       t-test      0.2822         0.976-         0.-882         0.0001         --
Colville (50)          Mean        2465.569-      613.2918       2293.8499      2380.0555      606.8044
                       S.D.        2717.5439      287.6108       1873.1595      2295.6976      284.5685
                       t-test      0.0004         0.9-0-         0.0000         0.0000         --
Colville (100)         Mean        19237.7118     6003.8649      20567.1507     19902.7771     5940.3560
                       S.D.        6243.5664      1519.1380      8549.9094      7-97.08-8      1503.0685
                       t-test      0.0000         0.871-         0.0000         0.0000         --
Ellipsoidal (10)       Mean        5.8835E-06     3.69-E-07      2.1295E-05     5.1129E-05     3.3639E-07
                       S.D.        8.9408E-06     4.7645E-07     8.7-6E-05      5.4186E-05     4.-96E-07
                       t-test      0.00125        0.7805-        0.19-89        0.00477        --
Ellipsoidal (50)       Mean        2408.7101      716.6584       2228.5647      2423.966-      652.7483
                       S.D.        941.5905       -92.6978       784.5990       956.8467       -57.6778
                       t-test      0.0000         0.5125         0.0000         0.0000         --
Ellipsoidal (100)      Mean        87681.5841     -4427.98-8     8-676.46-6     87696.8404     31357.7650
                       S.D.        18184.-18-     9613.1629      14-42.664-     18199.5746     8755.880-
                       t-test      0.0000         0.2010         0.0000         0.0000         --
Rosenbrock (10)        Mean        6.987-         7.015-         5.9280         8.2780         6.4416
                       S.D.        3.687          1.8204         3.0867         5.2072         1.6715
                       t-test      0.0000         0.0000         0.0000         0.0000         --
Rosenbrock (50)        Mean        444.0538       263.-415       -48.7025       443.6579       241.8055
                       S.D.        159.6747       47.2798        110.5-76       182.-859       43.41-2
                       t-test      0.0000         0.0712         0.0000         0.0000         --
Rosenbrock (100)       Mean        -491.-158      1296.4477      2971.5222      -461.4764      1190.4242
                       S.D.        1123.5970      2-0.0574       653.2587       1118.485-      211.24-
                       t-test      0.0000         0.0681         0.0000         0.0000         --
Schaffer (10)          Mean        4.5650         4.5645         4.565-         4.5694         4.1455
                       S.D.        0.0089         0.0084         0.0076         0.0103         0.0076
                       t-test      0.0000         0.0000         0.0000         0.0000         --
Schaffer (50)          Mean        25.5110        25.2177        25.1508        25.579-        22.2471
                       S.D.        0.1419         0.0708         1.92-2         0.1418         0.064-
                       t-test      0.0000         0.0000         0.0000         0.0000         --
Schaffer (100)         Mean        52.6225        51.7076        52.5572        52.9025        45.6164
                       S.D.        0.2535         0.2024         0.-154         0.-09          0.18-8
                       t-test      0.0000         0.0000         0.0000         0.0000         --

Table 5: Statistical results of optimum values for different selection schemes using multimodal benchmark functions (t-tests are not reported for SWS).

Benchmark (dim)     Statistic   RWS            TS             SRS            LRS            SWS
Beale (10)          Mean        2.5691E+01     2.3543E+01     2.5206E+01     3.0214E+01     2.5657E+01
                    S.D.        5.5649E+00     4.7657E+00     5.4125E+00     5.4887E+00     5.1936E+00
                    t-test      0.98075        0.10591        0.74302        0.01616        --
Beale (50)          Mean        3.2013E+02     2.1626E+02     2.8608E+02     3.2635E+02     2.3568E+02
                    S.D.        6.3963E+01     2.3247E+01     3.9857E+01     5.1910E+01     2.5334E+01
                    t-test      0.00000        0.00305        0.00000        0.00000        --
Beale (100)         Mean        9.9730E+02     6.0705E+02     9.7269E+02     1.0248E+03     6.6155E+02
                    S.D.        1.9613E+02     3.9781E+01     1.5325E+02     1.7469E+02     4.3353E+01
                    t-test      0.00000        0.00000        0.00000        0.00000        --
Bohachevsky (10)    Mean        5.5629E-01     5.5020E-07     2.2209E-01     3.8920E-01     5.0851E-07
                    S.D.        1.1163E+00     3.9736E-07     4.9787E-01     8.0708E-01     3.6725E-07
                    t-test      0.00838        0.67454        0.01762        0.00000        --
Bohachevsky (50)    Mean        88.2869        19.2995        92.1134        93.8072        17.8370
                    S.D.        29.3399        3.6071         26.7442        31.6492        3.3338
                    t-test      0.0000         0.1083         0.0000         0.0000         --
Bohachevsky (100)   Mean        251.3332       105.8799       267.6916       274.2843       97.8563
                    S.D.        41.7324        14.7719        49.6698        45.7011        13.6525
                    t-test      0.0000         0.0330         0.0000         0.0000         --
Drop-wave (10)      Mean        -8.4413        -8.3885        -8.4403        -8.4322        -4.4669
                    S.D.        0.1928         0.2039         0.1480         0.3682         0.5363
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Drop-wave (50)      Mean        -36.9639       -37.6504       -37.5574       -37.2521       -17.9598
                    S.D.        1.7057         1.4806         1.8322         1.9668         1.9511
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Drop-wave (100)     Mean        -65.0661       -67.0406       -66.3824       -65.7157       -35.1363
                    S.D.        6.5688         2.7298         2.7023         4.8334         4.9423
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Egg-holder (10)     Mean        -608.5168      -608.5186      -608.5177      -608.5087      -413.0947
                    S.D.        0.0021         0.0008         0.0017         0.0020         21.3483
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Egg-holder (50)     Mean        -3318.3868     -3320.1156     -3318.4740     -3317.6446     -1875.4208
                    S.D.        1.5810         0.5672         1.2800         1.4306         200.0058
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Egg-holder (100)    Mean        -6672.6025     -6694.8362     -6678.1726     -6674.6017     -3541.7138
                    S.D.        9.4819         4.4452         8.4881         8.9851         251.8305
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Schwefel (10)       Mean        -4020.3795     -4062.8059     -4083.1594     -4065.7950     -2898.0973
                    S.D.        137.7577       114.5713       125.9619       117.8343       249.1698
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Schwefel (50)       Mean        -15391.3838    -14734.6805    -15825.5138    -15622.4744    -8337.1457
                    S.D.        849.1259       822.2196       766.8707       793.9728       785.8473
                    t-test      0.0000         0.0000         0.0000         0.0000         --
Schwefel (100)      Mean        -24086.2209    -23310.1510    -25169.1100    -24641.6910    -11872.9339
                    S.D.        1712.0336      1468.7940      1456.0778      1570.0301      1476.5720
                    t-test      0.0000         0.0000         0.0000         0.0000         --

Table 6: Cumulative results of the best selection techniques.

Function                        10 dimensions        50 dimensions        100 dimensions
Axis parallel hyper ellipsoid   2.9418E-07 (SWS)     1.7630E+01 (SWS)     7.0599E+02 (SWS)
Beale                           23.5433 (TS)         216.2626 (TS)        607.0514 (TS)
Bohachevsky                     5.0851E-07 (SWS)     17.8370 (SWS)        97.8563 (SWS)
Colville                        1.3926 (SWS)         606.8044 (SWS)       5940.3560 (SWS)
Drop-wave                       -4.4669 (SWS)        -17.9598 (SWS)       -35.1363 (SWS)
Egg-holder                      -413.0947 (SWS)      -1875.4208 (SWS)     -3541.7138 (SWS)
Ellipsoidal                     3.3639E-07 (SWS)     652.7483 (SWS)       31357.7650 (SWS)
Rosenbrock                      6.4416 (SWS)         241.8055 (SWS)       1190.4242 (SWS)
Schaffer                        4.1455 (SWS)         22.2471 (SWS)        45.6164 (SWS)
Schwefel                        -2898.0973 (SWS)     -8337.1457 (SWS)     -11872.9339 (SWS)

Figure 3: Comparative charts of selection schemes: (a) RWS, (b) LRS, (c) TS, and (d) SWS. The pie-chart shares per rank class are tabulated below.

Class   (a) RWS   (b) LRS   (c) TS   (d) SWS
1       0%        9%        1%       2%
2       1%        9%        3%       3%
3       2%        3%        5%       6%
4       4%        10%       7%       9%
5       5%        10%       9%       9%
6       6%        10%       11%      11%
7       11%       10%       13%      12%
8       17%       11%       15%      13%
9       25%       11%       17%      17%
10      29%       11%       19%      18%