# Multi-Population Group Search Optimization for Function Optimization

INTRODUCTION

Several bio-inspired, population-based methods such as the Genetic Algorithm (GA), Differential Evolution (DE), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been applied to a great variety of optimization problems. Recently the Group Search Optimizer (GSO), a novel optimization algorithm based on the producer-scrounger model, has caught the attention of researchers, proving its robustness and effectiveness in solving high-dimensional problems (S. He, Q. H. Wu and J. R. Saunders, 2006; S. He et al., 2009). The GSO algorithm is used in several practical applications such as control systems (S. He and X. Li, 2008), healthcare (S. He et al., 2009; Dattatraya Magatrao et al., 2013), classification, etc. GSO is a nature-inspired algorithm based on the searching and foraging mechanisms of animals. This natural phenomenon is adopted to define producers, scroungers and rangers. Producers are the members that scan for optimal solutions in the nearby region. Scroungers join the producers in the search for solutions. Rangers perform random walks to escape local minima.

A traditional GSO population involves one producer, about 80% scroungers, and the rest act as rangers. In (A. B. M. Junaed et al., 2013), the authors proposed a multi-producer version of GSO which was shown to outperform the original GSO. Several modifications and variations of the basic GSO algorithm have been proposed by different researchers, both to improve its performance and to make it suitable for particular tasks. In the group search particle swarm optimization (GSPSO) of (X. Yan and H. Shi, 2011), the Particle Swarm Optimization (PSO) model is used to find a good local search space, in which the global optimum is contained with high probability, and the GSO model is used to search within that local space; a mutual rescue method is also proposed for switching between the two models. In (Ling Wang et al., 2012), a novel multi-objective group search optimizer (NMGSO) replaces the scanning strategy of GSO with a limited pattern search procedure to simplify the computation and improve performance. In this paper we propose a different flavor of GSO in which multiple sub-populations are used, giving better function optimization than traditional GSO. The GSO algorithm proceeds independently within each sub-population, and occasionally individuals migrate between groups.

We show that the proposed MPGSO algorithm performs well when compared to GA, PSO and traditional GSO. The paper is organized as follows: Section 1 discusses the basic GSO, and Section 2 explains the proposed MPGSO algorithm. Experimental results of the proposed method in comparison with GA, PSO and basic GSO are given in Section 3. Finally, Section 4 draws the conclusion.

1. Group Search Optimizer:

GSO is a population-based optimization algorithm built on the producer-scrounger (PS) behavior of animals (S. He, Q. H. Wu and J. R. Saunders, 2006; S. He et al., 2009). In GSO, the population is called a group and each individual in the population is a member. In an $n$-dimensional search space, the $i$th member at the $k$th searching iteration has a current position $X_i^k \in \mathbb{R}^n$ and a head angle $\varphi_i^k = (\varphi_{i1}^k, \ldots, \varphi_{i(n-1)}^k) \in \mathbb{R}^{n-1}$. The search direction of the $i$th member, a unit vector $D_i^k(\varphi_i^k) = (d_{i1}^k, \ldots, d_{in}^k) \in \mathbb{R}^n$, can be calculated from $\varphi_i^k$ via a polar-to-Cartesian coordinate transformation (D. Mustard, 1964):

$$d_{i1}^k = \prod_{q=1}^{n-1} \cos(\varphi_{iq}^k), \qquad d_{ij}^k = \sin(\varphi_{i(j-1)}^k) \prod_{q=j}^{n-1} \cos(\varphi_{iq}^k) \quad (j = 2, \ldots, n-1), \qquad d_{in}^k = \sin(\varphi_{i(n-1)}^k) \tag{1}$$
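As an illustration, the polar-to-Cartesian transformation of eq. (1) can be sketched in Python with NumPy. This follows the standard GSO formulation (the first component is a product of cosines, and each subsequent component introduces one sine factor); the function name `search_direction` is introduced here for illustration.

```python
import numpy as np

def search_direction(phi):
    """Map an (n-1)-dimensional head angle phi to an n-dimensional
    unit search direction via the polar-to-Cartesian transform of eq. (1)."""
    n = phi.size + 1
    d = np.empty(n)
    d[0] = np.prod(np.cos(phi))                       # d_1 = prod_q cos(phi_q)
    for j in range(1, n - 1):                         # d_j = sin(phi_{j-1}) * prod_{q>=j} cos(phi_q)
        d[j] = np.sin(phi[j - 1]) * np.prod(np.cos(phi[j:]))
    d[n - 1] = np.sin(phi[n - 2])                     # d_n = sin(phi_{n-1})
    return d

# Demo with the paper's initial head angle (pi/4, ..., pi/4) for n = 4.
print(search_direction(np.full(3, np.pi / 4)))
```

A quick check that the result is a unit vector confirms the parametrization: the squared components telescope to 1 via repeated use of $\cos^2 + \sin^2 = 1$.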

GSO has three types of members in a group: producers, scroungers and rangers. At each iteration the group member which has the best fitness value is selected as the producer. There is only one producer and remaining members are either scroungers or rangers. All scroungers will move towards the resource found by the producer. Rangers are less efficient and perform random walks.

The producer scans the environment for resources. At the $k$th iteration the producer $X_p$ behaves as follows:

1) The producer scans at zero degree and then laterally, randomly sampling three points in the scanning field (W. J. O'Brien et al., 1986): one point at zero degree (eq. 2), one point in the right-hand-side hypercube (eq. 3) and one point in the left-hand-side hypercube (eq. 4):

$$X_z = X_p^k + r_1 l_{\max} D_p^k(\varphi^k) \tag{2}$$

$$X_r = X_p^k + r_1 l_{\max} D_p^k(\varphi^k + r_2 \theta_{\max}/2) \tag{3}$$

$$X_l = X_p^k + r_1 l_{\max} D_p^k(\varphi^k - r_2 \theta_{\max}/2) \tag{4}$$

where $r_1 \in \mathbb{R}$ is a normally distributed random number with mean 0 and standard deviation 1, and $r_2 \in \mathbb{R}^{n-1}$ is a uniformly distributed random sequence in the range $(0,1)$. Here $\theta_{\max}$ is the maximum pursuit angle and $l_{\max}$ is the maximum pursuit distance:

$$l_{\max} = \lVert U - L \rVert = \sqrt{\sum_{j=1}^{n} (U_j - L_j)^2} \tag{5}$$

where $U_j$ and $L_j$ are the upper and lower bounds of the search range.
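The producing step of eqs. (2)-(5) can be sketched as follows. This is a hedged illustration, not the authors' code: `objective` (assumed to be minimized) and `direction` (the unit-vector map of eq. 1) are placeholders supplied by the caller, and the helper names are ours.

```python
import numpy as np

def max_pursuit_distance(L, U):
    """Maximum pursuit distance l_max from the search bounds (eq. 5)."""
    return float(np.linalg.norm(np.asarray(U, float) - np.asarray(L, float)))

def produce(objective, x_p, phi, direction, l_max, theta_max, rng):
    """Sample the three scan points of eqs. (2)-(4) around the producer
    x_p and return the best of them (minimization assumed)."""
    r1 = rng.standard_normal()           # r1 ~ N(0, 1)
    r2 = rng.random(phi.size)            # r2 ~ U(0, 1)^(n-1)
    candidates = [
        x_p + r1 * l_max * direction(phi),                       # zero degree (eq. 2)
        x_p + r1 * l_max * direction(phi + r2 * theta_max / 2),  # right-hand side (eq. 3)
        x_p + r1 * l_max * direction(phi - r2 * theta_max / 2),  # left-hand side (eq. 4)
    ]
    return min(candidates, key=objective)
```

In a full implementation the producer would then compare the returned point against its current position, as described in step 2 below.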

2) The producer then finds the point with the best resource (fitness value) among the three. If that point is better than its current position, the producer moves there; otherwise it stays at its current position and turns its head to a new angle according to eq. (6):

$$\varphi^{k+1} = \varphi^k + r_2 \alpha_{\max} \tag{6}$$

where $\alpha_{\max}$ is the maximum turning angle.

3) If the producer cannot find a better area after $a$ iterations, it turns its head back to zero degree:

$$\varphi^{k+a} = \varphi^k \tag{7}$$

where $a$ is a constant given by $a = \operatorname{round}(\sqrt{n+1})$.

All scroungers join the resource found by the producer, performing the scrounging strategy:

$$X_i^{k+1} = X_i^k + r_3 \circ (X_p^k - X_i^k) \tag{8}$$

where $r_3 \in \mathbb{R}^n$ is a uniform random sequence in the range $(0,1)$ and $\circ$ denotes the entry-wise (Hadamard) product of the two vectors. A scrounger becomes the producer in the next iteration if it finds a better location than the current producer and the other scroungers. Group members that are less efficient foragers than the dominant member are dispersed from the group. If the $i$th group member is dispersed, it performs ranging: at the $k$th iteration it generates a random head angle $\varphi_i$ using eq. (6), and then chooses a random distance

$$l_i = a \cdot r_1 l_{\max} \tag{9}$$

and moves to the new point:

$$X_i^{k+1} = X_i^k + l_i D_i^k(\varphi^{k+1}) \tag{10}$$
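The scrounging move of eq. (8) and the ranging move of eqs. (6), (9) and (10) can be sketched together. These are minimal illustrations under the same assumptions as before: `direction` is the unit-vector map of eq. (1), supplied by the caller, and the function names are ours.

```python
import numpy as np

def scrounge(x_i, x_p, rng):
    """Move a scrounger toward the producer's position x_p (eq. 8)."""
    r3 = rng.random(x_i.size)                  # uniform in (0, 1)
    return x_i + r3 * (x_p - x_i)              # entry-wise (Hadamard) product

def range_walk(x_i, phi, direction, a, l_max, alpha_max, rng):
    """Ranger move: turn the head (eq. 6), pick a random distance (eq. 9),
    and step along the new direction (eq. 10)."""
    phi_new = phi + rng.random(phi.size) * alpha_max   # eq. (6)
    l_i = a * rng.standard_normal() * l_max            # eq. (9)
    return x_i + l_i * direction(phi_new), phi_new     # eq. (10)
```

Because each entry of $r_3$ lies in $(0,1)$, a scrounger's new position always lies component-wise between its old position and the producer's.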

2. Proposed MPGSO Algorithm:

This section describes the proposed multi-population GSO (MPGSO), based on the concept of multiple populations. In MPGSO, we first calculate the fitness value of each member of the population and sort the population accordingly. The population members are then divided into $k$ independent groups, each associated with $d$ dimensions of the search space. Each group performs the following operations: choose a producer, perform producing, perform scrounging and perform ranging; in other words, an instance of GSO runs on every independent group. In this approach, each group's scroungers follow the best member of their own group when scrounging. All groups are then combined again and the whole process repeats. Pseudo code for the MPGSO algorithm is presented in Algorithm 1.
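The sort-split-optimize-recombine cycle described above can be sketched as follows. This is a minimal illustration, not the authors' MATLAB implementation: `run_gso_group` stands in for the per-group GSO iterations of Section 1 (here a no-op placeholder), and splitting the sorted population into consecutive chunks is an assumption on our part.

```python
import numpy as np

def run_gso_group(group, objective):
    # Placeholder for one round of per-group GSO (producer selection,
    # producing, scrounging, ranging).  A real implementation would
    # update each member's position here.
    return group

def mpgso_cycle(population, objective, m=6):
    """One MPGSO cycle: sort by fitness, split into m groups, run GSO
    independently within each group, then recombine (minimization assumed)."""
    order = np.argsort([objective(x) for x in population])   # best first
    sorted_pop = [population[i] for i in order]
    chunk = int(np.ceil(len(sorted_pop) / m))
    groups = [sorted_pop[g * chunk:(g + 1) * chunk] for g in range(m)]
    groups = [run_gso_group(g, objective) for g in groups]
    return [x for g in groups for x in g]                    # regrouped population
```

The outer loop of the algorithm would call `mpgso_cycle` repeatedly until the termination condition (a budget of function evaluations) is met.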

3. Experimental Studies:

A. Benchmark Problems and Experimental Setup:

To demonstrate the performance of the proposed MPGSO algorithm, 20 benchmark problems are considered, with function dimensions varying from 2 to 30 to give a diverse test bed. Three kinds of functions are selected: unimodal functions ($f_1$ to $f_6$), multimodal functions ($f_7$ to $f_{11}$) and low-dimensional multimodal functions ($f_{12}$ to $f_{20}$). These functions are listed in Table I.
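For concreteness, two of the Table I benchmarks can be written out in Python. These are the standard textbook definitions, not code from the paper:

```python
import numpy as np

def sphere(x):
    """f1: unimodal Sphere function, minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f8: highly multimodal Rastrigin function, minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```

Sphere rewards steady gradient-like descent, while Rastrigin's many regularly spaced local minima test an algorithm's ability to escape them, which is exactly the role of the rangers in GSO.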

The algorithms are implemented in MATLAB environment. The experiments have been done on a PC (Intel(R) Core(TM) i3-2310M CPU @2.10GHz, 3GB RAM) with Windows 7 OS.

The initial head angle $\varphi^0$ of each individual is set to $(\pi/4, \ldots, \pi/4)$. The constant $a$ is given by $\operatorname{round}(\sqrt{n+1})$, where $n$ is the dimension of the search space. The maximum pursuit angle $\theta_{\max}$ is $\pi/a^2$, and the maximum turning angle $\alpha_{\max}$ is set to $\theta_{\max}/2$. The maximum pursuit distance $l_{\max}$ is calculated from eq. (5). The population size is 48; six population groups are created over the initial population, and each group runs for 100 steps before regrouping. For MPGSO, 80% of the members are selected as scroungers. The parameter settings for all algorithms are summarized in Table II.
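The derived control parameters above can be collected in a small helper. This is a sketch under the settings just described; the function name is ours.

```python
import numpy as np

def gso_parameters(L, U):
    """Derive the GSO control parameters from the search bounds L, U,
    following the settings described in this section."""
    n = len(L)
    a = int(round(np.sqrt(n + 1)))             # turning-back constant
    theta_max = np.pi / a ** 2                 # maximum pursuit angle
    alpha_max = theta_max / 2                  # maximum turning angle
    l_max = float(np.linalg.norm(np.asarray(U, float) - np.asarray(L, float)))  # eq. (5)
    return a, theta_max, alpha_max, l_max

# For a 30-dimensional problem on [-100, 100]^30, a = round(sqrt(31)) = 6.
print(gso_parameters([-100.0] * 30, [100.0] * 30))
```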

B. Experimental Results and Analysis:

In this section we compare the proposed MPGSO with the GA, PSO and standard GSO algorithms. The maximum number of function evaluations allowed (the termination condition) is 3,000. Table III shows the mean results of MPGSO and the other three algorithms on all test functions; the best value for each function is shown in bold. The empirical analysis shows that the proposed MPGSO achieved better performance than standard GSO on twelve of the twenty functions ($f_3$, $f_5$ to $f_7$, $f_{10}$, $f_{11}$, $f_{14}$ and $f_{16}$ to $f_{20}$), while three functions ($f_{12}$, $f_{13}$, $f_{15}$) gave the same result. It also outperformed PSO on fifteen functions ($f_5$ to $f_8$ and $f_{10}$ to $f_{20}$) and GA on nineteen of the twenty functions ($f_1$ to $f_7$ and $f_9$ to $f_{20}$).

4. Conclusion:

In this study we have modified the Group Search Optimizer (GSO) using multiple populations to solve function optimization problems. MPGSO is evaluated on twenty well-known benchmark functions with 2- to 30-dimensional search spaces and compared with standard GSO, PSO and GA. Experimental results show that MPGSO achieves better results than standard GSO and the other tested algorithms in most cases.

ARTICLE INFO

Article history:

Received 12 October 2014

Received in revised form 26 December 2014

Accepted 1 January 2015

Available online 25 February 2015

REFERENCES

Junaed, A.B.M., M.A.H. Akhand, Al-Mahmud, K. Murase, 2013. Multi-Producer Group Search Optimizer for Function Optimization. International Conference on Informatics, Electronics & Vision (ICIEV), IEEE, 1-4.

Magatrao, D., S. Ghosh, V.K. Jayaraman and P. Siarry, 2013. Simultaneous Gene Selection and Cancer Classification using a Hybrid Group Search Optimizer. GECCO '13.

Mustard, D., 1964. Numerical integration over the n-dimensional spherical shell. Math. Comput., 18(88): 578-589.

Ling Wang, Xiang Zhong, Min Liu, 2012. A novel group search optimizer for multi-objective optimization. Expert Systems with Applications, 39: 2939-2946.

He, S., Q.H. Wu and J.R. Saunders, 2006. A novel group search optimizer inspired by animal behavioural ecology. IEEE International Conference on Evolutionary Computation, BC, Canada, 1272-1278.

He, S., Q.H. Wu and J.R. Saunders, 2009. Group Search Optimizer: An Optimization Algorithm Inspired by Animal Searching Behaviour. IEEE Transactions on Evolutionary Computation, 13(5): 973-990.

He, S., Q.H. Wu and J.R. Saunders, 2009. Breast Cancer Diagnosis Using an Artificial Neural Network Trained by Group Search Optimizer. Transactions of the Institute of Measurement and Control, 31(6): 517-531.

He, S. and X. Li, 2008. Application of a group search optimization based artificial neural network to machine condition monitoring. Emerging Technologies and Factory Automation, IEEE International Conference, 1260-1266.

O'Brien, W.J., B.I. Evans and G.L. Howick, 1986. A new view of the predation cycle of a planktivorous fish, white crappie (Pomoxis annularis). Can. J. Fish. Aquat. Sci., 43: 1894-1899.

Yan, X. and H. Shi, 2011. A Hybrid Algorithm Based on Particle Swarm Optimization and Group Search Optimization. Seventh International Conference on Natural Computation (ICNC), IEEE, 1: 13-17.

(1) Joshil Raj, (2) S. SivaSathya and (3) Sebabrata Ghosh

(1) Ph.D Scholar, Dept of Computer Science and Engineering, Pondicherry University, Pondicherry, India.

(2) Associate Professor, Dept of Computer Science and Engineering, Pondicherry University, Pondicherry, India.

(3) M. Tech Student, Dept of Computer Science and Engineering, Pondicherry University, Pondicherry, India.

Corresponding Author: Joshil Raj, Ph.D Scholar, Dept of Computer Science and Engineering, Pondicherry University, Pondicherry, India.

Algorithm 1: Pseudo code for the MPGSO algorithm.

```
Begin
  Randomly initialize positions X_i and head angles phi_i of all members.
  Calculate the fitness values of the initial members: f(X_i).
  While termination conditions are not met do
    Sort the population according to the fitness value.
    Create m population groups according to the fitness value.   // m = 6
    For each group do
      While n iterations do
        For each member i in the group do
          Select producer: find the producer of the group.
          Perform producing:
            (i)   Scan at three random points in the scanning field
                  using eqs. (2) to (4).
            (ii)  Move to the best point if it has a better resource;
                  otherwise stay at the current position and turn the
                  head to a new angle using eq. (6).
            (iii) If a better area cannot be found, turn the head back
                  to zero degree using eq. (7).
          Perform scrounging: select a number of group members
            (normally 80% of the members) as scroungers, then perform
            scrounging using eq. (8).
          Perform ranging: the rest of the members (rangers) perform ranging.
          Calculate the fitness value of each member of the group.
        End For
      End While
    End For
    Perform regrouping: combine all the groups with the new fitness
      value of each member and regroup.
  End While
End
```

Table I: Twenty benchmark functions, where $n$ is the dimension of the function, $f_{\min}$ is the minimum value of the function and $S \subseteq \mathbb{R}^n$ is the search range.

| Function | $f$ | $n$ | $S$ | $f_{\min}$ |
|---|---|---|---|---|
| Sphere | $f_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | $[-100,100]^n$ | 0 |
| Schwefel 2.22 | $f_2(x) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | 30 | $[-10,10]^n$ | 0 |
| Schwefel 1.2 | $f_3(x) = \sum_{i=1}^{n} \bigl(\sum_{j=1}^{i} x_j\bigr)^2$ | 30 | $[-100,100]^n$ | 0 |
| Schwefel 2.21 | $f_4(x) = \max_i \{\lvert x_i \rvert,\ 1 \le i \le n\}$ | 30 | $[-100,100]^n$ | 0 |
| Rosenbrock | $f_5(x) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | 30 | $[-30,30]^n$ | 0 |
| Step | $f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 30 | $[-100,100]^n$ | 0 |
| Schwefel 2.26 | $f_7(x) = -\sum_{i=1}^{n} x_i \sin(\sqrt{\lvert x_i \rvert})$ | 30 | $[-500,500]^n$ | $-12569.5$ |
| Rastrigin | $f_8(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | 30 | $[-5.12,5.12]^n$ | 0 |
| Ackley | $f_9(x) = -20\exp\bigl(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\bigr) - \exp\bigl(\tfrac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\bigr) + 20 + e$ | 30 | $[-32,32]^n$ | 0 |
| Griewank | $f_{10}(x) = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos(x_i/\sqrt{i}) + 1$ | 30 | $[-600,600]^n$ | 0 |
| Penalized | $f_{11}(x) = \tfrac{\pi}{n}\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})] + (y_n-1)^2\} + \sum_{i=1}^{n} u(x_i,10,100,4)$, $y_i = 1 + (x_i+1)/4$ | 30 | $[-50,50]^n$ | 0 |
| Shekel's Foxholes | $f_{12}(x) = \bigl[\tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\bigr]^{-1}$ | 2 | $[-65.536,65.536]^n$ | 1 |
| Hump | $f_{13}(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | $[-5,5]^n$ | $-1.0316285$ |
| Branin | $f_{14}(x) = \bigl(x_2 - \tfrac{5.1}{4\pi^2}x_1^2 + \tfrac{5}{\pi}x_1 - 6\bigr)^2 + 10\bigl(1 - \tfrac{1}{8\pi}\bigr)\cos x_1 + 10$ | 2 | $[-5,10] \times [0,15]$ | 0.398 |
| Goldstein-Price | $f_{15}(x) = [1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]$ | 2 | $[-2,2]^n$ | 3 |
| Hartman 3 | $f_{16}(x) = -\sum_{i=1}^{4} c_i \exp\bigl[-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\bigr]$ | 3 | $[0,1]^n$ | $-3.86278$ |
| Hartman 6 | $f_{17}(x) = -\sum_{i=1}^{4} c_i \exp\bigl[-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\bigr]$ | 6 | $[0,1]^n$ | $-3.32237$ |
| Shekel 5 | $f_{18}(x) = -\sum_{i=1}^{5} [(x - a_i)(x - a_i)^T + c_i]^{-1}$ | 4 | $[0,10]^n$ | $-10.1532$ |
| Shekel 7 | $f_{19}(x) = -\sum_{i=1}^{7} [(x - a_i)(x - a_i)^T + c_i]^{-1}$ | 4 | $[0,10]^n$ | $-10.4029$ |
| Shekel 10 | $f_{20}(x) = -\sum_{i=1}^{10} [(x - a_i)(x - a_i)^T + c_i]^{-1}$ | 4 | $[0,10]^n$ | $-10.5364$ |

Table II: Parameter settings for all algorithms.

| Algorithm | Parameter | Value |
|---|---|---|
| GSO | Scrounger percentage | 80% |
|  | $\theta_{\max}$ | $\pi/a^2$ |
|  | $\alpha_{\max}$ | $\theta_{\max}/2$ |
|  | $\varphi^0$ | $\pi/4$ |
|  | Population size | 48 |
| GA | Crossover rate | 0.9 |
|  | Mutation rate | 0.1 |
|  | Population size | 50 |
| PSO | $\omega$ | starting at 0.9, ending at 0.4 |
|  | $c_1$ | 2.0 |
|  | $c_2$ | 2.0 |
|  | Population size | 50 |
| MPGSO | Scrounger percentage | 80% |
|  | $\theta_{\max}$ | $\pi/a^2$ |
|  | $\alpha_{\max}$ | $\theta_{\max}/2$ |
|  | $\varphi^0$ | $\pi/4$ |
|  | Number of groups | 6 |
|  | Steps | 100 |
|  | Population size | 48 |

Table III: Mean results of the four algorithms on the twenty benchmark functions.

| Function | GA | PSO | GSO | MPGSO |
|---|---|---|---|---|
| $f_1$ | 3.1711 | $3.6927 \times 10^{-37}$ | $1.9481 \times 10^{-8}$ | $1.609448 \times 10^{-3}$ |
| $f_2$ | 0.5771 | $2.9168 \times 10^{-24}$ | $3.7039 \times 10^{-5}$ | $1.701641 \times 10^{-3}$ |
| $f_3$ | 9749.9145 | $1.1979 \times 10^{-3}$ | 5.7829 | $3.787010 \times 10^{-2}$ |
| $f_4$ | 7.9610 | 0.4123 | 0.1078 | 5.358333 |
| $f_5$ | 338.5616 | 37.3582 | 49.8359 | 17.71832 |
| $f_6$ | 3.6970 | 0.1460 | $1.6000 \times 10^{-2}$ | $1.080258 \times 10^{-3}$ |
| $f_7$ | $-12566.0977$ | $-9659.6993$ | $-12569.4882$ | $-12569.59$ |
| $f_8$ | 0.6509 | 20.7863 | 1.0179 | 2.161355 |
| $f_9$ | 0.8678 | $1.3404 \times 10^{-3}$ | $2.6548 \times 10^{-5}$ | $2.067166 \times 10^{-2}$ |
| $f_{10}$ | 1.0038 | 0.2323 | $3.0792 \times 10^{-2}$ | $1.206002 \times 10^{-3}$ |
| $f_{11}$ | 0.1681 | $5.0519 \times 10^{-2}$ | $4.6948 \times 10^{-5}$ | $2.330103 \times 10^{-5}$ |
| $f_{12}$ | 0.9989 | 1.0239 | 0.9980 | 0.9980 |
| $f_{13}$ | $-1.0298$ | $-1.0160$ | $-1.031628$ | $-1.031628$ |
| $f_{14}$ | 0.4040 | 0.44040 | 0.3979 | 0.3978881 |
| $f_{15}$ | 7.5027 | 3.0050 | 3.0 | 3.00 |
| $f_{16}$ | $-3.8624$ | $-3.8582$ | $-3.8628$ | $-3.863023$ |
| $f_{17}$ | $-3.2627$ | $-3.1846$ | $-3.2697$ | $-3.321587$ |
| $f_{18}$ | $-5.1653$ | $-7.5439$ | $-6.09$ | $-10.15320$ |
| $f_{19}$ | $-5.4432$ | $-8.9439$ | $-7.4022$ | $-10.40292$ |
| $f_{20}$ | $-4.9108$ | $-8.9439$ | $-7.4022$ | $-10.53209$ |

Publication: Advances in Natural and Applied Sciences

Article Type: Report

Date: Jun 1, 2015