Improved Particle Swarm Optimization Algorithm Based on Last-Eliminated Principle and Enhanced Information Sharing.

1. Introduction

The development of industrial society has led to the successful application of optimal design methods to diverse engineering practices, such as path planning, structural design, control theory, and control engineering [1-10]. In 1995, the foraging behavior of bird flocks inspired Kennedy and Eberhart to propose the particle swarm optimization (PSO) algorithm. PSO requires few parameter adjustments and is easy to implement; hence, it is among the most commonly used swarm intelligence algorithms [11-20]. However, in practical applications, most problems are complicated design problems with multiple parameters, strong coupling, and nonlinearity. Therefore, improving the global optimization capability of an optimization algorithm is important for solving complex engineering optimization problems. To improve the capability of traditional PSO, many scholars have proposed improvement strategies, including the adjustment of parameters and combinations of various mechanisms.

Shi and Eberhart [21] proposed an inertial weight improvement strategy (SPSO) with a strong global search capability at the beginning of the iteration, a strong local search capability in the later iterations, and a fine search near the optimal solution. Although SPSO improves the convergence speed of the algorithm, the "premature" phenomenon remains. Zhang et al. [29] proposed an improved PSO algorithm with an adaptive inertial weight based on Bayesian techniques to balance the exploitation and exploration capabilities of the population. Ratnaweera et al. [23] proposed a linear adjustment method for the learning factors: in the early stage of the iteration, particle flight is mainly guided by the historical information of the particle itself, whereas in the later stage, it is mainly guided by the social information shared between the particle and the globally optimal particle. However, this method still has defects: the best solution found in the initial global search may merely resemble the local optimum, and convergence is limited to certain optimal regions rather than the whole space, causing the PSO algorithm to fall into local extrema. Chen et al. [24] proposed a chaotic dynamic weight PSO (CDW-PSO) algorithm in which chaotic maps and dynamic weights modify the search process. Although CDW-PSO shows improved search performance relative to other nature-inspired heuristic optimization algorithms, it also easily falls into the local optimum. Chen et al. [25] proposed a dynamic multiswarm differential learning PSO (DMSDL-PSO) algorithm, in which a differential mutation method is applied to each subgroup to conduct the global search and a quasi-Newton method is applied for the local search. The DMSDL-PSO algorithm has good exploration and exploitation capabilities. Jiang et al. [26] proposed a new binary hybrid PSO with wavelet mutation (HPSOWM), in which the motion mechanism and mutation process of the particles are converted into binary elements and the problem is transformed from a continuous space into a discrete domain. Although the convergence of the HPSOWM algorithm is stable and robust, its convergence rate is lower than those of other intelligent optimization algorithms. To solve dynamic multiobjective optimization problems with rapid environmental changes, a cooperative multiswarm PSO for dynamic multiobjective optimization (CMPSODMO) was proposed [27]; in comparison with other dynamic multiobjective optimization algorithms, CMPSODMO better addresses uncertain, rapid environmental changes. Ye et al. [28] proposed a multiswarm PSO algorithm with a dynamic learning strategy, in which the population is divided into ordinary particles and communication particles, and the dynamic communication information of the latter is used to maintain particle population diversity. Although this method improves the capability of the algorithm to handle complex multimodal functions, it increases the computational complexity of the algorithm. Cui et al. [30] proposed a globally optimal prediction-based adaptive mutation PSO (GPAM-PSO) to avoid the local-optimum problem of traditional PSO algorithms; however, GPAM-PSO is limited to the dimensionality reduction of nonzero-mean data. Zhang et al. [20] proposed a vector coevolving PSO algorithm that randomly divides all the dimensions of a particle into several parts and optimizes each part to enhance the global and local search capabilities. However, the algorithm still falls into local extrema.

PSO has attracted considerable research attention because of its easy implementation, few parameter adjustments, and adaptability, and its use has gradually spread to various application fields, such as parameter optimization, path planning, predictive control, and global optimization. Zhao and Liu [31] used PSO to optimize the parameters of a wavelet neural network and reduce the limitations of network security situational awareness assessment, thereby meeting the requirements of network security in a big data environment. The parameter-related coefficients of a nonlinear regression analysis model were optimized by combining PSO with a genetic algorithm [32] to reduce the vibrations caused by mine blasting that damage the structures around the blasting area. A diffusion PSO algorithm was used to estimate the parameters of an infinite impulse response system and improve the energy utilization of a wireless sensor network [33]. Wang et al. [34] used a multiobjective PSO algorithm to solve the path-planning problem of mobile robots in a static rough-terrain environment. Wang and Cai [35] combined PSO with chaos optimization theory to establish a mathematical model of the path-planning problem in the radioactive environment of nuclear facilities and thereby ensure personnel safety. Lopes et al. [36] proposed a novel particle swarm-based heuristic technique to allocate electrical loads in an industrial setting throughout the day. Multiobjective PSO was used to solve the problem of service allocation in cloud computing [37]. Petrovic et al. [38] proposed a chaotic PSO to obtain an optimal process dispatching plan. Zhang et al. [39] proposed an adaptive PSO to solve reservoir operation optimization problems with complex and dynamic nonlinear constraints.

An improved PSO algorithm was used for the time-series prediction of a grey model [40]; the algorithm reduces the average relative error between the recovered and measured values of the model to avoid the problems caused by the optimization of background values. Gulcu and Kodaz [41] used PSO to establish an electricity demand forecasting model.

In view of the aforementioned methods, an improved PSO (IEPSO) algorithm is proposed in the present work. In the IEPSO, the last-eliminated principle is used to update the population and maintain particle population diversity, and the global search capability is improved by adding a local-global information sharing term. A group of test functions is used to compare the IEPSO with a classical optimization algorithm and its improved versions and thereby test and verify the global optimization performance of the IEPSO algorithm.

2. IEPSO

2.1. Standard PSO. The initial population of the PSO algorithm is randomized. The algorithm then iteratively updates the velocity and position of each particle, as shown in the following formulas:

$$v_{id}^{t+1} = \omega v_{id}^{t} + C_1 R_1 \left(p_{id}^{t} - x_{id}^{t}\right) + C_2 R_2 \left(p_{gd}^{t} - x_{id}^{t}\right),$$
$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \quad (1)$$

where $\omega$ is the inertial weight, $C_1$ and $C_2$ are the acceleration coefficients, $R_1$ and $R_2$ are random variables uniformly distributed in (0, 1), $p_{gd}^{t}$ is the best position found so far by the whole swarm (the global best), $p_{id}^{t}$ is the best position found in the history of particle $i$ (the personal best), $x_{id}^{t}$ is the position of the particle in the current iteration, and $v_{id}^{t+1}$ is the velocity of the particle at the next iteration.
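For concreteness, the following minimal Python sketch applies Formula (1) to an entire swarm at once. The function name, array layout, and default coefficient values are illustrative choices rather than specifications from this paper.

    import numpy as np

    def pso_step(x, v, p_best, g_best, omega=0.9, c1=2.0, c2=2.0, rng=None):
        """One standard PSO update (Formula (1)), vectorized over the swarm.

        x, v   : (N, D) arrays of particle positions and velocities
        p_best : (N, D) array of personal best positions p_i
        g_best : (D,) array holding the global best position p_g
        """
        rng = np.random.default_rng() if rng is None else rng
        r1 = rng.random(x.shape)  # R1 ~ U(0, 1), drawn per particle and dimension
        r2 = rng.random(x.shape)  # R2 ~ U(0, 1)
        v_new = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        return x + v_new, v_new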

2.2. IEPSO. The IEPSO algorithm is based mainly on the last-eliminated principle and on an enhanced local-global information sharing capability, which together improve its global optimization performance. The specific implementation of the IEPSO algorithm is shown in Figure 1.

The positions and velocities of the particles in the population are randomly initialized, and the fitness value of each particle is calculated. Information on the current individual and global optimal particles, including their positions and fitness values, is saved. Then, the particle swarm operation is conducted. In the IEPSO algorithm, Formula (2) is used to update the velocity and balance the exploration and exploitation capabilities of the particles in the global optimization process, and Formula (3) defines the local-global information sharing term:

$$v_{id}^{t+1} = \omega v_{id}^{t} + C_1 R_1 \left(p_{id}^{t} - x_{id}^{t}\right) + C_2 R_2 \left(p_{gd}^{t} - x_{id}^{t}\right) + \varphi_3, \quad (2)$$

$$\varphi_3 = C_3 R_3 \left|p_{gd}^{t} - p_{id}^{t}\right|. \quad (3)$$

Formula (2) comprises four parts: the inheritance of the previous velocity, particle self-cognition, local information sharing, and local-global information sharing.

The IEPSO algorithm is not limited to one-way communication between the global and individual particles. The local-global information sharing term $\varphi_3$ adds an information exchange between the local optimal and global optimal particles obtained in the current iteration, and the population velocity is updated by Formula (2). In the early stage of the algorithm, the entire search space is covered at a relatively high speed to determine the approximate range of the optimal solution, which benefits the global search. In the later stage, the search space of most particles gradually shrinks and concentrates on the neighborhood of the optimal value for a deep search, which benefits the local search.
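A sketch of the IEPSO velocity update in the same vectorized style is given below; it assumes, as Formula (2) indicates, that $\varphi_3$ enters the update additively.

    import numpy as np

    def iepso_velocity(x, v, p_best, g_best, omega, c1, c2, c3, rng):
        """IEPSO velocity update (Formulas (2) and (3)): the three standard
        PSO terms plus phi3 = C3 * R3 * |p_g - p_i|."""
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        r3 = rng.random(x.shape)
        # Local-global information sharing between personal and global optima.
        phi3 = c3 * r3 * np.abs(g_best - p_best)
        return omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x) + phi3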

After the velocity update, particles whose speed remains within the predetermined range retain their updated speed, whereas particles whose speed exceeds the range are assigned the maximum velocity. Likewise, after the position update, particles that remain within the predetermined range keep their new positions; particles that leave the range are treated as inferior and are eliminated, and new particles are generated within the predetermined range to form a new population. The fitness values of the new population are recalculated, and the individual best positions and the global best position and fitness value obtained in the current iteration are preserved.

In all such algorithms, particles have good global search capability at the beginning of the iteration, but as individual particles move closer to the locally optimal particle, the population gradually loses diversity. On the basis of the population-variation idea of the traditional genetic algorithm (GA), the last-eliminated principle is applied in the IEPSO algorithm to maintain particle population diversity. When the PSO satisfies the local convergence condition, the optimal value obtained at this point may only be a local optimum. Using the particle fitness function as the evaluation criterion, particles with poor fitness or high similarity are eliminated, new particles are added to the population within the predetermined range, and the particle swarm operations are reexecuted. If the current iteration reaches the predefined maximum number of iterations or the required convergence accuracy, the iteration stops, and the optimal solution is output. The local-global information sharing term and the last-eliminated principle increase the complexity and runtime of the algorithm; nevertheless, the experimental results show that these modifications enhance the accuracy of the algorithm.
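The velocity clamping and elimination steps described above can be sketched as follows. The replacement fraction `frac` is an assumed tuning parameter, since the paper does not state how many particles are eliminated per application, and a minimization problem is assumed.

    import numpy as np

    def clamp_velocity(v, v_max):
        # Particles whose updated speed exceeds the predetermined range
        # are assigned the maximum velocity.
        return np.clip(v, -v_max, v_max)

    def apply_last_eliminated(x, v, fitness, lo, hi, frac=0.2, rng=None):
        """Replace the worst-fitness particles with fresh random particles
        drawn inside the predetermined range [lo, hi]."""
        rng = np.random.default_rng() if rng is None else rng
        n_new = max(1, int(frac * len(x)))
        worst = np.argsort(fitness)[-n_new:]  # minimization: largest fitness is worst
        x[worst] = rng.uniform(lo, hi, size=(n_new, x.shape[1]))
        v[worst] = 0.0  # restart the replacement particles from rest
        return x, v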

3. Experimental Study

Eleven test functions are adopted in this study to test the performance of the proposed IEPSO. In this test, $f_1$-$f_5$ are unimodal functions, whereas $f_6$-$f_{11}$ are multimodal functions. $f_6$ (Griewank) is a multimodal function with multiple local extrema, on which reaching the theoretical global optimum is difficult. $f_7$ (Rastrigin) possesses several local minima, which make finding the global optimal value difficult. $f_{10}$ (Ackley) is an almost flat region modulated by a cosine wave to form holes and peaks; its surface is uneven, so entering a local optimum during optimization is easy. $f_{11}$ (Cmfun) possesses multiple local extrema around the global extremum point, into which the search easily falls. Table 1 presents the 11 test functions, where D is the space dimension, S is the search range, and CF is the theoretically optimal value.
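For reference, the standard literature forms of three of these multimodal benchmarks, matching Table 1, can be written in Python as follows (x is a 1D NumPy array):

    import numpy as np

    def griewank(x):   # f6: many shallow local minima around the global optimum
        i = np.arange(1, x.size + 1)
        return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

    def rastrigin(x):  # f7: a regular lattice of local minima
        return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

    def ackley(x):     # f10: a nearly flat outer region modulated by a cosine
        d = x.size
        return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
                - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

All three attain their theoretical optimum CF = 0 at the origin.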

3.1. Parameter Influence Analysis of the Local-Global Information Sharing Term. This study introduces a local-global information sharing term, which involves the parameter $C_3$. The manner in which $C_3$ should be selected is therefore explored on the 11 test functions.

(1) When $C_3$ takes a constant value, the constant 2 is selected.

(2) The linear variation formula of $C_3$ is as follows:

$$C_3 = \left(C_{3\_\mathrm{start}} - C_{3\_\mathrm{end}}\right)\left(\frac{1+k}{2} - k\,\frac{t}{t_{\max}}\right) + C_{3\_\mathrm{end}}, \quad (4)$$

where $k$ is the control factor: when $k = 1$, $C_3$ is a linearly decreasing function; when $k = -1$, $C_3$ is a linearly increasing function. $C_{3\_\mathrm{start}}$ and $C_{3\_\mathrm{end}}$ are the initial and termination values of $C_3$, respectively, $t$ is the iteration number, and $t_{\max}$ is the maximum number of iterations.
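A one-line implementation of this schedule is sketched below; the expression follows the reconstructed Formula (4), and the function name and default values are illustrative.

    def c3_linear(t, t_max, c3_start=2.0, c3_end=0.0, k=1):
        """Linear C3 schedule: k = 1 decreases C3 from c3_start to c3_end
        over the run; k = -1 increases it from c3_end to c3_start."""
        return (c3_start - c3_end) * ((1 + k) / 2.0 - k * t / t_max) + c3_end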

Tables 2 and 3 and Figure 2 show the results for the three cases in which $C_3$ is a constant, linearly decreasing, and linearly increasing. When the parameter $C_3$ of the local-global information sharing term is a linearly decreasing function, the average fitness value on the test functions is optimal, and the convergence speed and the capability to jump out of local extrema are higher than in the other two cases. When $C_3$ is a constant, the algorithm cannot balance the global and local search, resulting in a "premature" phenomenon. When $C_3$ adopts the linearly decreasing form, the entire area can be searched quickly at an early stage, and close attention is paid to the local search in the latter part of the iteration, which enhances the deep search ability of the algorithm. When $C_3$ adopts the linearly increasing form, the algorithm focuses on global-local information exchange in the later stage of the iteration; although this increases the deep search ability, it causes the convergence speed to stagnate. Therefore, compared with the linearly increasing form, the linearly decreasing form yields a simulation curve that converges faster and with higher precision.

Therefore, the selection rule for the parameter $C_3$ of the local-global information sharing term as a decreasing function is investigated further in this study. The nonlinear variation formula of $C_3$ is as follows:

$$C_3 = \left(C_{3\_\mathrm{start}} - C_{3\_\mathrm{end}}\right) \tan\!\left(0.875 \times \left(1 - \frac{t}{t_{\max}}\right)^{k}\right) + C_{3\_\mathrm{end}}, \quad (5)$$

where $C_{3\_\mathrm{start}}$ and $C_{3\_\mathrm{end}}$ are the initial and termination values of the acceleration term $C_3$, respectively, and $k$ is the control factor: when $k = 0.2$, $C_3$ is a convex decreasing function; when $k = 2$, $C_3$ is a concave decreasing function. $t$ is the iteration number, and $t_{\max}$ is the maximum number of iterations.
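The corresponding sketch of Formula (5) is given below; note that with $C_{3\_\mathrm{start}} = 2$ and $C_{3\_\mathrm{end}} = 0$, the schedule starts slightly above 2 because $\tan(0.875) \approx 1.2$.

    import math

    def c3_nonlinear(t, t_max, c3_start=2.0, c3_end=0.0, k=0.2):
        """Nonlinear C3 schedule (Formula (5)): k = 0.2 yields a convex
        decreasing curve (slow early decline), k = 2 a concave one."""
        return (c3_start - c3_end) * math.tan(0.875 * (1.0 - t / t_max) ** k) + c3_end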

Table 4 shows that when $C_3$ is a convex function, the algorithm obtains satisfactory precision and robustness on $f_1$-$f_5$. Table 5 shows that when $C_3$ is a convex function, the algorithm obtains a satisfactory solution and a fast convergence rate on $f_6$, $f_8$, $f_9$, $f_{10}$, and $f_{11}$. On the unimodal test functions, the IEPSO algorithm does not show its advantages because its strength lies in deep search. On the complex multimodal test functions, when the convex function is used for $C_3$, the downward trend is slow in the early stage, benefiting the global search, and the downward speed increases in the later stage, benefiting the local search. When the concave function is used for $C_3$, the descent is fast in the early stage; although the search speed improves, the coverage of the search is reduced, leading the algorithm to converge to a nonoptimal value. The simulation diagrams (f)-(k) in Figure 4 show that the convergence speed is slightly slower when $C_3$ is a convex function, but its ability to jump out of local extrema and the accuracy of the global search are higher than in the other two cases. When $C_3$ is a concave function, the convergence speed is faster than in the other two cases, but the search accuracy is lower than when $C_3$ is a convex function.

3.2. Comparison of Test Results. The 11 test functions in Table 1 are used to compare the IEPSO algorithm with the classical PSO, SPSO, differential evolution (DE), and GA. The DE, GA, and PSO algorithms are all stochastic intelligent optimization algorithms with population iterations. The evaluation criteria of algorithm performance include the speed of convergence and the extent of the population's search coverage. The DE algorithm has low space complexity and obvious advantages in dealing with large-scale and complex optimization problems. The GA converges well when solving discrete, multimodal, and noisy optimization problems. Building on the traditional PSO algorithm, the SPSO algorithm balances global search and local search by adjusting the inertial weight.

The experimental parameters of the five algorithms are set as shown in Table 6. Each test function is run independently 10 times, and the average is recorded to reduce data error. The iteration stops when the convergence condition meets the required accuracy. The best average fitness value among the five algorithms is shown in boldface. The standard deviation, average fitness, and optimal value of each algorithm are listed in Tables 7 and 8; Figures 5 and 6 plot the convergence curves for the 11 test functions.
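The averaging protocol can be mirrored with a small harness such as the one below; `run_once(objective, rng) -> float`, returning the final best fitness of a single run, is an assumed interface rather than an API from the paper.

    import numpy as np

    def average_over_runs(run_once, objective, runs=10, seed=0):
        """Repeat an optimizer `runs` times and report the mean, standard
        deviation, and best of the final fitness values (cf. Tables 7 and 8)."""
        master = np.random.default_rng(seed)
        finals = [run_once(objective, np.random.default_rng(master.integers(2**32)))
                  for _ in range(runs)]
        return float(np.mean(finals)), float(np.std(finals)), float(np.min(finals))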

Table 7 shows that the IEPSO has the best performance on $f_1$, $f_2$, $f_3$, and $f_4$, and it obtains the theoretical optimal value on $f_2$. DE can find the global solution on $f_5$. The deep search capability of the IEPSO algorithm is considerably higher than that of the PSO and SPSO algorithms because of the added local-global information sharing term and the last-eliminated principle. The crossover, mutation, and selection mechanisms make the DE algorithm perform well in the early stage of the global search; however, population diversity declines in the later stage because of population differences. The simulation diagrams (a)-(e) in Figure 5 show that although the DE algorithm converges rapidly in the early stage, its global search performance in the later stage becomes lower than that of the IEPSO algorithm. When the GA is used to solve optimization problems, the individuals in the population fall into a local optimum and do not continue searching for the optimal solution; therefore, in Figure 5, the simulation curve of the GA converges to a local optimum.

The test results in Table 8 indicate that the IEPSO has the best performance on $f_6$, $f_7$, $f_8$, $f_9$, $f_{10}$, and $f_{11}$ and that the DE and GA can also obtain the theoretical optimal value on $f_9$ and $f_{11}$. Although both the GA and the IEPSO algorithm obtain the global optimal value on $f_9$, the IEPSO algorithm is more robust than the GA. As shown in the simulation curves of Figure 6, population diversity is maintained because the supplementary particles added to the population are stochastic as the swarm gradually converges toward a local optimal solution. The IEPSO algorithm can thus jump out of local extrema on complex multimodal test functions, and the number of iterations required is correspondingly reduced.

Table 9 shows the test results of three improved PSO algorithms. The DMSDL-PSO algorithm in [25] combines PSO with differential mutation and the quasi-Newton method, whereas the HPSOWM algorithm in [26] is a binary hybrid PSO algorithm based on wavelet mutation. Table 9 shows that the IEPSO algorithm obtains the best value on five of the six test functions reported, and the above analysis indicates that the IEPSO outperforms the other improved PSO algorithms.

4. Conclusion

In contemporary engineering design, solving the global optimization problems of multiparameter, strongly coupled, and nonlinear systems with conventional optimization algorithms is difficult. In this study, an improved PSO algorithm, the IEPSO, is proposed on the basis of the last-eliminated principle and an enhanced local-global information sharing capability. The comparison and analysis of the simulation results indicate the following conclusions:

(1) The exchange of information between global and local optimal particles enhances the deep search capability of the IEPSO algorithm.

(2) The standard test functions are used to study the parameter $C_3$ of the local-global information sharing term. The results show that the global optimization capability of the IEPSO algorithm is strong when $C_3$ is linearly decreasing; moreover, the proposed algorithm shows the best search performance when $C_3$ is a nonlinear convex decreasing function.

(3) The last-eliminated principle is used in the IEPSO to maintain particle population diversity and to prevent the algorithm from stagnating at a local optimal value. A comparison of the IEPSO algorithm with a classical optimization algorithm and its improved versions verifies the global search capability of the IEPSO algorithm.

In summary, the comparative results of the simulation analysis reveal that, with the application of the last-eliminated principle and the local-global information sharing term, the proposed IEPSO effectively overcomes the disadvantages of the classical algorithms, including premature convergence and the tendency to fall into local optima. The IEPSO shows ideal global optimization performance and has high application value for solving practical engineering optimization problems.

https://doi.org/10.1155/2018/5025672

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the Shanghai Rising-Star Program (no. 16QB1401000), the Key Project of the Shanghai Science and Technology Committee (no. 16DZ1120400), the National Natural Science Foundation of China (project no. 51705187), and the China Postdoctoral Science Foundation (grant no. 2017M621202).

References

[1] Z. Zhou, J. Wang, Z. Zhu, D. Yang, and J. Wu, "Tangent navigated robot path planning strategy using particle swarm optimized artificial potential field," Optik, vol. 158, pp. 639-651, 2018.

[2] P. Du, R. Barrio, H. Jiang, and L. Cheng, "Accurate Quotient-Difference algorithm: error analysis, improvements and applications," Applied Mathematics and Computation, vol. 309, pp. 245-271, 2017.

[3] L. Jiang, Z. Wang, Y. Ye, and J. Jiang, "Fast circle detection algorithm based on sampling from difference area," Optik, vol. 158, pp. 424-433, 2018.

[4] H. Garg, "A hybrid PSO-GA algorithm for constrained optimization problems," Applied Mathematics & Computation, vol. 274, no. 11, pp. 292-305, 2016.

[5] J. Zhang and P. Xia, "An improved PSO algorithm for parameter identification of nonlinear dynamic hysteretic models," Journal of Sound and Vibration, vol. 389, pp. 153-167, 2017.

[6] R. Saini, P. P. Roy, and D. P. Dogra, "A segmental HMM based trajectory classification using genetic algorithm," Expert Systems with Applications, vol. 93, pp. 169-181, 2018.

[7] P. R. D. O. D. Costa, S. Mauceri, P. Carroll et al., "A genetic algorithm for a vehicle routing problem," Electronic Notes in Discrete Mathematics, vol. 64, pp. 65-74, 2017.

[8] V. Jindal and P. Bedi, "An improved hybrid ant particle optimization (IHAPO) algorithm for reducing travel time in VANETs," Applied Soft Computing, vol. 64, pp. 526-535, 2018.

[9] Z. Peng, H. Manier, and M. A. Manier, "Particle swarm optimization for capacitated location-routing problem," IFAC-PapersOnLine, vol. 50, no. 1, pp. 14668-14673, 2017.

[10] G. Xu and G. Yu, "Reprint of: on convergence analysis of particle swarm optimization algorithm," Journal of Shanxi Normal University, vol. 4, no. 14, pp. 25-32, 2008.

[11] J. Lu, W. Xie, and H. Zhou, "Combined fitness function based particle swarm optimization algorithm for system identification," Computers & Industrial Engineering, vol. 95, pp. 122-134, 2016.

[12] F. Javidrad and M. Nazari, "A new hybrid particle swarm and simulated annealing stochastic optimization method," Applied Soft Computing, vol. 60, pp. 634-654, 2017.

[13] J. Jie, J. Zhang, H. Zheng, and B. Hou, "Formalized model and analysis of mixed swarm based cooperative particle swarm optimization," Neurocomputing, vol. 174, pp. 542-552, 2016.

[14] A. Meng, Z. Li, H. Yin, S. Chen, and Z. Guo, "Accelerating particle swarm optimization using crisscross search," Information Sciences, vol. 329, pp. 52-72, 2016.

[15] L. Wang, B. Yang, and J. Orchard, "Particle swarm optimization using dynamic tournament topology," Applied Soft Computing, vol. 48, pp. 584-596, 2016.

[16] M. S. Kiran, "Particle swarm optimization with a new update mechanism," Applied Soft Computing, vol. 60, pp. 670-678, 2017.

[17] H. C. Tsai, "Unified particle swarm delivers high efficiency to particle swarm optimization," Applied Soft Computing, vol. 55, pp. 371-383, 2017.

[18] S. F. Li and C. Y. Cheng, "Particle swarm optimization with fitness adjustment parameters," Computers & Industrial Engineering, vol. 113, pp. 831-841, 2017.

[19] Y. Chen, L. Li, H. Peng, J. Xiao, Y. Yang, and Y. Shi, "Particle swarm optimizer with two differential mutation," Applied Soft Computing, vol. 61, pp. 314-330, 2017.

[20] Q. Zhang, W. Liu, X. Meng, B. Yang, and A. V. Vasilakos, "Vector coevolving particle swarm optimization algorithm," Information Sciences, vol. 394, pp. 273-298, 2017.

[21] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), vol. 3, pp. 1945-1950, IEEE, Washington, DC, USA, 1999.

[22] Z. Wang and J. Cai, "The path-planning in radioactive environment of nuclear facilities using an improved particle swarm optimization algorithm," Nuclear Engineering & Design, vol. 326, pp. 79-86, 2018.

[23] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.

[24] K. Chen, F. Zhou, and A. Liu, "Chaotic dynamic weight particle swarm optimization for numerical function optimization," Knowledge-Based Systems, vol. 139, pp. 23-40, 2018.

[25] Y. Chen, L. Li, H. Peng, J. Xiao, and Q. Wu, "Dynamic multi-swarm differential learning particle swarm optimizer," Swarm and Evolutionary Computation, vol. 39, pp. 209-221, 2018.

[26] F. Jiang, H. Xia, Q. A. Tran, Q. M. Ha, N. Q. Tran, and J. Hu, "A new binary hybrid particle swarm optimization with wavelet mutation," Knowledge-Based Systems, vol. 130, pp. 90-101, 2017.

[27] R. Liu, J. Li, C. Mu, J. fan, and L. Jiao, "A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization," European Journal of Operational Research, vol. 261, no. 3, pp. 1028-1051, 2017.

[28] W. Ye, W. Feng, and S. Fan, "A novel multi-swarm particle swarm optimization with dynamic learning strategy," Applied Soft Computing, vol. 61, pp. 832-843, 2017.

[29] L. Zhang, Y. Tang, C. Hua, and X. Guan, "A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques," Applied Soft Computing, vol. 28, pp. 138-149, 2015.

[30] Q. Cui, Q. Li, G. Li et al., "Globally-optimal prediction-based adaptive mutation particle swarm optimization," Information Sciences, vol. 418, pp. 186-217, 2017.

[31] D. Zhao and J. Liu, "Study on network security situation awareness based on particle swarm optimization algorithm," Computers & Industrial Engineering, vol. 125, pp. 764-775, 2018.

[32] H. Samareh, S. H. Khoshrou, K. Shahriar, M. M. Ebadzadeh, and M. Eslami, "Optimization of a nonlinear model for predicting the ground vibration using the combinational particle swarm optimization-genetic algorithm," Journal of African Earth Sciences, vol. 133, pp. 36-45, 2017.

[33] M. Dash, T. Panigrahi, and R. Sharma, "Distributed parameter estimation of IIR system using diffusion particle swarm optimization algorithm," Journal of King Saud University-Engineering Sciences, 2017, In press.

[34] B. Wang, S. Li, J. Guo, and Q. Chen, "Car-like mobile robot path planning in rough terrain using multi-objective particle swarm optimization algorithm," Neurocomputing, vol. 282, pp. 42-51, 2018.

[35] Z. Wang and J. Cai, "The path-planning in radioactive environment of nuclear facilities using an improved particle swarm optimization algorithm," Nuclear Engineering & Design, vol. 326, pp. 79-86, 2018.

[36] R. F. Lopes, F. F. Costa, A. Oliveira et al., "Algorithm based on particle swarm applied to electrical load scheduling in an industrial setting," Energy, vol. 147, pp. 1007-1015, 2018.

[37] F. Sheikholeslami and N. J. Navimipour, "Service allocation in the cloud environments using multi-objective particle swarm optimization algorithm based on crowding distance," Swarm & Evolutionary Computation, vol. 35, pp. 53-64, 2017.

[38] M. Petrovic, N. Vukovic, M. Mitic et al., "Integration of process planning and scheduling using chaotic particle swarm optimization algorithm," Expert Systems with Applications, vol. 64, pp. 569-588, 2016.

[39] Z. Zhang, Y. Jiang, S. Zhang, S. Geng, H. Wang, and G. Sang, "An adaptive particle swarm optimization algorithm for reservoir operation optimization," Applied Soft Computing Journal, vol. 18, no. 4, pp. 167-177, 2014.

[40] K. Li, L. Liu, J. Zhai, T. M. Khoshgoftaar, and T. Li, "The improved grey model based on particle swarm optimization algorithm for time series prediction," Engineering Applications of Artificial Intelligence, vol. 55, pp. 285-291, 2016.

[41] S. Gulcu and H. Kodaz, "The estimation of the electricity energy demand using particle swarm optimization algorithm: a case study of Turkey," Procedia Computer Science, vol. 111, pp. 64-70, 2017.

Xueying Lv, (1) Yitian Wang, (1) Junyi Deng, (2) Guanyu Zhang, (1,3) and Liu Zhang (1,3)

(1) College of Instrumentation & Electrical Engineering, Jilin University, Changchun 130061, China

(2) College of Computer Science and Technology, Jilin University, Changchun 130022, China

(3) National Engineering Research Center of Geophysics Exploration Instruments, Jilin University, Changchun 130061, China

Correspondence should be addressed to Guanyu Zhang; zhangguanyu@jlu.edu.cn and Liu Zhang; zhangliu@jlu.edu.cn

Received 18 May 2018; Revised 20 September 2018; Accepted 2 October 2018; Published 5 December 2018

Academic Editor: Cornelio Yanez-Marquez

Caption: Figure 1: IEPSO algorithm flowchart.

Caption: Figure 2: 11 test functions: (a) $f_1$ Sphere function; (b) $f_2$ Schaffer function; (c) $f_3$ Step function; (d) $f_4$ SumSquares function; (e) $f_5$ Zakharov function; (f) $f_6$ Griewank function; (g) $f_7$ Rastrigin function; (h) $f_8$ Alpine function; (i) $f_9$ Shubert function; (j) $f_{10}$ Ackley function; (k) $f_{11}$ Cmfun function.

Caption: Figure 3: The change curve of $C_3$ with the number of iterations.

Caption: Figure 4: 11 test functions: (a) $f_1$ Sphere function; (b) $f_2$ Schaffer function; (c) $f_3$ Step function; (d) $f_4$ SumSquares function; (e) $f_5$ Zakharov function; (f) $f_6$ Griewank function; (g) $f_7$ Rastrigin function; (h) $f_8$ Alpine function; (i) $f_9$ Shubert function; (j) $f_{10}$ Ackley function; (k) $f_{11}$ Cmfun function.

Caption: Figure 5: Unimodal functions: (a) $f_1$ Sphere function; (b) $f_2$ Schaffer function; (c) $f_3$ Step function; (d) $f_4$ SumSquares function; (e) $f_5$ Zakharov function.

Caption: Figure 6: Multimodal functions: (a) $f_6$ Griewank function; (b) $f_7$ Rastrigin function; (c) $f_8$ Alpine function; (d) $f_9$ Shubert function; (e) $f_{10}$ Ackley function; (f) $f_{11}$ Cmfun function.
Table 1: 11 test functions.

No.       Test function                                                                                                            S                   CF
$f_1$     Sphere: $f_1(x) = \sum_{i=1}^{D} x_i^2$ [10]                                                                             $[-100, 100]^D$     0
$f_2$     Schaffer: $f_2(x, y) = 0.5 + (\sin^2\sqrt{x^2 + y^2} - 0.5) / (1 + 0.001(x^2 + y^2))^2$ [33]                             $[-100, 100]^D$     0
$f_3$     Step: $f_3(x) = \sum_{i=1}^{D} \lfloor x_i + 0.5 \rfloor^2$ [10]                                                         $[-100, 100]^D$     0
$f_4$     SumSquares: $f_4(x) = \sum_{i=1}^{D} i x_i^2$ [10]                                                                       $[-10, 10]^D$       0
$f_5$     Zakharov: $f_5(x) = \sum_{i=1}^{D} x_i^2 + (\sum_{i=1}^{D} 0.5 i x_i)^2 + (\sum_{i=1}^{D} 0.5 i x_i)^4$ [10]             $[-100, 100]^D$     0
$f_6$     Griewank: $f_6(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos(x_i / \sqrt{i}) + 1$ [10]                  $[-600, 600]^D$     0
$f_7$     Rastrigin: $f_7(x) = \sum_{i=1}^{D} [x_i^2 - 10\cos(2\pi x_i) + 10]$ [10]                                                $[-5.12, 5.12]^D$   0
$f_8$     Alpine: $f_8(x) = \sum_{i=1}^{D} |x_i \sin x_i + 0.1 x_i|$ [6]                                                           $[-10, 10]^D$       0
$f_9$     Shubert: $\min f_9(x, y) = \{\sum_{i=1}^{5} i \cos[(i+1)x + i]\} \times \{\sum_{i=1}^{5} i \cos[(i+1)y + i]\}$           $[-10, 10]^D$       -186.731
$f_{10}$  Ackley: $f_{10}(x) = -20 \exp(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}) - \exp(\frac{1}{D}\sum_{i=1}^{D} \cos 2\pi x_i) + 20 + e$ [10]   $[-32, 32]^D$   0
$f_{11}$  Cmfun: $f_{11}(x, y) = x \sin(\sqrt{|x|}) + y \sin(\sqrt{|y|})$                                                          $[-500, 500]$       -837.966

Table 2: Unimodal test functions.

Functions   Criteria   C3 = 2      C3 = 2~0 (k = -1)   C3 = 2~0 (k = 1)
f1          Mean       7.22E+02    1.07E-06            4.50E-20
            SD         3.97E+04    1.11E-12            3.75E-16
            Best       4.05E+02    2.41E-08            1.55E-25
f2          Mean       2.50E-06    2.22E-17            0
            SD         2.32E-12    2.59E-33            0
            Best       2.85E-07    0                   0
f3          Mean       1.99E+02    8.03E-07            1.82E-20
            SD         2.15E+04    1.04E-12            1.05E-39
            Best       35.81       3.95E-08            3.22E-24
f4          Mean       7.110       2.45E-08            8.20E-20
            SD         9.57        7.95E-16            5.11E-38
            Best       1.47        4.08E-09            8.40E-26
f5          Mean       1.74E+03    3.86E-04            5.56E-11
            SD         2.44E+05    1.49E-07            4.80E-14
            Best       8.29E+02    9.78E-06            3.50E-11

Table 3: Multimodal test functions.

Functions   Criteria   C3 = 2      C3 = 2~0 (k = -1)   C3 = 2~0 (k = 1)
f6          Mean       1.10        8.18E-02            4.90E-02
            SD         4.6E-03     8.37E-04            5.96E-04
            Best       0.96        4.33E-02            1.23E-02
f7          Mean       35.03       4.10                1.90E-04
            SD         8.44        2.5461              5.699
            Best       29.67       2.057               2.25E-05
f8          Mean       2.93        1.33E-03            5.28E-10
            SD         0.30        3.10E-08            2.23E-12
            Best       2.02        1.34E-05            5.83E-13
f9          Mean       -186.7295   -186.7309           -186.7309
            SD         1.20E-06    0                   0
            Best       -186.7307   -186.7309           -186.7309
f10         Mean       7.649       2.35E-04            1.84E-11
            SD         0.415       4.39E-09            2.27E-22
            Best       6.513       5.73E-05            2.50E-12
f11         Mean       -837.9658   -837.9658           -837.9658
            SD         4.50E-09    0                   4.00E-09
            Best       -837.9658   -837.9658           -837.9658

Table 4: Unimodal test functions.

Functions   Criteria   C3 = 2~0 (k = 0.2)   C3 = 2~0 (k = 2)   C3 = 2~0 (k = 1)
f1          Mean       2.66E-20             5.51E-10           4.50E-20
            SD         2.65E-39             2.87E-19           3.75E-16
            Best       9.12E-24             1.38E-11           1.55E-25
f2          Mean       0                    0                  0
            SD         0                    0                  0
            Best       0                    0                  0
f3          Mean       6.21E-19             6.04E-10           1.82E-20
            SD         2.63E-36             7.79E-19           1.05E-39
            Best       1.81E-27             3.08E-11           3.22E-24
f4          Mean       1.70E-21             2.42E-11           8.20E-20
            SD         1.31E-41             4.40E-22           5.11E-38
            Best       2.82E-29             4.36E-12           8.43E-26
f5          Mean       1.65E-10             2.83E-11           5.56E-11
            SD         3.30E-20             3.59E-11           4.80E-14
            Best       2.17E-11             1.00E-11           3.50E-11

Table 5: Multimodal test functions.

Functions   Criteria   C3 = 2~0 (k = 0.2)   C3 = 2~0 (k = 2)   C3 = 2~0 (k = 1)
f6          Mean       4.10E-02             4.79E-02           4.92E-02
            SD         3.33E-04             7.07E-04           5.96E-04
            Best       1.25E-02             5.7E-03            1.23E-02
f7          Mean       4.46E-03             5.00E-05           1.90E-04
            SD         1.73E-04             3.03E-06           5.649
            Best       2.31E-12             3.89E-11           2.25E-05
f8          Mean       2.20E-10             3.74E-10           5.28E-10
            SD         6.70E-20             2.47E-12           2.23E-12
            Best       3.71E-16             4.36E-11           5.83E-13
f9          Mean       -186.7309            -186.7309          -186.7309
            SD         0                    0                  0
            Best       -186.7309            -186.7309          -186.7309
f10         Mean       1.13E-11             2.05E-10           1.84E-11
            SD         2.21E-22             4.37E-12           2.27E-22
            Best       5.06E-14             1.75E-10           2.50E-12
f11         Mean       -837.9658            -837.9658          -837.9658
            SD         0                    0                  4.00E-09
            Best       -837.9658            -837.9658          -837.9658

Table 6: Parameter settings.

Algorithm   Population   Maximum iteration   Dim of each object   Others
PSO         40           1000                10                   $C_1 = C_2 = 2$; $R_1 = R_2 = 0.5$
SPSO        40           1000                10                   $\omega$ = 0.9~0.4; $C_1 = C_2 = 2$; $R_1 = R_2 = 0.5$
DE          40           1000                10                   --
GA          40           1000                10                   GGAP = 0.5; PRECI = 25
IEPSO       40           1000                10                   $\omega$ = 0.9~0.4; $C_1 = C_2 = 2$; $C_3$ = 2~0; $R_1 = R_2 = R_3 = 0.5$

Table 7: Unimodal test functions.

Functions   Criteria   PSO        SPSO       DE         IEPSO      GA
f1          Mean       1.33E+03   3.08E+03   7.31E-12   8.92E-22   11.696
            SD         2.53E+05   1.21E+06   2.25E-23   2.65E-39   44.192
            Best       1.14E+03   1.20E+03   2.42E-12   7.72E-27   4.660
f2          Mean       2.96E-02   8.80E-02   8.37E-06   0          1.79E-11
            SD         8.36E-04   8.96E-04   1.58E-10   0          0
            Best       4.55E-03   8.428734   7.55E-10   0          1.79E-11
f3          Mean       1.19E+03   2.51E+03   1.14E-11   6.21E-19   7.430
            SD         2.93E+05   1.82E+06   9.95E-23   2.63E-36   5.833
            Best       1.06E+03   2.82E-02   2.10E-12   1.81E-27   4.542
f4          Mean       82.38      82.10      3.36E-13   1.70E-21   3.031
            SD         6.86E+02   1.40E+03   9.95E-26   1.31E-41   0.835
            Best       1.15E+02   37.39      1.15E-13   2.82E-29   1.968
f5          Mean       1.26E+04   8.60E+03   7.02E-12   1.65E-10   3.62E+03
            SD         2.06E+07   2.15E+07   1.81E-23   3.30E-20   3.44E+05
            Best       1.04E+04   1.30E+02   2.67E-12   2.17E-11   2.53E+03

Table 8: Multimodal test functions.

Functions   Criteria   PSO         SPSO        DE          IEPSO       GA
f6          Mean       1.548       1.752       9.44E-02    4.19E-02    1.006
            SD         0.026       0.093       4.87E-04    3.43E-04    0.018
            Best       1.236       1.417       0.06        0.013       0.794
f7          Mean       57.737      43.405      11.945      4.06E-03    8.939
            SD         117.768     65.178      16.502      1.73E-04    3.608
            Best       35.981      3.17E+01    6.398       2.31E-12    5.040
f8          Mean       4.996       4.665       3.79E-02    2.22E-10    0.423
            SD         1.91E+00    1.056       5.4E-03     6.70E-20    0.051
            Best       2.933       3.151       4.6E-03     3.71E-16    0.086
f9          Mean       -186.448    -186.048    -186.728    -186.731    -186.731
            SD         1.19E-01    9.83E-01    2.29E-08    0           9.99E-12
            Best       -1.87E+02   -186.731    -186.7309   -186.7309   -186.731
f10         Mean       13.134      15.560      1.613       1.13E-11    2.515
            SD         14.260      2.163       0           2.21E-22    0.166
            Best       2.861       12.719      1.613       5.06E-14    1.796
f11         Mean       -740.326    -715.438    -837.966    -837.966    -837.966
            SD         8.74E+03    7.23E+03    0           0           0
            Best       -837.966    -837.697    -837.966    -837.966    -837.966

Table 9: Test results of three improved particle swarm algorithms.

Functions   Criteria   IEPSO      DMSDL-PSO [25]   HPSOWM [26]
f1          Mean       8.92E-22   4.73E-10         42.40
            SD         2.65E-39   1.81E-09         52.11
f3          Mean       6.21E-19   2.37E+03         7.61
            SD         2.63E-36   5.71E+02         0.07
f6          Mean       4.19E-02   8.66E-05         --
            SD         3.43E-04   4.96E-04         --
f7          Mean       4.46E-03   9.15E+01         76.18
            SD         1.73E-04   1.80E+01         26.75
f8          Mean       4.44E-10   1.31E+02         --
            SD         6.74E-40   5.82E+01         --
f10         Mean       1.13E-11   1.01E+00         1.72
            SD         4.41E-44   2.71E-01         0