
Solving Equation Systems Using Artificial Intelligence: A Survey

1 INTRODUCTION

Although solving systems of equations is a very old and much-discussed problem in algebra, the topic continues to be of great interest to both mathematicians and computer scientists, as witnessed by the large number of scientific papers on numerical methods that have appeared lately. Why this particular and intense interest in solving equation systems? The answer is simple: the real world can be described, understood, and modelled more easily and more accurately using systems of equations.

Before the computer era, the study of solving equation systems was primarily the concern of mathematicians. Interest in the issue goes back to antiquity, when The Nine Chapters on the Mathematical Art, a Chinese book composed by several generations of scholars from the 10th to the 2nd century BCE, presented a technique for solving equation systems very close to Gaussian elimination. The contributions of ancient Greek mathematicians must also be taken into consideration. Other important contributors to this issue were Michel Rolle (1690), Isaac Newton (1720), Nathaniel Hammond (1742), and Thomas Simpson (1755).

The computer, which emerged in the last century, became the primary tool for implementing methods of solving equation systems, as the systems to be solved grew ever larger and became practically impossible to solve with pencil and paper. Moreover, computers generated a second stage in the development of solution methods. The next stage began with the use of parallel computers, which required the adaptation of the known solving methods and allowed the real-time solution of large and very large systems (millions of equations).

Artificial intelligence and its results in recent decades have led to a new stage in solving equation systems. Being different from the traditional mathematical approach, this stage is not easily accepted by mathematicians.

Nowadays, mathematicians consider two main types of methods for solving equation systems: direct methods and iterative methods.

Direct methods mainly consist of converting the system of equations into an equivalent triangular system that is easier to solve. They can determine the exact solution of the system only in the ideal case when there are no rounding errors due to the finite representation of numbers in computer memory. Major problems arise when systems with more than 100 equations are solved with direct methods: the accumulation of rounding errors seriously alters the solution and renders these methods unusable. From the complexity point of view, the number of arithmetic operations performed is in general O(n^3), which is not very encouraging for a software developer. The best-known methods of this type are Cramer's rule, Gaussian elimination, Gauss-Jordan elimination, LU factorization (decomposition), etc.
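
To make the direct approach concrete, here is a minimal Python sketch (ours, not taken from any surveyed paper) of Gaussian elimination with partial pivoting; the function name and the toy 2x2 system are illustrative only:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring up the row with the largest pivot candidate.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # elimination multiplier
            A[i, k:] -= m * A[k, k:]       # zero out the entry below the pivot
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_elimination(A, b))  # [0.8, 1.4]
```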

Iterative methods determine the solution by an iterative process, starting from an initial approximation of the solution, provided that the equation system is well conditioned. In practice, the iterative process is stopped after a preset number of steps (unknown a priori) or after certain conditions have been met. With iterative methods, the solution of a system of linear equations is obtained by generating a sequence that tends to its exact solution. A major advantage of iterative methods is that in practice rounding and truncation errors may be insignificant. We note that even though the solution is affected by truncation errors, which are not encountered in direct methods, the iterative solution may have better accuracy than the solution obtained through direct methods [51]. The best-known methods of this type are Gauss-Seidel, conjugate gradient, quasi-Newton methods, Broyden, Chebyshev, etc.
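
For comparison, a minimal Gauss-Seidel sketch (again ours; the toy system is strictly diagonally dominant, a standard sufficient condition for convergence):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Iterate x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii until convergence."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Uses already-updated components x[0..i-1] (Gauss-Seidel sweep).
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])  # strictly diagonally dominant
b = np.array([6.0, 9.0])
print(gauss_seidel(A, b))  # approx [1.1667, 1.3333]
```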

A short presentation of the advantages and disadvantages of some classical methods is given in Table 1.

In conclusion, we can say that the right choice of a classical numerical method is not simple; moreover, an improper selection can lead to wrong solutions.

Besides the classical numerical methods for solving equation systems, new methods that use artificial intelligence techniques to determine the solutions have been proposed in recent decades. The hybridization of these new methods with classical methods has also led to promising results.

2. TECHNIQUES FROM AI FOR SOLVING EQUATION SYSTEMS

This section gives short descriptions of the artificial intelligence methods most frequently used for solving equation systems.

Monte Carlo method

The Monte Carlo method (MCM) is an old search algorithm, dating back to the 1940s. MCM was first used in physics, after which it was applied in many fields such as computational biology, astrophysics, microelectronics, robot planning, species conservation, weather forecasting, air traffic control, optimization problems, and many more. MCM is a probabilistic method that uses random numbers either to simulate stochastic behavior or to estimate the solution of a problem. From the computer science point of view, Monte Carlo tree search (MCTS) is a heuristic search algorithm used in decision processes, based on analyzing the most promising moves in the decision space and building a search tree according to the results; the search tree is expanded by random sampling of the search space. At the heart of MCM is a uniform random number generator that produces an infinite stream of random numbers in the interval (0, 1). MCM is naturally parallel because many independent samples are used to estimate the solution; these samples can be computed in parallel, thereby speeding up the search for the solution. We note that in the case of an optimization problem there is a risk of getting stuck in local minima. A detailed description of MCM can be found in the book [54].
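
As a concrete illustration of the Monte Carlo idea in the context of this survey, the following sketch (our own simplification, not the method of any cited paper) estimates a root of a toy nonlinear system by pure random sampling, keeping the sample with the smallest residual norm:

```python
import numpy as np

def F(x):
    """Residuals of a toy nonlinear system: x0^2 + x1^2 = 1, x0 = x1."""
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

rng = np.random.default_rng(0)
best_x, best_err = None, np.inf
for _ in range(200_000):
    x = rng.uniform(-2.0, 2.0, size=2)   # random sample in the search box
    err = np.linalg.norm(F(x))           # residual norm; zero at a root
    if err < best_err:
        best_x, best_err = x, err
print(best_x, best_err)  # approaches (1/sqrt(2), 1/sqrt(2)) or its negative
```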

Artificial neural networks

An artificial neural network (ANN) is a computing system inspired by the biological neural networks that constitute the human brain. Such systems learn from examples, much as the human brain does. The basic elements of an ANN are artificial neurons. An ANN is similar to a directed graph in which the nodes are artificial neurons and the weighted edges are the connections between input and output neurons. Each input has a weight, a piece of information used by the ANN to solve a problem. In a standard neural network, the artificial neurons are arranged in three layers: input, hidden, and output. In this standard model the network is fully connected, i.e. each hidden neuron is connected to every neuron in the previous (input) layer and in the next (output) layer. An ANN learns by iteratively adjusting its weights and biases (thresholds) to yield the desired output, in a process called training. Training is performed using a defined set of rules, i.e. a learning algorithm. In practice there are many ANN architectures, most often determined by the way the hidden layers are connected: Perceptron, Radial Basis Function Network, Multilayer Perceptron, Recurrent Neural Network, Hopfield Network, Boltzmann Machine Network, Convolutional Neural Network, Modular Neural Network, etc.
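
The following self-contained sketch (a toy example of ours, not from the surveyed papers) shows this standard three-layer structure and backpropagation-style weight updates on the classic XOR problem; the layer sizes and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])        # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # input -> hidden weights
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(20_000):
    h = sigmoid(X @ W1 + b1)                      # hidden activations
    out = sigmoid(h @ W2 + b2)                    # network output
    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```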

Genetic algorithms

Genetic Algorithms (GA) mimic the processes observed in natural evolution, such as natural selection and genetics, following the principle of "survival of the fittest" first laid down by Charles Darwin. In general, the scope of GA is to solve optimization problems. The father of the GA is considered to be John Holland, who introduced it in the early 1970s [39]. GAs are iterative processes that maintain, at each iteration, a population of candidate solutions called a generation. At each iteration, the fitness value, i.e. the value of the objective function of the optimization problem being solved, is computed for all individuals. To improve the fitness values, operations such as crossover, mutation, and selection are applied at each iteration. We note that elitist selection is the basis of most GAs. The process finishes after a predefined number of iterations, when a solution has been found, or when some other conditions are fulfilled. Regarding "a solution was found" as a stopping criterion, GAs have a problem: the current solution is compared only with the previous one, so we do not know whether the algorithm is trapped in a local optimum. Another limitation of GAs is their high computational cost, especially for large search spaces, big populations, and complex problems.
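
A minimal GA sketch (ours) is given below; it assumes the equation system has already been recast as minimizing a fitness function, here the sum of squared residuals of the same toy nonlinear system as above, and uses arithmetic crossover, Gaussian mutation, and elitism:

```python
import numpy as np

def fitness(x):
    """Sum of squared residuals of x0^2 + x1^2 = 1, x0 = x1 (zero at a root)."""
    return (x[0]**2 + x[1]**2 - 1.0)**2 + (x[0] - x[1])**2

rng = np.random.default_rng(2)
pop = rng.uniform(-2, 2, size=(50, 2))             # initial generation
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)
    elite = pop[order[:10]]                        # elitism: keep the best
    children = []
    while len(children) < len(pop) - len(elite):
        pa, pb = elite[rng.integers(10)], elite[rng.integers(10)]
        alpha = rng.random()
        child = alpha * pa + (1 - alpha) * pb      # arithmetic crossover
        child += rng.normal(0, 0.1, size=2)        # Gaussian mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print(best, fitness(best))  # near (0.707, 0.707) or (-0.707, -0.707)
```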

Particle swarm optimization (PSO)

This method, proposed by Kennedy and Eberhart [52], is a population-based stochastic approach designed for solving optimization problems, which simulates the "social behavior" of living communities. In PSO, software agents called particles move in the search space of the optimization problem. The position of each particle represents a candidate solution of the problem; to improve it, each particle searches for better positions in the search space by changing its velocity. The main difference between the GA and PSO methods is that PSO has no crossover and mutation operators; instead, the particles adapt their positions (i.e. improve their fitness) using their own velocity, their own memory, and an information-sharing mechanism. As advantages, PSO is easy to implement and has only a few parameters to adjust, compared to GA. The major disadvantage is the tendency toward premature convergence to intermediate, locally optimal points. Much more about the PSO method can be found in the book [17].
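
A sketch of the canonical global-best PSO with inertia weight follows (our illustration; the parameter values w, c1, c2 are conventional choices, not prescribed by the surveyed papers):

```python
import numpy as np

def fitness(x):  # same toy residual function as in the GA sketch
    return (x[0]**2 + x[1]**2 - 1.0)**2 + (x[0] - x[1])**2

rng = np.random.default_rng(3)
n, dim = 30, 2
pos = rng.uniform(-2, 2, (n, dim))             # particle positions
vel = np.zeros((n, dim))
pbest = pos.copy()                             # each particle's best position
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]            # swarm's best position

w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val                # update personal bests
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]        # update global best

print(gbest, fitness(gbest))
```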

Ant Colony Optimization (ACO)

Ant Colony Optimization (ACO) is a metaheuristic based on a probabilistic technique, proposed by Marco Dorigo in his PhD thesis in 1992 [21]. The algorithm is inspired by the way ants search for an optimal path between their colony and a source of food. Over time, three basic models have been proposed: Ant System, Ant Colony System (ACS), and MAX-MIN Ant System (MMAS). The most important advantages of ACO are its inherent parallelism and the possibility of using it in dynamic applications. The main disadvantages are that the probability distribution can change from iteration to iteration and that the time to convergence is uncertain, even though convergence is guaranteed.
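
ACO was originally formulated for discrete problems; for the continuous problems discussed later in this survey, variants in the spirit of ACO_R sample new candidate solutions from Gaussians centred on an archive of good solutions, the archive playing the role of the pheromone trail. The following rough sketch (ours, heavily simplified) illustrates that idea:

```python
import numpy as np

def fitness(x):  # same toy residual function as before
    return (x[0]**2 + x[1]**2 - 1.0)**2 + (x[0] - x[1])**2

rng = np.random.default_rng(4)
k, dim = 10, 2
archive = rng.uniform(-2, 2, (k, dim))         # "pheromone": good solutions
for _ in range(300):
    archive = archive[np.argsort([fitness(s) for s in archive])]
    weights = 1.0 / (1 + np.arange(k))         # better rank -> more weight
    weights = weights / weights.sum()
    ants = []
    for _ in range(k):
        j = rng.choice(k, p=weights)           # pick a guiding solution
        sigma = np.abs(archive - archive[j]).mean(0) + 1e-6
        ants.append(rng.normal(archive[j], sigma))  # sample around it
    pool = np.vstack([archive, ants])
    pool = pool[np.argsort([fitness(s) for s in pool])]
    archive = pool[:k]                         # keep the k best solutions
print(archive[0], fitness(archive[0]))
```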

3. METHODS FROM AI FOR SOLVING EQUATION SYSTEMS

This section gives a brief description of the methods proposed for solving equation systems using artificial intelligence techniques.

3.1 Monte Carlo Methods (MCM)

A method that uses a Monte Carlo algorithm to solve nonlinear systems is proposed in paper [45]. The proposed method is very simple, because it does not require differentiating the functions on the left side of the system. Another advantage is that an initial estimate of the solution is not necessary. The main disadvantages are the arbitrary accuracy of the solutions, the applicability only to smaller systems, slower convergence, and the large number of generations needed for large systems.

In technical report [34], an improved solving process for large (linear and nonlinear) equation systems that uses a sequential Monte Carlo technique is presented.

A Monte Carlo algorithm for solving equations with polynomial nonlinearity and systems of quadratic algebraic equations is proposed in work [23].

A parallelization of the quasi-Monte Carlo method for solving very sparse systems of linear equations is proposed in paper [73]. The matrix associated with the equation system is fully stored on each processor, a scheme that allows each processor to generate its random walks independently of the others. An advantage of the proposed method is the possibility of solving equation systems that differ only in their right-hand-side vectors.

In order to solve systems of linear equations using the Monte Carlo method, three methods for generating the trajectories of the Markov chain associated with the linear equation system, together with their average complexity, were proposed in [95].

The Monte Carlo method was used in paper [16] for solving nonlinear and linear systems of equations; queueing networks were employed for the Monte Carlo method, combined with some results from game theory. Fully explained examples of the proposed method are also presented in the paper.

An improvement of the time complexity of the Monte Carlo algorithm, obtained by reusing the random walks to estimate the other unknowns of the equation system, is proposed in paper [47].

3.2 Artificial Neural Networks (ANN)

In paper [15], some artificial neural network circuit architectures that can be implemented in VLSI are investigated for solving systems of linear equations. The proposed architectures are validated by computer simulation on a set of relevant examples.

A method for solving linear equation systems by an artificial neural network is proposed in paper [71]. In this approach, the neural network used is a Hopfield network with a Lyapunov energy function.

A neural-network-based method for solving nonlinear equation systems, for which the computing time does not depend on the size of the system, is proposed in work [83].

In paper [75], recurrent neural networks are proposed for solving nonlinear equation systems by approximating them with a multilayer perceptron (MLP). The iterative method simulated in the neural network was Newton's method, and the backpropagation learning algorithm was used in this work.

A recurrent neural network for solving systems of inequalities and systems of equations is proposed in [109]. The approach is a generalization of the method proposed in [11]. A digital realization of the proposed recurrent neural networks (continuous-time and discrete-time), using a one-dimensional systolic array of processors, is also presented. We note that the number of neurons in the proposed approach increases only linearly with the problem size, which is an important advantage.

In paper [72], a neural network for solving nonlinear equation systems is proposed. The learning algorithm used is backpropagation. The networks for 2x2 and 3x3 equation systems are exemplified and, finally, the general nxn case is described.

A neural network algorithm for solving nonlinear equation systems using a gradient descent rule with variable step size is proposed in work [59]. The convergence of the proposed algorithm is proved in this paper.

In paper [32], two neural network algorithms are proposed for solving nonlinear systems: the first for finding singular roots and the second for finding multiple roots. The convergence of the proposed algorithms was proved in this work.

A neural network approach to solving large linear equation systems is proposed in paper [99]. The authors assume that no details about the internal structure of the equation system are known, i.e. a black-box approach. The main part of the proposed method consists in identifying the system details, more precisely finding a relationship model, determining the system orders, and approximating the unknown functions by a neural network model, using multi-layer feed-forward artificial neural networks.

A feed-forward neural network capable of solving nonlinear algebraic systems with polynomial equations is proposed in paper [29]. The proposed method is validated by solving a 3x3 nonlinear equation system and comparing the obtained results with those of the Mathematica software package. The proposed neural network is capable of finding all the roots of a nonlinear equation system (eight in the reported example), even if this requires more iterations/steps.

In paper [30], a neural network architecture that uses a backpropagation algorithm and an adaptive learning rate procedure for solving nonlinear equation systems is proposed.

3.3 Genetic Algorithms (GA)

A parallel evolutionary algorithm for solving systems of nonlinear equations is proposed in paper [121]. The results are obtained by simulating parallelism on a serial computer, so useful indicators such as speedup or scalability are missing.

Methods for solving nonlinear equations using GAs are proposed in paper [74]. The square function, the absolute-value function, and a function minimizing a linear combination of the equations are used to transform the equation system into a problem suitable for genetic algorithms.

In paper [31], the proposed method transforms the nonlinear equation system into a multi-objective optimization problem, each equation being considered an objective. An evolutionary algorithm is used to solve the problem. A solution is a vector whose length equals the number of the system's unknowns. Crossover and mutation operators are used to obtain new candidate solutions, and the Pareto dominance relationship [18] is used to compare solutions with one another. The process is iterative, ending after a preset number of iterations. Real-world examples presented in the paper (an application from interval arithmetic benchmarks, an application from neurophysiology, a chemical equilibrium application, a kinematics application, a combustion application, and economics modelling), with experimental data and graphical representations, give a clear and accurate picture of the algorithm.
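
In this multi-objective view, a candidate solution dominates another when it is no worse on every equation's residual and strictly better on at least one. A minimal dominance test (our sketch; the inputs are assumed to be vectors of per-equation residuals) could be:

```python
import numpy as np

def dominates(rx, ry):
    """True if residual vector rx Pareto-dominates ry (minimization)."""
    rx, ry = np.abs(rx), np.abs(ry)
    return bool(np.all(rx <= ry) and np.any(rx < ry))

# Example: per-equation residuals of two candidate solutions.
print(dominates(np.array([0.1, 0.0]), np.array([0.3, 0.2])))  # True
print(dominates(np.array([0.1, 0.5]), np.array([0.3, 0.2])))  # False
```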

A simple genetic algorithm for solving linear equation systems is proposed in paper [19]. The algorithm uses a standard fitness function that must be minimized by the genetic algorithm.

A genetic algorithm for solving linear systems of equations is proposed in paper [42]. The approach is classical in terms of genetic algorithms: the equation system is rewritten as objective functions whose absolute values must be minimized. To compute the fitness of each chromosome, the coefficient of multiple determination R^2 [27], also called the squared multiple correlation coefficient, was used. A value of R^2 equal to 1 (or very close to it) or a preset number of iterations are the criteria for stopping the algorithm. In the reported experiments, the solutions obtained with the proposed algorithm are compared with the results of Gaussian elimination. These results show that the genetic algorithm always finds solutions and, moreover, finds multiple sets of solutions (when available) for a given system of equations.

In paper [43], the effects of the working parameters of genetic algorithms in solving linear equation systems are studied.

An adaptive genetic algorithm for solving ill-conditioned equation systems is proposed in [60]. The main changes concern the penalty function, population migration, and the reservation of elite individuals. The newly proposed fitness function is a combination of a classical fitness function and a penalty function.

In paper [56], a rugged genetic algorithm for solving simultaneous nonlinear equations is proposed. Unlike the classic approach, where a chromosome is a string, in the proposed approach a chromosome is a string-ring, i.e. the last bit is connected to the first, and the crossover operator is annular. A constraint-handling strategy described in [55] was used to improve the algorithm. In their experiments, the authors use two different binary encodings, Gray and weighted binary; the errors in the first case seem to be smaller.

To improve population diversity in a genetic algorithm used to solve equation systems, pairs of symmetric and harmonious individuals are introduced in paper [94].

3.4 Particle Swarm Optimization (PSO)

A PSO algorithm is proposed in [9] to solve systems of unconstrained equations. The proposed algorithm is able to find multiple roots of the system in a single run, using a modified fitness function.

In paper [118], a modified PSO algorithm that controls the search dynamics using a Proportional-Integral-Derivative (PID) controller is proposed for solving systems of equations.

The Bacterial Foraging Algorithm (BFA) [84] was improved using PSO in order to solve nonlinear equation systems in paper [65].

In paper [4], a PSO method for solving nonlinear equation systems is proposed. The paper demonstrates that a system of equations can be transformed into an optimization problem. It is also shown that as the swarm gets bigger, the number of iterations required by the PSO algorithm goes down. The experimental data also show the tendency of the PSO method to find roots that are closer to the starting point of the iterations.

A PSO method is proposed in [44] for solving systems of nonlinear equations. In the proposed algorithm, new relations are introduced for updating the particles. The reasons for changing the basic PSO algorithm were to diminish the tendency to get stuck in local optima and to increase the convergence speed. The major modification consists in introducing a set of four random variables and a greater dependence of the working parameters on the values of the best individuals/particles. Moreover, the worst particles are used in the generation of the new population. Five standard benchmark functions are used to test the performance of the modified PSO algorithm. The results show good convergence, within about 100 iterations. The results for solving nonlinear systems are compared with those reported in other works.

A PSO algorithm for solving nonlinear equation systems, transformed into an optimization problem, is proposed in paper [108]. Chaotic maps are used to improve the effectiveness of the proposed algorithm, and the logistic map [103] seems to be the optimal choice. Nine well-known examples of nonlinear systems are used in the experiments, and the results are compared with those obtained by many other methods from the literature.
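
For reference, the logistic map used to generate such chaotic sequences is

$$x_{n+1} = r\,x_n(1 - x_n), \qquad x_n \in (0, 1),$$

which behaves chaotically for r = 4, the value most often used in chaos-enhanced metaheuristics.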

A real problem, the analysis of flow in water distribution networks using the Content Model, requires solving (linear and nonlinear) equation systems. In paper [79], a PSO method is used instead of Newton's method. The major advantage of the proposed PSO method is that it is able to find the global optimum in an uncertain problem, which Newton's methods cannot do in the case of non-convex problems. The PSO method is modified for the constrained optimization problem. The experiments compare the modified PSO method with the General Gradient Algorithm (GGA), an iterative method for solving equation systems, and show that PSO is recommended for non-convex problems, while GGA is better for convex problems.

A hybridization of the PSO algorithm with the Artificial Bee Colony (ABC) algorithm [49] is proposed in [48] for solving nonlinear equation systems.

In paper [122], a hybridization of PSO with the Artificial Fish Swarm (AFS) algorithm [58] is proposed for solving ill-conditioned linear systems of equations. The proposed algorithm combines the good local convergence of PSO with the global convergence performance of AFS.

3.5 Ant Colony Optimization (ACO)

Based on the basic ACO algorithm, an improved variant of ACO for solving nonlinear equation systems is proposed in paper [77]. Experimental results show a decrease in the number of iterations required to obtain the solution.

A robust method based on ACO for solving ill-conditioned linear equation systems is proposed in work [33].

For solving nonlinear equation systems, an improved ACO algorithm is proposed in paper [110]. The improvement consists in using the quantum rotation gate [35] to represent and update the ants' pheromone.

3.6 Other Metaheuristics Used

An evolutionary algorithm for solving equation systems inspired by political life (very close to what we call today globalization), named the Imperialist Competitive Algorithm (ICA) [5], is proposed in [3]. At the beginning, the individuals in the evolutionary process are countries; at the end, a single empire remains, representing the solution of the equation system. In the experimental part, the obtained results are compared with other known methods.

Using the Imperialist Competitive Algorithm (ICA) with a multi-population technique, a parallel algorithm for solving systems of nonlinear equations is proposed in work [66]. In this multi-population approach, each processor has its own population and working parameters. A migration technique is also used, i.e. each processor can select its best or worst individuals and send them to the other processors. The authors report a superlinear speedup of the proposed parallel algorithm.

A method for solving nonlinear equation systems that combines three different mutation strategies in a differential evolution algorithm is proposed in [91].

A hybridization of the Harmony Search (HS) metaheuristic [25] with the Differential Evolution (DE) method [102] is used for solving systems of nonlinear equations in work [92]. The HS method is used for global search and is helped by DE, which updates the HS parameters to improve the efficiency and robustness of the former. We must mention that HS is critically analyzed in paper [120], where it is proved that this metaheuristic is only a special case of evolution strategies.

In [93], a method based on Harmony Search (HS) optimization is proposed for finding multiple roots of nonlinear equation systems. A penalty function is implemented in the proposed algorithm to avoid convergence to already located solutions.

A Memetic Algorithm (MA) [82] for solving linear equation systems is proposed in paper [67]. In the proposed approach, the problem of solving a linear system of equations is transformed into a multi-objective optimization problem in which each equation defines an objective function. The proposed algorithm is able to find solutions of the given linear system even in cases where traditional methods fail. The experimental results were validated with the Mathematica software.

In paper [69], a metaheuristic inspired by human brainstorming (BSO) [12], combined with concepts from graph theory, is proposed for solving nonlinear systems of equations. The creativity needed in the brainstorming optimization process is modelled as a search space using k-partite graphs. In cases where no exact solution exists, an approximate solution, i.e. a solution whose fitness is very close to 0, is acceptable and can be obtained by the proposed method. Systems with multiple solutions can also be solved; the proposed method was able to find all the solutions inside a given interval preset by the user.

An improved variant of the Cuckoo Search (CS) algorithm [112], named the Double Mutation Cuckoo Search Algorithm (DMCS), for solving systems of nonlinear equations was proposed in paper [14]. The basic idea is to perform mutations according to certain probability values, to avoid the sensitivity of the basic CS algorithm to initial values, to improve local search, and to enlarge the search space, the final effect being improved convergence of the DMCS algorithm. The proposed algorithm is also insensitive to the continuity or differentiability of the equations.

A modified Cuckoo Optimization Algorithm (COA) for solving systems of nonlinear equations is proposed in [2]. This work focuses on the main disadvantage of the standard COA method [90] in solving nonlinear equation systems: its reduced ability to find an accurate solution and its tendency to fall into a local optimum, even though the method converges very rapidly to the vicinity of the global optimum. The experimental results show a smaller number of fitness function evaluations compared to other evolutionary methods such as PSO, GA, and COA.

In paper [107], solving nonlinear equation systems is transformed into an optimization problem, and the Multistart and Minfinder global optimization methods are used to solve the system. Multistart and Minfinder are effective due to their ability to locate both the global optimum and all the local minima of the objective function. Thus, the experimental results reported by the authors indicate that all possible roots of the equation system can be found. Moreover, the proposed method does not depend on the problem formulation.

A self-adaptive Levy mutation operator [57] is used for solving nonlinear equation systems in the method proposed in paper [61]. In this approach, each equation represents an objective function, so the entire equation system is seen as a multi-objective optimization problem.

The Greedy Randomized Adaptive Search Procedure (GRASP) [24] is used to solve nonlinear equation systems in work [38].

The nonlinear equation system is transformed into a bi-objective optimization problem in work [28]. The experimental results obtained with the proposed method are compared with other multi-objective and single-objective methods and show good scalability and robustness of the proposed method.

To find multiple roots of nonlinear systems, the method proposed in [37] uses a merit function that creates repulsive regions in the neighborhoods of the previously found roots. Solving the nonlinear system is seen as a global optimization problem, for which the simulated annealing (SA) algorithm is chosen. The experimental examples include real-world applications, more precisely difficult problems from chemistry.

A hybridization of Glowworm Swarm Optimization (GSO) [53] with Hooke-Jeeves pattern search [80] for solving systems of nonlinear equations is proposed in [114]. In the proposed algorithm, the GSO method is used to obtain the initial value for the Hooke-Jeeves method.

Combining Glowworm Swarm Optimization (GSO) with the Simplex Search method [81] led, in work [88], to a new algorithm for solving nonlinear equation systems. The proposed algorithm can find multiple solutions if they exist and can be parallelized. Moreover, there are no restrictions on the equations, i.e. they need not be continuous or differentiable.

An improved Glowworm Swarm Optimization (GSO) algorithm is proposed in [123] for solving nonlinear equation systems. The improvement consists in adding the leader concept, i.e. the best individual of a generation. Unlike other evolutionary techniques, in the proposed algorithm all individuals move toward the leader's area after each generation, improving the global search.

In paper [124], a hybrid method based on Invasive Weed Optimization (IWO) [76] and Differential Evolution (DE) [102] for solving nonlinear equation systems is proposed. Experimental results obtained for small equation systems, compared with results obtained with EA and PSO methods, validate the proposed method and show smaller solution errors.

The method proposed in [86] uses Invasive Weed Optimization (IWO) for solving nonlinear systems. The method can find the complex roots of the equation system.

For solving systems of nonlinear equations of higher dimension, a method that combines the Firefly Algorithm (FA) [113] with the Pattern Search (PS) technique [40] is proposed in work [119]. The first method is used in the global exploration phase, while the second is used in the exploitation phase.

Using an improved Spiral Dynamics Inspired Optimization (SDIO) metaheuristic [104], a method that can find all the roots of a nonlinear equation system is proposed in paper [101]. We note that the method has the ability to find all the roots within a given bounded domain, as can be seen from the experiments reported by the authors.

An algorithm for solving nonlinear equation systems using a new Probabilistic-Driven Search (PDS) algorithm is proposed in [41]. In the proposed method, the nonlinear equation system is transformed into a single-objective optimization problem. The main advantage of the proposed probabilistic method is its ability to escape locally optimal solutions. The experimental results are compared with those reported in [31].

In paper [20], a method based on quantum computing [6] for solving nonlinear systems of equations is proposed. A modified Grover's quantum search algorithm is proposed, resulting in a more efficient computing process. Each variable of the equation system is represented by a register in the quantum computer.

3.7 Hybrid Methods

In recent years, many hybrid methods have been proposed for solving equation systems. These methods combine metaheuristics with high-performance iterative solving methods, resulting in methods with good spatial and temporal complexity and accurate solutions.

In paper [50], a hybridization of a genetic algorithm (GA) with an iterative method for solving a system of nonlinear equations is proposed. The GA is used to locate initial guesses for Newton's method.

One of the methods for solving equation systems is Successive Over-Relaxation (SOR), a faster-converging variant of the Gauss-Seidel method [97]. The relaxation factor, which greatly influences the convergence rate of the SOR method, is very difficult to estimate optimally. The goal of the evolutionary component of the hybrid method proposed in paper [36] is to find the best value of the relaxation factor. The convergence of the proposed hybrid algorithm is also proved. In the evolutionary process, the proposed algorithm uses a deterministic recombination and a random mutation. In the first experiments, the hybrid algorithm was tested on linear systems of equations. The results show a great advantage of the proposed algorithm, namely errors several orders of magnitude smaller.
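
For reference, the relaxation factor $\omega$ enters the standard SOR update as

$$x_i^{(k+1)} = (1 - \omega)\,x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j<i} a_{ij} x_j^{(k+1)} - \sum_{j>i} a_{ij} x_j^{(k)}\Big),$$

where $\omega = 1$ recovers Gauss-Seidel and convergence requires $0 < \omega < 2$.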

In paper [46], a hybridization of the Jacobi method with a time-variant adaptive evolutionary algorithm is proposed for solving linear equation systems. The convergence of the hybrid algorithm is proved and validated both theoretically and experimentally in this paper. The role of the evolutionary part is to self-adapt the relaxation factor used in the Jacobi method. In the evolutionary part of the algorithm, recombination involves all individuals within a population, and a mutation is achieved by performing one iteration of the Jacobi method. The proposed hybrid algorithm is also very simple and easy to implement, both sequentially and in parallel.

Paper [96] proposes a preconditioning technique based on a genetic algorithm (GA) for solving a nonlinear equation system by a fixed-point method. More exactly, the genetic algorithm establishes a correct order of the equations of the nonlinear system to be solved by the fixed-point method. The objective function of the GA evaluates the number of equations that each individual of the population is able to solve. A detailed demonstration of the applicability of the proposed method is made using a real-world problem.

A hybrid approach combining the chaos optimization algorithm (COA) with the quasi-Newton method for solving systems of nonlinear equations is proposed in paper [64]. The COA is used to find a feasible initial guess for the Newton method, while the latter is used for its high convergence speed in obtaining an accurate solution.

A hybridization of the Conjugate Direction (CD) method (also known as Powell's method) [87] with Particle Swarm Optimization (PSO) is proposed in [78] for solving systems of nonlinear equations. The CD method helps PSO jump over local minima, while PSO, a stochastic algorithm, provides CD with a good initial solution; i.e. each component helps the other, which leads to greater overall efficiency of the proposed hybrid algorithm (CDPSO). The reported experimental results show better convergence of CDPSO compared to the CD, PSO, and Newton methods.

In paper [22], the authors propose a hybrid method for solving systems of nonlinear equations that uses a GA. While in classical methods solving a system of nonlinear equations often involves transforming it into a linear system, in the proposed method this transformation is replaced by a genetic algorithm. In the first part of the proposed algorithm, the Gauss-Legendre integration scheme [89], a tool for approximating definite integrals, is used. Using the n-point formula, a system of nonlinear equations of dimension 2n is obtained. Because it is inappropriate to use a Newton method to solve this equation system, a genetic approach is used: a fitness function, namely the maximum absolute value of each equation in the system, must be minimized by a genetic algorithm. In our opinion, the major inconvenience of the proposed algorithm consists in transposing a given system of nonlinear equations into a Gauss-Legendre integration problem.

In paper [106], a hybridization of the Electromagnetic Meta-Heuristic method (EM) [8] with the Newton-GMRES method is proposed for solving nonlinear equation systems. After the equation system is transformed into an unconstrained minimization problem, the EM method is used to find a good initial guess for the Newton-GMRES method. Large and sparse systems of equations are used in the experiments, and the results obtained with the proposed method are compared with other known iterative methods.

A hybrid method based on Genetic Algorithms (GAs) and Artificial Neural Networks (ANNs) for solving ill-conditioned systems of linear equations is proposed in [98]. The GA component is responsible for finding an initial solution of the equation system, while the ANN has the role of refining this primary solution.

Paper [68] proposes a simplified form of a memetic algorithm for improving the convergence of iterative methods for solving systems of equations: the memetic algorithm is used to determine an initial vector favorable to rapid convergence of the iterative methods. The experimental results obtained with iterative methods such as conjugate gradient, Newton, Chebyshev, and Broyden recommend the proposed hybridization, which is a better alternative compared with randomly chosen initial values.

A hybridization of a genetic algorithm with an augmented Lagrangian function for solving equation systems is proposed in [85]. In fact, the original constrained problem is replaced by a sequence of unconstrained sub-problems.

In paper [1], a hybrid method is proposed for solving ill-conditioned equation systems, i.e. systems whose associated matrix has a very large condition number. The proposed algorithm is a hybridization of an iterative method with an evolutionary method, more precisely conjugate gradient (CG) [97] and the flower pollination algorithm (FPA) [115]: in fact, a combination of fast convergence and fast global search. The main steps of the proposed algorithm are: convert the equation system into a fitness function, use the FPA part to obtain a good initial guess, and finally use the CG part to solve the system. The reported results show better accuracy of the solution obtained with the proposed algorithm compared with other methods.

Figure 1 shows researchers' interest in solving equation systems using artificial intelligence techniques, reflected in the number of published scientific papers. An increase in this interest can be seen especially in the last decade, in which one can also observe a dominance of metaheuristic methods and a growing interest in hybrid methods.

4. CONVERGENCE AND COMPLEXITY

It is known that mathematicians are skeptical about solving numerical problems by techniques other than purely mathematical ones, i.e. methods that use techniques from artificial intelligence. They claim in particular that there are no rigorous mathematical demonstrations of the convergence of non-traditional methods, and unfortunately they are right in many cases. Nevertheless, there are a few published papers in which the convergence of the proposed method is mathematically demonstrated for the particular case considered.

In paper [4], it is proved that solving a nonlinear equation system can be transformed into an optimization problem. The authors point out that the demonstration is valid only if the system of equations is consistent, that is, it has at least one solution. The demonstration is based on describing the system of equations in the form of coordinate functions and minimizing them in the case where a solution exists. As seen in the previous section, most approaches transform the solving of a system of equations into an optimization problem, so the mentioned paper is worth taking into consideration.
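
In its commonly used form, the transformation replaces the system $f_1(x) = \dots = f_n(x) = 0$ by the minimization problem

$$\min_{x} \; \Phi(x) = \sum_{i=1}^{n} f_i(x)^2, \qquad \Phi(x) \ge 0,$$

so that $x^*$ solves the system if and only if $\Phi(x^*) = 0$, i.e. if and only if $x^*$ is a global minimizer with zero objective value; this equivalence holds precisely when the system is consistent.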

In [7], the authors prove the convergence of the Monte Carlo method for solving sparse linear systems in some particular cases of the matrix associated with the system. Classes of associated matrices with guaranteed convergence are identified: strictly diagonally dominant, generalized diagonally dominant, and block diagonally dominant. With regard to the Monte Carlo method, the same paper shows the difficulty of accurately establishing the order of complexity, due to the stochastic nature of the method. At first sight we can speak of a complexity class O(n), but the paper notes a strong dependence of the complexity on the size of the system of equations, as well as on the number of histories and the length of the random walk.

In [34], the author shows the conditions under which the Monte Carlo method used in solving nonlinear equation systems has quadratic convergence, as Newton's method does.

In paper [109], rigorous proofs of the global convergence of the proposed neural networks for solving systems are given.

The convergence of the hybrid algorithm proposed in paper [36] is proved there, both for the recombination and for the mutation of the evolutionary component of the hybrid algorithm.

In [59], a theorem establishing the convergence of the proposed neural-network algorithm for solving equation systems is stated and proved.

In [32], the convergence of the proposed neural-network algorithms is proved by a given theorem; for this purpose, an error function and a Lyapunov function are defined.

The following papers address the convergence of evolutionary algorithms in general; even though they do not demonstrate convergence for solving systems of equations in particular, they should be taken into consideration, because solving equation systems is most often cast as an optimization problem.

The Brain Storm Optimization (BSO) algorithm is analyzed from the Markov model point of view in paper [125]. The proposed Markov model gives the theoretical probability of the occurrence of each possible population as the number of generations goes to infinity. Moreover, the convergence of the BSO method is analyzed using the proposed model.

In paper [62], the extension of genetic algorithms with a probabilistic Boltzmann reduction operator is considered, and for this case their convergence to the optimum is proved using Markov chains. Another theoretical treatment of genetic algorithm convergence is developed in work [100]. In paper [26], the convergence properties of genetic algorithms are discussed, and it is shown that if a genetic algorithm is run for a sufficiently long time, its convergence to the global optimum is guaranteed; moreover, the theoretical results can be used to obtain efficient implementations of genetic algorithms. In paper [63], the Hamming distance is used to predict the convergence time and average fitness of genetic algorithms; the worst-case time complexity is also provided.

[Figure 1: Papers about solving equation systems using AI]

A review of convergence analyses of PSO algorithms is given in paper [105]. Several modes of analyzing the convergence of PSO are reviewed: with constriction coefficients, with limits, with differential equations, with matrices, with difference equations, etc. A relationship between particle velocity and the convergence of PSO is given in [116]. In [117], the convergence and convergence rates of the PSO method are analyzed using stochastic approximation methods; this was possible after the PSO procedure was rewritten as a stochastic-approximation-type iterative algorithm.

The convergence of the ACO algorithm is analyzed in paper [10]. Regarding the ACO algorithm's capability of solving a given problem, the term "failure probability" is used; this probability must be very close to zero for good convergence of the algorithm. Both types of convergence are analyzed in that paper, convergence in value and convergence in model, the latter being stronger. The time needed for ACO convergence is also discussed and estimated in the paper.

The global convergence analysis of the Cuckoo Search method using Markov theory is done in paper [111]. It is proved, using a stochastic approach, that the algorithm converges to the optimal state set. It is also shown that if two convergence conditions are satisfied, the global convergence of the algorithm is guaranteed.

Preconditioning a system of equations means transforming its associated matrix to improve the efficiency/convergence of solving the system. Preconditioning is applied especially to iterative methods to improve their convergence. The most used preconditioning technique seems to be bandwidth reduction, for which the latest proposed methods are also based on artificial intelligence techniques [13], [70]. In paper [7], it is mentioned that preconditioning guarantees the convergence of the Monte Carlo method if the associated matrix of the system is transformed into a strictly diagonally dominant, generalized diagonally dominant, or block diagonally dominant matrix.

5. CONCLUSION AND FUTURE WORK

In this work, more than 70 papers that propose solving equation systems with techniques from artificial intelligence were reviewed. Even though there are many high-performance iterative methods for this problem, interest in methods inspired by artificial intelligence has been growing, especially in the last decade, as could be seen at the end of Section 3.

As can be seen in Figure 1, metaheuristic methods dominate; at the same time, hybridization between metaheuristics and iterative methods can give the best results in terms of time complexity, accuracy, and convergence, as shown in the results reported by many authors.

Unfortunately, there are no rigorous mathematical formalisms regarding the convergence and complexity of all non-traditional methods, as there are for iterative methods. Even though in some papers the results obtained with these non-traditional methods are compared with the results obtained with iterative methods and seem to favor the former, we consider that the phenomenon cannot be generalized as long as there is no theoretical approach that proves it, at least for some classes of equation systems. We do not want to deny the power of artificial intelligence in solving equation systems; we merely say that a serious and deep theoretical treatment is needed.

Thus, the study of the convergence and complexity of some non-traditional methods for solving equation systems will be one of our future concerns. Of course, a complete study of hybrid methods must be done, especially of the global running time, i.e. whether the sum of the time needed by the iterative method and the time needed by the metaheuristic in the hybridization is less than or greater than either of the two alone. A study of the influence of preconditioning on the convergence speed of metaheuristic and hybrid methods will be another concern, because we consider that the parallelization of these methods could be influenced by preconditioning.

REFERENCES

[1] Mohamed Abdel-Baset, Ibrahim M. Hezam, A hybrid flower pollination algorithm for solving ill-conditioned set of equations, Int. J. Bio-Inspired Computation, 8/4, 2016

[2] Mahdi Abdollahi, Asgarali Bouyer, Davoud Abdollahi, Improved cuckoo optimization algorithm for solving systems of nonlinear equations, J Supercomput 72:1246-1269, Springer 2016

[3] Abdollahi M, Isazadeh A, Abdollahi D, Imperialist competitive algorithm for solving systems of nonlinear equations, Elsevier Comput Math Appl 65:1894-1908, Elsevier 2013

[4] I. Amaya, J. Cruz, and R. Correa, Real Roots of Nonlinear Systems of Equations Through a Metaheuristic Algorithm, Revista Dyna, 78/170, 15-23, 2011.

[5] E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, in: IEEE Congress on Evolutionary Computation, 2007, 4661-4667, 2007

[6] Benioff Paul, The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines, Journal of statistical physics. 22 (5): 563-591, Springer 1980

[7] Michele Benzi, T. M. Evansm, S. P. Hamilton, M. L. Pasini, S.R. Slattery, Analysis of Monte Carlo accelerated iterative methods for sparse linear systems, Numer Linear Algebra Appl. 2017

[8] Birbil, S.I., Fang, S.C.: An electromagnetism-like mechanism for global optimization, J. Glob. Optim. 25, 263-282, 2003

[9] R. Brits, A.P. Engelbrecht, F. van den Bergh, Solving systems of unconstrained equations using particle swarm optimization, IEEE International Conference on Systems, Man and Cybernetics 2002, ISBN: 0-7803-7437-1, IEEE CPS 2013

[10] Lorenzo Carvelli, Giovanni Sebastiani, Some Issues of ACO Algorithm Convergence, Ant Colony Optimization - Methods and Applications, https://www.intechopen.com/, 2011

[11] Y. Censor and T. Elfving, New method for linear inequalities, Linear Alg. Appl., vol. 42, 199-211, 1982.

[12] Shi Cheng, Yifei Sun, Junfeng Chen, Quande Qin, Xianghua Chu, Xiujuan Lei, and Yuhui Shi, A Comprehensive Survey of Brain Storm Optimization Algorithms, Evolutionary Computation (CEC), 2017 IEEE Congress, 5-8 June 2017 San Sebastian Spain, DOI:10.1109/CEC.2017.7969498, IEEE 2017

[13] P. Z Chinn, J. Chvatalova, A.K. Dewdney, N.E. Gibbs, The bandwidth problem for graphs and matrices--a survey, Journal of Graph Theory, October 2006, DOI: 10.1002/jgt.3190060302, 2006

[14] Chiwen Qu, Wei He, A Double Mutation Cuckoo Search Algorithm for Solving Systems of Nonlinear Equations, International Journal of Hybrid Information Technology 8/12, 433-448, 2015

[15] A. Cichocki, R. Unbehauen, Neural networks for solving systems of linear equations and related problems, IEEE Trans. on Circuits and Systems-I, vol. 39, no. 2, Feb. 1992, 124-138, 1992

[16] Daniel Ciuiu, Solving nonlinear systems of equations and nonlinear systems of differential equations by the Monte Carlo method using queueing networks and games theory, Analele Universitatii Bucuresti, Seria Informatica 1, 111-125., 2010

[17] Maurice Clerc, Particle Swarm Optimization, Wiley, ISBN: 978-1-118-61397-9, 2013

[18] Coello Coello, Carlos, Lamont, Gary B., van Veldhuizen, David A., Evolutionary Algorithms for Solving Multi-Objective Problems, ISBN 978-0-387-36797-2, Springer 2007

[19] Al Dahoud Ali, Ibrahiem M. M. El Emary, Mona M. Abd El-Kareem, Application of Genetic Algorithm in Solving Linear Equation Systems, MASAUM Journal of Basic and Applied Science, 1/2, 2009

[20] L.A. Daoud, Quantum Computing for Solving a System of Nonlinear Equations over GF(q), The International Arab Journal of Information Technology, 4/3, 201-205, 2007

[21] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26/1, February 1996, 29-41, IEEE 1996

[22] El-Emary I.M.M., Abd El-Kareem M.M., Towards Using Genetic Algorithm for Solving Nonlinear Equation Systems, World Applied Sciences Journal 5/3, 282-289, 2008

[23] Ermakov S., Kaloshin I., Solving the Nonlinear Algebraic Equations with Monte Carlo Method, In: Balakrishnan N., Melas V.B., Ermakov S. (eds) Advances in Stochastic Simulation Methods. Statistics for Industry and Technology. Birkhauser, Boston, MA, 2000

[24] T.A. Feo and M.G.C. Resende, Greedy randomized adaptive search procedures, Journal of Global Optimization, 6:109-133, 1995

[25] Z.W. Geem, J.H. Kim and G. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation, 76/2, 60-68, 2001

[26] David Greenhalgh, Stephen Marshall, Convergence Criteria for Genetic Algorithms, SIAM Journal on Computing, 30/1, 269-282, 2000

[27] Glantz, Stanton A., Slinker, B. K., Primer of Applied Regression and Analysis of Variance, McGraw-Hill, ISBN 0-07-023407-8, 1990

[28] Wenyin Gong, Yong Wang, Zhihua Cai, and Shengxiang Yang, A Weighted Biobjective Transformation Technique for Locating Multiple Optimal Solutions of Nonlinear Equation, IEEE Transactions on Evolutionary Computation - October 2017

[29] K. Goulianas, A. Margaris, M. Adamopoulos, Finding all real roots of 3x3 nonlinear algebraic systems using neural networks, Applied Mathematics and Computation 219, 4444-4464, Elsevier 2013

[30] K. Goulianas, A. Margaris, I. Refanidis, K. Diamantaras, Solving polynomial systems using a fast adaptive back propagation-type neural network algorithm, European Journal of Applied Mathematics, https://doi.org/10.1017/S0956792517000146, 2017

[31] Grosan C., Abraham A., A New Approach for Solving Nonlinear Equations Systems, IEEE Transaction on Systems, Man and Cybernetics, part A: Systems and Humans, 38/3, 2008

[32] Xiangde Guo, Zhezhao Zeng, The Neural-Network Approaches to Solve Nonlinear Equation, Journal Of Computers, 5/3, March 2010

[33] Duan Haibin, Wang Daobo, Zhu Jiaqiang, Novel method based on ant colony optimization for solving ill-conditioned linear systems of equations, Journal of Systems Engineering and Electronics, 16/3, 2005

[34] John H. Halton, On Accelerating Monte Carlo Techniques for Solving Large Systems of Equations, TR96-041, Dept. of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3175, 1996

[35] K.H. Han and J.H. Kim, Quantum-inspired evolutionary algorithm for a class of combinatorial optimization, IEEE Trans. Evol. Comput., 6/6, 580-593, Dec. 2002

[36] Jun He, Jiyou Xu and Xin Yao, Solving Equations by Hybrid Evolutionary Computation Techniques, IEEE Transactions on Evolutionary Computation, 4/3, 295 - 304, ISSN: 1089-778X, 2000

[37] Henderson N, Sacco WF, Platt GM, Finding more than one root of nonlinear equations via a polarization technique: an application to double retrograde vaporization, Chem Eng Res Des 88:551-561, Elsevier 2010

[38] Michael J. Hirsch, Panos M. Pardalos, Mauricio G.C. Resende, Solving systems of nonlinear equations with continuous GRASP, Nonlinear Analysis: Real World Applications, 10/4, Elsevier 2009

[39] John H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence, MIT Press, Cambridge, MA, ISBN 0262082136, 1992

[40] Hooke, R.; Jeeves, T.A., "Direct search" solution of numerical and statistical problems, Journal of the Association for Computing Machinery (ACM). 8 (2): 212-229. doi:10.1145/321062.321069, 1961

[41] T. Nguyen Huu and H. Tran Van, A new probabilistic algorithm for solving nonlinear equations systems, Journal of Science, vol. 30, 1-14, 2011

[42] Ikotun Abiodun M., Lawal Olawale N., Adelokun Adebowale P., The Effectiveness of Genetic Algorithm in Solving Simultaneous Equations, International Journal of Computer Applications (0975 - 8887) 14/8, 2011

[43] A M Ikotun, A T Akinwale, O T Arogundade, Parameter Variation For Linear Equation Solver Using Genetic Algorithm, Journal Of Natural Sciences Engineering And Technology, 15/2, 2016

[44] Majid Jaberipour, Esmaile Khorrama, Behrooz Karimi, Particle swarm algorithm for solving systems of nonlinear equations, Computers and Mathematics with Applications 62, 566-576, Elsevier, 2011

[45] Aleksander Jablonski, A Monte Carlo algorithm for solving systems of non-linear equations, Journal of Computational and Applied Mathematics, 6/3, 171-175, Elsevier 1980

[46] A.R. M. Jalal Uddin Jamali, M. M. A. Hashem, Md. Bazlar Rahman, Solving Linear Equations Using a Jacobi Based Time-Variant Adaptive Hybrid Evolutionary Algorithm, Procs. of the 7th International Conference on Computer & Information Technology (ICCIT 2004), 688-693, Dhaka, Bangladesh, December 26-28, 2004

[47] Hao Ji, Yaohang Li, Reusing Random Walks in Monte Carlo Methods for Linear Systems, Procedia Computer Science 9, 383 - 392, Elsevier 2012

[48] Ruimin Jia, Dengxu He, Hybrid Artificial Bee Colony Algorithm for Solving Nonlinear System of Equations, Eighth International Conference on Computational Intelligence and Security, IEEE 2012

[49] D. Karaboga, An idea based on bee swarm for numerical optimization, Technical Report-TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.

[50] Karr CL, Weck B, Freeman LM, Solutions to systems of nonlinear equations via a genetic algorithm, Engineering Applications of Artificial Intelligence, 11:369-375, Elsevier 1998

[51] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, ISBN-13: 978-0-898713-52-7, Philadelphia, 1995

[52] J. Kennedy and R. Eberhart, Particle swarm optimization, in Proceedings of IEEE International Conference on Neural Networks, pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.

[53] Krishnanand K.N., Ghose D., Glowworm swarm optimisation: a new method for optimising multi-modal functions, Int. J. Computational Intelligence Studies, 1/1, 93-119, 2009

[54] D.P. Kroese, T. Taimre, Z.I. Botev, Handbook of Monte Carlo Methods, Wiley Series in Probability and Statistics, John Wiley and Sons, New York, 2011

[55] Kuri, A., Gutierrez, Jesus, Penalty Function Methods for Constrained Optimization with Genetic Algorithms: A Statistical Analysis, Lecture Notes in Artificial Intelligence, LNAI 2313, Springer, 108-117, 2002

[56] Angel Fernando Kuri-Morales, Solution of Simultaneous Non-Linear Equations using Genetic Algorithms, WSEAS Transactions on Systems 2/1, 2003

[57] Lee, C. Y., Yao, X.: Evolutionary Algorithms with Adaptive Levy Mutations, Proceedings of the 2001 Congress on Evolutionary Computation, pp.568-575, IEEE CPS, 2001

[58] Li, X., Shao, Z., Qian, J., An optimizing method based on autonomous animats: Fish-swarm algorithm, Systems Engineering Theory and Practice, 22/11, 32-38, 2002

[59] Li G, Zeng Z, A neural-network algorithm for solving nonlinear equation systems, In: IEEE International Conference on Computational Intelligence and Security, vol 1, 20-23, 2008

[60] Li P., Tong X., An Adaptive Genetic Algorithm for Solving Ill-Conditioned Linear Equation Group. In: Wu Y. (eds) Software Engineering and Knowledge Engineering: Theory and Practice. Advances in Intelligent and Soft Computing, vol 114. Springer 2012

[61] Chun-an Liu, A Special Multiobjective Evolutionary Algorithm for Solving Complicated Nonlinear Equations Systems, Applied Mechanics & Materials, Vol. 701/702, 18-23, 2014

[62] J.A. Lozano, P. Larranaga, M. Grana, F.X. Albizuri, Genetic algorithms: bridging the convergence gap, Theoretical Computer Science 229, 11-22, Elsevier 1999

[63] Sushil J. Louis and Gregory J. E. Rawlins, Technical Report TR370: Predicting Convergence Time for Genetic Algorithms, Indiana University 1993

[64] Luo YZ, Tang GJ, Zhou LN, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Applied Soft Computing 8, 1068-1073, Elsevier 2008

[65] X. F. Mai and L. Li, Bacterial Foraging Algorithm Based on PSO with Adaptive Inertia Weight for Solving Nonlinear Equations Systems, Advanced Materials Research, Vols. 655-657, 940-947, 2013

[66] Amin Majd et al., Multi-Population Parallel Imperialist Competitive Algorithm for Solving Systems of Nonlinear Equations, 2016 International Conference on High Performance Computing & Simulation (HPCS), 18-22 July 2016, IEEE CPS 2016

[67] L.O. Mafteiu-Scai, E.J. Mafteiu-Scai, Solving linear systems of equations using a memetic algorithm, IJCA (0975 - 8887) 58/13, ISBN: 973-93-80870-43-5, FCS New York, November 2012

[68] L. O. Mafteiu-Scai, Improved the convergence of iterative methods for solving systems of equations by memetics techniques, IJCA (0975 - 8887), 64/17, ISBN: 973-93-80873-17-5, FCS New York, February 2013

[69] L.O. Mafteiu-Scai, A New Approach for Solving Equations Systems Inspired from Brainstorming, IJNCAA 5/1, 10-18, ISSN: 2220-9085, 2015

[70] L.O. Mafteiu-Scai, The Bandwidths of a Matrix. A Survey of Algorithms, West Univ. of Timisoara Annals, ISSN:1841-3293, DeGruyter Open, 2015

[71] K.G. Margaritis, M. Adamopoulos, K. Goulianas, Solving linear systems by artificial neural network energy minimisation, University of Macedonia Annals (volume XII), 502-525, (in Greek), 1993

[72] A. Margaris, M. Adamopoulos, Solving nonlinear algebraic systems using artificial neural networks, In Proc. of the 10th Int. Conf. on Engineering App. of Artificial Neural Networks, 107-120, 2007

[73] M. Mascagni, A. Karaivanova, A Parallel Quasi-Monte Carlo Method for Solving Systems of Linear Equations, Proc. of International Conference on Computational Science (ICCS02), Springer, 2002

[74] Nikos E. Mastorakis, Solving Non-linear Equations via Genetic Algorithms, Proceedings of the 6th WSEAS Int. Conf. on Evolutionary Computing, Lisbon, Portugal, June 16-18, 2005, pp. 24-28

[75] Karl Mathia, Richard Saeks, Solving Nonlinear Equations Using Recurrent Neural Networks, World Congress on Neural Networks (WCNN'95), July 17-21, 1995

[76] A.R. Mehrabian, C. Lucas, A novel numerical optimization algorithm inspired from weed colonization, Ecological Informatics, 1/4, 355-366, Elsevier, December 2006

[77] Zhiki Min, Yueguang Li, A Novel Ant Colony Algorithm of Solving Nonlinear Equation Group, IERI Procedia 2, 775 - 780, Elsevier 2012

[78] Yuanbin Mo, Hetong Liu, Qin Wang, Conjugate direction particle swarm optimization solving systems of nonlinear equations, Computers and Mathematics with Applications 57, 1877-1882, Elsevier 2009

[79] Moosavian N., Jaefarzadeh, M.R., Particle Swarm Optimization for Hydraulic Analysis of Water Distribution Systems, Civil Engineering Infrastructures Journal, 48(1): 9-22, ISSN: 2322-2093, 2015

[80] I. Moser, Hooke-Jeeves Revisited, IEEE Congress on Evolutionary Computation (CEC 2009), IEEE 2010

[81] Nelder, J.A. and Mead, R., A simplex method for function minimization, Comput. J., 7, 308-313, 1965

[82] Ferrante Neri, Carlos Cotta, and Pablo Moscato (Eds.), Handbook of Memetic Algorithms, Studies in Computational Intelligence, ISBN 978-3-642-23246-6, Springer 2012

[83] Nguyen TT, Neural network architecture for solving nonlinear equation systems, Electronics Letters, 29/16, 1403-1405, IEEE 1993

[84] K. M. Passino, Bacterial Foraging Optimization, Int. Journal of Swarm Intelligence Research, 2010.

[85] Abolfazl Pourrajabian, Reza Ebrahimi, Masoud Mirzaei, Mehrzad Shams, Applying genetic algorithms for solving nonlinear algebraic equations, Applied Math. and Computation, 219/24, 11483-11494, Elsevier 2013

[86] Pourjafari E, Mojallali H., Solving nonlinear equations systems with a new approach based on invasive weed optimization algorithm and clustering, Swarm Evol Comput 4:33-43, Elsevier 2012.

[87] William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd ed., ISBN-13 978-0-521-88068-8, Cambridge Univ. Press, 2007

[88] Liangdong Qu, Dengxu He, Jinzhao Wu, Hybrid Coevolutionary Glowworm Swarm Optimization Algorithm with Simplex Search Method for System of Nonlinear Equations, Journal of Information & Computational Science 8: 13, 2011

[89] Alfio Quarteroni, Riccardo Sacco, Fausto Saleri, Numerical Mathematics, ISBN 0-387-98959-5 SPIN 10747955, Springer 2000

[90] Rajabioun R, Cuckoo optimization algorithm, Appl Soft Comput 11:5508-5518, 2011

[91] Ramadas, G.C.V., Fernandes, E.M.G.P.: Combined mutation differential evolution to solve systems of nonlinear equations. In: 11th International Conference of Numerical Analysis and Applied Mathematics 2013 AIP Conf. Proc., vol. 1558, 582-585, 2013

[92] Gisela C. V. Ramadas, Edite M.G.P. Fernandes, Solving Systems of Nonlinear Equations by Harmony Search, Proceedings of the 13th International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2013 24-27 June, 2013.

[93] Ramadas G.C.V., Fernandes E.M.G.P., Rocha A.M.A.C., Multiple Roots of Systems of Equations by Repulsion Merit Functions, In: Murgante B. et al. (eds) Computational Science and Its Applications--ICCSA 2014, LNCS, vol 8580. Springer 2014

[94] Hongmin Ren, Long Wu, Weihong Bi, Ioannis K. Argyros, Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals, Applied Mathematics and Computation, 219/23/1, 10967-10973, Elsevier 2013

[95] Natalia Rosca, Monte Carlo Methods For Systems Of Linear Equations, Studia Univ. "Babes--Bolyai", Mathematica, LI/1, 2006

[96] Antonio Rovira, Manuel Valdes, Jesus Casanova, A new methodology to solve non-linear equation systems using genetic algorithms. Application to combined cycle gas turbine simulation, Int. J. Numer. Meth. Engng, 63:1424-1435, Wiley InterScience, DOI: 10.1002/nme.1267, 2005

[97] Yousef Saad, Iterative methods for sparse linear systems, 2nd ed., Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-534-7, 2003

[98] M. Sammany, E. Pelican, T. A. Harak, Hybrid neuro-genetic based method for solving ill-posed inverse problem occurring in synthesis of electromagnetic fields, Computing, 9/4, Springer 2011

[99] N.D.K. Al-Shakarchy, E.H. Abd, Application of Neural Network for Solving Linear Algebraic Equations, Journal of Kerbala University, Vol. 10 No. 4 Scientific, 2012

[100] Sharapov, R.R. & Lapshin, A.V. Pattern Recognit. Image Anal. (2006) 16: 392. https://doi.org/10.1134/S1054661806030084, Springer 2006

[101] Kuntjoro Adji Sidarto and Adhe Kania, All Solutions of Systems of Nonlinear Equations Using Spiral Dynamics Inspired Optimization with Clustering, JACIII Vol.19 No.5, pp. 697-707, doi: 10.20965/jaciii.2015.p0697, 2015

[102] Storn, R.; Price, K., Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 11: 341-359, Springer 1997

[103] Tabor, M. Chaos and Integrability in Nonlinear Dynamics: An Introduction, New York: Wiley, 1989

[104] K. Tamura and K. Yasuda, Spiral Dynamics Inspired Optimization, J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.15, No.8, pp. 1116-1122, 2011

[105] Dong ping Tian, A Review of Convergence Analysis of Particle Swarm Optimization, International Journal of Grid and Distributed Computing, 6/6, pp. 117-128, http://dx.doi.org/10.14257/ijgdc.2013.6.6.10, 2013

[106] Toutounian, F., Saberi-Nadjafi, J., Taheri, S.H., A Hybrid of the Newton-GMRES and Electromagnetic Meta-Heuristic Methods for Solving Systems of Nonlinear Equations, J Math Model Algor (2009) 8: 425, https://doi.org/10.1007/s10852-009-9117-, Springer 2009

[107] I.G. Tsoulos, Athanassios Stavrakoudis, On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods, Nonlinear Analysis: Real World Applications 11 2465-2471, Elsevier 2010

[108] Oguz Emrah Turgut, Mert Sinan Turgut, Mustafa Turhan Cobana, Chaotic quantum behaved particle swarm optimization algorithm for solving nonlinear system of equations, Computers and Mathematics with Applications 68 (2014) 508-530, Elsevier 2014

[109] Youshen Xia, Jun Wang, Donald L. Hung, Recurrent Neural Networks for Solving Linear Inequalities and Equations, IEEE Transactions On Circuits And Systems--I: Fundamental Theory And Applications, 46/4, 1999

[110] Y. H. Xia and Y. G. Li, An Improved Quantum Ant Colony Algorithm of Solving Nonlinear Equation Groups, Advanced Materials Research, Vols. 1049-1050, pp. 1363-1366, 2014

[111] He XS., Wang F., Wang Y., Yang XS. Global Convergence Analysis of Cuckoo Search Using Markov Theory. In: Yang XS. (eds) Nature-Inspired Algorithms and Applied Optimization. Studies in Computational Intelligence, vol 744. Springer, Cham, 2018

[112] X.-S. Yang, S. Deb, Cuckoo search via Levy flights, World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), IEEE Publications, pp. 210-214, arXiv:1003.1594v1, December 2009

[113] Yang, X.S., Nature-inspired metaheuristic algorithms, second edition, Luniver Press, Beckington, 2010

[114] Yan Yang, Yongquan Zhou, Qiaoqiao Gong, Hybrid Artificial Glowworm Swarm Optimization Algorithm for Solving System of Nonlinear Equations, Journal of Computational Information Systems 6:10, 3431-3438, 2010

[115] Yang, X-S. (2012) Flower pollination algorithm for global optimization, Unconventional Computation and Natural Computation, pp.240-249, Springer, 2012

[116] Hongtao Ye, Wenguang Luo, Zhenqiang Li, Convergence Analysis of Particle Swarm Optimizer and Its Improved Algorithm Based on Velocity Differential Evolution, Comput Intell Neurosci., 2013

[117] Quan Yuan, George Yin, Analyzing Convergence and Rates of Convergence of Particle Swarm Optimization Algorithms Using Stochastic Approximation Methods, IEEE Transactions on Automatic Control, 60/7, 2015

[118] Wang Q., Zeng J., Jie J. Modified Particle Swarm Optimization for Solving Systems of Equations, In: Huang DS., Heutte L., Loog M. (eds) Advanced Intelligent Computing Theories and Applications. ICIC 2007. Communications in Computer and Information Science, vol 2. Springer 2007

[119] Xiaogang Wang, Ning Zhou, Pattern Search Firefly Algorithm for Solving Systems of Nonlinear Equations, 2014 Seventh International Symposium on Computational Intelligence and Design, DOI: 10.1109/ISCID.2014.222, IEEE 2014

[120] Dennis Weyland, A critical analysis of the harmony search algorithm--How not to solve Sudoku, Operations Research Perspectives 2 (2015) 97-105, Elsevier 2015

[121] Wu Z, Kang L, A fast and elitist parallel evolutionary algorithm for solving systems of non-linear equations, Congress on Evolutionary Computation (IEEE Cat. No. 03TH8674), vol. 2, 1026-1028, 2003

[122] Zhou Y., Huang H., Zhang J., Hybrid Artificial Fish Swarm Algorithm for Solving Ill-Conditioned Linear Systems of Equations. In: Chen R. (eds) Intelligent Computing and Information Science. Communications in Computer and Information Science, vol 134. Springer, 2011

[123] Yongquan Zhou, Jiakun Liu, Guangwei Zhao, Leader Glowworm Swarm Optimization Algorithm for Solving Nonlinear Equations Systems, Przeglad Elektrotechniczny (Electrical Review), ISSN 0033-2097, R. 88 NR 1b/2012, 2012

[124] Yongquan Zhou, Qifang Luo, Huan Chen, A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems, Hindawi Publishing Corporation, Journal of Applied Mathematics, Vol. 2013, http://dx.doi.org/10.1155/2013/757391, 2013

[125] Ziwei Zhou, Haibin Duan, Yuhui Shi, Convergence analysis of brain storm optimization algorithm, 2016 IEEE Congress on Evolutionary Computation (CEC), 24-29 July 2016, IEEE, 2016

Liviu Mafteiu-Scai, Emanuela Mafteiu, Roxana Mafteiu-Scai

Computer Science Department, West University of Timisoara, Timisoara, Romania

liviu.mafteiu@e-uvt.ro, emanuela.mafteiu@e-uvt.ro, roxana.mafteiu97@e-uvt.ro
Table 1. A short presentation of classical methods: advantages and disadvantages

DIRECT METHODS
- recommended for systems with dense coefficient matrices, but not for systems with more than 100 equations.

1. Gaussian elimination
- applicable to upper and lower triangular systems;
- for sparse matrices, bandwidth reduction improves the solving process;
- numerical errors: approximation errors accumulate in the last equation when computing x_n;
- the algorithm fails if the selected pivot is zero or very small;
- full pivoting is difficult to implement and can be unstable.

2. Gauss-Jordan
- eliminates the back-substitution phase;
- most often used to compute the inverse matrix.

3. LU factorization
- recommended for repeatedly solving systems that differ only in the right-hand-side (free-term) column;
- the elimination phase is more complex;
- the number of operations is greater than in Gaussian elimination;
- numerical errors.

4. Cholesky factorization
- for symmetric matrices, the computational effort is halved compared with the Gauss or LU methods;
- recommended for systems with positive definite matrices;
- pivoting is not required;
- only the lower (or upper) triangle of the matrix must be stored, requiring only n(n + 1)/2 memory locations.

5. QR factorization
- can be applied even to singular matrices;
- more stable than LU factorization;
- can also solve systems with rectangular matrices;
- ensures good accuracy;
- recommended for ill-conditioned systems;
- fails if the matrix does not have full rank (all equations must be linearly independent).

ITERATIVE METHODS
- recommended for large systems of equations (10^3-10^6 equations);
- negligible rounding errors;
- negligible truncation errors;
- simpler implementation.

1. Gauss-Seidel
- better convergence than the Jacobi method;
- the implementation requires little memory space;
- converges in some cases where the Jacobi method does not;
- very good accuracy;
- recommended for diagonally dominant systems;
- sufficient convergence condition: the matrix is diagonally dominant, or symmetric and positive definite;
- sometimes it does not converge.

2. Jacobi
- easy to understand and implement;
- very good accuracy;
- recommended for diagonally dominant systems;
- weak convergence, or no convergence at all.

3. Southwell
- for systems with positive definite, nonsingular matrices whose main-diagonal elements are all equal.

4. Conjugate Gradient
- recommended for systems with sparse matrices in which each equation has a certain internal regularity;
- small errors;
- fast convergence for well-conditioned systems;
- erratic, often weak, convergence for ill-conditioned systems;
- guaranteed convergence only for symmetric positive definite systems;
- generalized variants of the method converge more slowly.

5. Preconditioned Conjugate Gradient
- recommended for ill-conditioned systems with a large condition number.

6. BiConjugate Gradient
- for systems with non-symmetric, non-singular matrices.

7. Newton methods
- sometimes the convergence is erratic/irregular and the method may fail;
- good convergence even when the condition number of the matrix is very high.

8. Broyden
- very good convergence: at most 2n steps for a linear system of size n.

9. Chebyshev
- very good convergence;
- can also be applied to non-symmetric systems;
- only for systems with positive definite matrices.
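
To make the iterative half of Table 1 concrete, the Python sketch below implements the Gauss-Seidel iteration with a residual-based stopping test. It is a minimal illustration under stated assumptions, not an implementation taken from any surveyed paper; the function name, tolerance, and the small test system are chosen here only for demonstration.

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b by Gauss-Seidel iteration.

    Convergence is guaranteed when A is strictly diagonally dominant
    or symmetric positive definite (the sufficient conditions in Table 1).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for sweeps in range(1, max_iter + 1):
        for i in range(n):
            # x[:i] already holds this sweep's updated values; using them
            # immediately is what distinguishes Gauss-Seidel from Jacobi.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        # Stop once the residual norm falls below the tolerance.
        if np.linalg.norm(A @ x - b) < tol:
            return x, sweeps
    return x, max_iter

# Example: a small, strictly diagonally dominant system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, sweeps = gauss_seidel(A, b)
print(x, sweeps)

# Cross-check against a direct (LU-based) solver, cf. the direct
# methods in Table 1.
assert np.allclose(x, np.linalg.solve(A, b))

On this diagonally dominant example the loop converges in a few sweeps; on a matrix violating the sufficient conditions the same loop may stagnate or diverge, which is the "sometimes it does not converge" entry in the table.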