# Basins of Attraction for Various Steffensen-Type Methods

1. Introduction

Suppose that we wish to find a solution $\xi$ of a nonlinear equation $f(z) = 0$ numerically, where $z \in \mathbb{C}$ and the function $f : \Psi \subseteq \mathbb{C} \to \mathbb{C}$ is analytic. Starting from some $z_0$, Steffensen's method [1] uses the iteration:

$$z_{n+1} = z_n - \frac{f(z_n)^2}{f(z_n + f(z_n)) - f(z_n)}, \quad n = 0, 1, 2, \ldots \qquad (1)$$

If the initial value $z_0$ is close enough to a simple root of $f(z)$, this iteration converges quadratically (see [2, 3] for more information about Steffensen-type methods). If the root is not simple, the convergence becomes linear. A tremendous amount of effort has since been made toward improving the convergence and/or the simplicity of the method, resulting in the modified Steffensen method in divided difference form [4] as follows ($m$ is the multiplicity):
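As an aside, iteration (1) is straightforward to sketch in a few lines of code; the test function $f(z) = z^2 - 2$, the starting point, and the tolerances below are illustrative choices, not taken from the paper.

```python
# Hedged sketch of Steffensen's iteration (1); the test function and
# starting point are illustrative, not from the paper.
def steffensen(f, z0, tol=1e-12, max_iter=100):
    """Iterate z <- z - f(z)^2 / (f(z + f(z)) - f(z)) until |f(z)| < tol."""
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            break
        denom = f(z + fz) - fz              # divided-difference denominator
        if denom == 0:
            raise ZeroDivisionError("degenerate step; pick another z0")
        z = z - fz * fz / denom             # Steffensen update (1)
    return z

root = steffensen(lambda z: z * z - 2.0, 1.0)   # converges to sqrt(2)
```

Near a simple root the quadratic convergence typically brings the residual below the tolerance in a handful of steps.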

$$z_{n+1} = z_n - m\,\frac{f(z_n)}{f[z_n, w_n]}, \quad n = 0, 1, 2, \ldots, \qquad (2)$$

wherein $w_n = z_n + f(z_n)$ and $f[z_n, w_n]$ is the two-point divided difference. Notice that the modification for multiple roots given by (2) is the same as that given by Schröder for Newton's method. We remark that divided difference notation will be used throughout this paper.
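To illustrate, variant (2) can be sketched as follows; the double-root test function $f(z) = (z - 1)^2$ with $m = 2$ is an illustrative choice, not from the paper.

```python
# Hedged sketch of the multiplicity-aware variant (2): the Steffensen
# correction is multiplied by the multiplicity m of the root.
def steffensen_multiple(f, z0, m, tol=1e-10, max_iter=50):
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            break
        w = z + fz                          # w_n = z_n + f(z_n)
        dd = (f(w) - fz) / (w - z)          # two-point divided difference f[z, w]
        z = z - m * fz / dd                 # update (2)
    return z

f = lambda z: (z - 1.0) ** 2                # double root at z = 1
root = steffensen_multiple(f, 1.5, m=2)
```

Without the factor $m$, the iteration would approach the double root only linearly; the factor restores fast convergence.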

Note that Steffensen's method can be written in a more general form with one free nonzero parameter [5] as follows:

$$z_{n+1} = z_n - \frac{f(z_n)}{f[z_n, w_n]}, \quad n = 0, 1, 2, \ldots, \qquad (3)$$

wherein $w_n = z_n + \beta f(z_n)$ and $\beta \in \mathbb{C} \setminus \{0\}$. It is known that this method has second order of convergence for every nonzero value of the parameter $\beta$. However, as we will see in this paper, the parameter $\beta$ plays an important role in the choice of the initial estimate, in the stability of the method, and so forth.
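A sketch of scheme (3) makes the role of $\beta$ concrete (the cubic test function and starting point are illustrative choices): $\beta = 1$ recovers classical Steffensen, while a very small $\beta$ makes $w_n \approx z_n$, so the divided difference approaches $f'(z_n)$ and the iteration behaves like Newton's method.

```python
# Hedged sketch of the uniparametric scheme (3); beta is the free
# nonzero parameter, and beta = 1 recovers Steffensen's method (1).
def steffensen_beta(f, z0, beta, tol=1e-10, max_iter=100):
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            break
        w = z + beta * fz                   # w_n = z_n + beta * f(z_n)
        dd = (f(w) - fz) / (w - z)          # f[z_n, w_n]
        z = z - fz / dd                     # update (3)
    return z

f = lambda z: z ** 3 - 1.0
# A small beta mimics Newton's method; here the iterate stays complex
# and lands on one of the three cube roots of unity.
r = steffensen_beta(f, -1.0 + 0.5j, beta=0.001)
```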

Usually, efficiency indices are used to compare the behavior of iterative methods. Traub in [5] used the operational efficiency index $p^{1/op}$, where $p$ is the convergence rate and $op$ is the number of products/quotients per iteration. Ostrowski in [6] defined the classical efficiency index $p^{1/d}$, wherein $d$ is the number of functional evaluations per iteration. Similarly, one may use the informational index defined as $p/d$. In this work, we introduce another criterion for comparing iterative methods: the basins of attraction (see, e.g., [7-13]).

The dynamical analysis of the rational function associated with an iterative method gives us important information about its stability and reliability. There exists a huge number of publications related to the numerical and dynamical properties of iteration functions (see, e.g., [14-18]).

It is known that the basin of attraction of an iterative method on a particular function is a useful tool for visualizing how an iteration function behaves for different initial points [19].

The remaining sections of this paper are organized as follows. Section 2 provides some basic concepts needed to deal with the dynamics associated with iterative functions. It is followed by Section 3, wherein the basins of attraction of the uniparametric Steffensen scheme (3) on higher degree polynomials are discussed, using the software Mathematica 8. Section 4 lists the methods that we are going to compare and analyzes their dynamics on quadratic polynomials. Next, in Section 5, we present the basins of attraction for various high-order Steffensen-type methods. Section 6 presents some conclusions and discussion to end the paper.

2. Basic Concepts

We need some basic definitions and notions before proceeding to Section 3, so we now recall them briefly.

Let $R : \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ be a rational map on the Riemann sphere. A periodic point $z_0$ of period $m$ is such that $R^m(z_0) = z_0$, where $m$ is the smallest such integer. If $m = 1$, $z_0$ is called a fixed point of $R$; a fixed point $z_0$ is called attracting if $|R'(z_0)| < 1$, repelling if $|R'(z_0)| > 1$, and neutral if $|R'(z_0)| = 1$. If the derivative is zero, the point is called superattracting [20]. If $R$ is the rational function associated with an iterative method on a function $f$, the fixed points of $R$ different from the roots of $f(z) = 0$ are called strange fixed points.

Note that the Julia set of a nonlinear map $R(z)$, denoted $J(R)$, is the closure of the set of its repelling periodic points. The complement of $J(R)$ in the Riemann sphere is the Fatou set $F(R)$.

A point $z_0$ is in the Julia set if and only if the dynamics in a neighborhood of $z_0$ shows strong dependence on the starting conditions, so that nearby initial conditions lead to wildly different behavior after a number of iterations.

Definition 1 (basin of attraction [21]). If a fixed point $p$ of $R$ is attracting, then all nearby points of $p$ are attracted toward $p$ under the action of $R$, in the sense that their iterates under $R$ converge to $p$. The collection of all points whose iterates under $R$ converge to $p$ is called the basin of attraction of $p$, denoted by $B_p = \{x \in \mathbb{C} : \lim_{k \to \infty} R^k(x) = p\}$, with $R^k(x) = R(R(\cdots R(x) \cdots))$ the $k$-fold composite map of $x$ under $R$.

Lemma 2 (see [20]). Every attracting periodic orbit is contained in the Fatou set of $R$. In fact, the entire basin of attraction $B_p$ of an attracting periodic orbit, which is an open set, is contained in the Fatou set. Conversely, every repelling periodic orbit is contained in the Julia set.

Kalantari in [22] coined the term "polynomiography" for the art and science of visualization in the approximation of roots of polynomials using iteration functions. Note that a polynomiograph may or may not be a fractal image; either way, this does not diminish its uniqueness.

As we endeavor to solve increasingly complex problems, computer algebra systems (CAS) are becoming more and more important to our work; we use them to help with calculations too time-consuming to carry out by hand. Mathematica 8 is one of the most popular CASs available today [23]. Due to its wide applicability, power, and ease of use [24], we apply this programming package to find the chaotic behaviors of Steffensen-type methods.

3. Basins of Attraction for the Uniparametric Steffensen Scheme

A wide dynamical study of Steffensen's method (1) on quadratic and cubic polynomials has been developed in [12]. In that paper, it was shown that no Scaling Theorem is possible for this derivative-free scheme and, therefore, the analysis was made on particular polynomials. In general, it can be concluded that the stability of derivative-free schemes is worse than that of methods with derivatives. In fact, in the dynamical planes associated with some of these methods, the basin of attraction of infinity frequently appears. Indeed, infinity can be a strange fixed point: if $R(z)$ is the rational function associated with the method and $Q(z) = 1/R(1/z)$, then infinity is a fixed point if $Q(0) = 0$. If this happens, infinity (as any other strange fixed point) can be attracting, repelling, or neutral; its character is obtained by analyzing the value of $Q'(0)$.

Before providing the chaotic behaviors of multipoint Steffensen-type methods, we conclude some points on the uniparametric Steffensen scheme (3).

Lemma 3. The operator associated with the uniparametric Steffensen scheme on $z^2 - 1$ is $O_{st}(z) = (1 - \beta z + z^2 + \beta z^3)/(-\beta + 2z + \beta z^2)$. The only strange fixed point of $O_{st}(z)$ is infinity, and its character is neutral.

Since its character is neutral, the behavior of infinity as a fixed point can in practice be attracting or repelling, and this can also be observed when the method is applied to other functions, as we will see in Figures 5 to 10.
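Lemma 3 can be checked numerically: clearing denominators in $O_{st}(z) = z$ leaves $1 - z^2 = 0$, so the only finite fixed points are the roots $\pm 1$ and every strange fixed point must lie at infinity. A small sketch (the value of $\beta$ is an arbitrary illustrative choice):

```python
# Numerical check of Lemma 3: z = 1 and z = -1 are fixed points of O_st
# for any nonzero beta, while a generic point (e.g., z = 0.5) is not.
def O_st(z, beta):
    return (1 - beta * z + z**2 + beta * z**3) / (-beta + 2 * z + beta * z**2)

beta = 0.7                                   # arbitrary nonzero test value
roots_fixed = all(abs(O_st(z, beta) - z) < 1e-12 for z in (1.0, -1.0))
generic_not_fixed = abs(O_st(0.5, beta) - 0.5) > 1e-6
```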

Toward this end and hereafter, we take a rectangle $D = [-3, 3] \times [-3, 3] \subset \mathbb{C}$ and assign a color to each point $z_0 \in D$ according to the simple root to which the corresponding iterative method, starting from $z_0$, converges; we mark the point as black if the method does not converge within the maximum number of iterations. In this way, we distinguish the attraction basins of the different methods by their colors.

The criteria used in our Mathematica 8 codes [25-27] are as follows: the maximum number of iterations is 100; that is, if the method does not reach the considered accuracy after 100 full iteration steps, we assign the black color and consider the point as not convergent. The considered accuracy requires the residual of the function to be less than $10^{-4}$. Note that small white points mark the exact locations of the simple zeros in our fractal patterns.
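The coloring procedure just described can be sketched as follows (the paper's figures were generated in Mathematica 8; this Python analogue with a coarse grid and scheme (3) as the iteration is purely illustrative):

```python
# Hedged sketch of basin coloring over D = [-3, 3] x [-3, 3]: each grid
# point is iterated with scheme (3); the color is the index of the root
# reached (residual below 1e-4 within 100 iterations) or -1 (black).
def basin_grid(f, roots, beta, n=20, max_iter=100, tol=1e-4):
    grid = []
    for i in range(n):
        row = []
        for j in range(n):
            z = complex(-3 + 6 * j / (n - 1), -3 + 6 * i / (n - 1))
            color = -1
            for _ in range(max_iter):
                fz = f(z)
                if abs(fz) < tol:
                    # color by the nearest root
                    color = min(range(len(roots)), key=lambda k: abs(z - roots[k]))
                    break
                w = z + beta * fz
                denom = f(w) - fz
                if denom == 0 or w == z:
                    break                   # degenerate step: leave black
                z = z - fz * (w - z) / denom
            row.append(color)
        grid.append(row)
    return grid

roots1 = [1, -1, 1j, -1j]                   # roots of the first test, p1
g = basin_grid(lambda z: z * z * z * z - 1, roots1, beta=0.001)
```

Rendering `g` as an image (one color per root index, black for -1) reproduces the kind of basin pictures discussed in this section.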

In what follows, we define the test problems of this paper. For the first test, we have taken the following function, with roots $\{-1, -i, i, 1\}$:

$$p_1(z) = z^4 - 1. \qquad (4)$$

The second test problem is a polynomial whose simple zeros are $\{-2.20663, -0.45318 - 0.78493i, -0.45318 + 0.78493i, 0.906359, 1.10332 - 1.911i, 1.10332 + 1.911i\}$:

$$p_2(z) = z^6 + 10z^3 - 8. \qquad (5)$$

The third test problem is chosen as follows:

$$p_3(z) = z^4 - \frac{1}{z}. \qquad (6)$$

The roots are $\{0.309017 + 0.951057i, 0.309017 - 0.951057i, 1, -0.809017 + 0.587785i, -0.809017 - 0.587785i\}$.

The fourth test problem is taken into account as

$$p_4(z) = z^4 - z + i. \qquad (7)$$

The roots are $\{-0.759845 + 0.592595i, -0.532605 - 1.08829i, 0.181924 + 0.732098i, 1.11052 - 0.236405i\}$.

The last test problem we have chosen is as follows, with roots $\{-1.09112 + 0.629961i, -1.25992i, 1.09112 + 0.629961i\}$:

$$p_5(z) = z^3 - 2i. \qquad (8)$$

We can observe in Figures 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 that scheme (3) with the choice $\beta = 0.001$ is very robust in contrast to the other cases. Method (3) has different radii of convergence depending on the free nonzero parameter $\beta$. This also holds, for all the test functions, for Steffensen's scheme ($\beta = 1$).

From the graphical comparisons in this section, it is obvious that in the uniparametric Steffensen scheme (3) the basins of attraction can be widely improved by choosing a small value for the nonzero free parameter $\beta$. In fact, from the Taylor series expansions of the divided difference and of the derivative around the solution $\xi$ of the nonlinear equation, we have

$$f[z_n, w_n] = f'(\xi) + \frac{f''(\xi)}{2}\left(2 + \beta f'(\xi)\right)(z_n - \xi) + O\left((z_n - \xi)^2\right), \qquad f'(z_n) = f'(\xi) + f''(\xi)\,(z_n - \xi) + O\left((z_n - \xi)^2\right). \qquad (9)$$

Let us note that, when $\beta \to 0$, both Taylor expansions coincide. It is experimentally observable that (almost always) when the forward divided difference is used in the denominator of Steffensen's scheme, that is, $f[x_n, w_n] = (f(x_n + \beta f(x_n)) - f(x_n))/(\beta f(x_n))$, very small negative values of $\beta$ result in larger basins of attraction and higher speed, while when the backward divided difference is used in the denominator, that is, $f[x_n, w_n] = (f(x_n) - f(x_n - \beta f(x_n)))/(\beta f(x_n))$, very small positive values of $\beta$ yield larger basins of attraction and higher speed.
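The two variants just described can be written side by side; a hedged sketch (the test function $f(x) = x^3 - x - 2$ and the starting point are illustrative choices):

```python
# Forward vs. backward divided-difference variants of scheme (3); the text
# pairs the forward form with small negative beta and the backward form
# with small positive beta.
def steffensen_forward(f, x, beta, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dd = (f(x + beta * fx) - fx) / (beta * fx)   # forward difference
        x = x - fx / dd
    return x

def steffensen_backward(f, x, beta, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dd = (fx - f(x - beta * fx)) / (beta * fx)   # backward difference
        x = x - fx / dd
    return x

f = lambda x: x**3 - x - 2.0                         # simple real root near 1.5214
r1 = steffensen_forward(f, 2.0, beta=-0.001)
r2 = steffensen_backward(f, 2.0, beta=0.001)
```

With these small parameter values, both variants behave essentially like Newton's method and reach the same root.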

This finding is very useful when we extend Steffensen's scheme to systems of nonlinear equations by defining the first-order divided difference operator properly. In fact, by choosing very small values for the free nonzero parameter, the convergence radii will be similar to those of Newton's method for systems of nonlinear equations, while there is no need to compute the Jacobian matrix, which is costly.
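For systems, the idea can be sketched with a divided-difference matrix replacing the Jacobian; the $2 \times 2$ test system below and all implementation details are hypothetical illustrations, as the paper does not give an implementation.

```python
# Sketch of a multivariate Steffensen-type step: the Jacobian is replaced
# by a divided-difference matrix M with columns (F(x + h_j e_j) - F(x))/h_j,
# where h_j = beta * F_j(x). Solves a small 2x2 nonlinear system.
def steffensen_system(F, x, beta=1e-3, tol=1e-10, max_iter=50):
    n = len(x)
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        # Build the divided-difference matrix column by column.
        M = [[0.0] * n for _ in range(n)]
        for j in range(n):
            h = beta * Fx[j] if Fx[j] != 0 else beta
            xp = list(x)
            xp[j] += h
            Fp = F(xp)
            for i in range(n):
                M[i][j] = (Fp[i] - Fx[i]) / h
        # Solve M d = Fx by Cramer's rule (n = 2 only), then update x.
        det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
        d0 = (Fx[0] * M[1][1] - Fx[1] * M[0][1]) / det
        d1 = (M[0][0] * Fx[1] - M[1][0] * Fx[0]) / det
        x = [x[0] - d0, x[1] - d1]
    return x

# Hypothetical test system: x^2 + y^2 = 5, x*y = 2  ->  (x, y) = (2, 1).
F = lambda v: [v[0]**2 + v[1]**2 - 5.0, v[0] * v[1] - 2.0]
sol = steffensen_system(F, [1.8, 0.8])
```

No Jacobian is ever formed: each column of `M` costs one extra evaluation of `F`, which is the point of the derivative-free approach.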

Moreover, by applying an iteration on $\beta$ and making the process one with memory, we can obtain cubic order of convergence. Another interesting point is the computational time required for obtaining the fractal behaviors of such schemes. In fact, in our numerical experiments, the computational CPU time falls dramatically when $\beta$ is chosen as a very small value, while choosing $\beta = 1$ takes more time to find the fixed points of the iteration functions.

Remark 4. The chaotic behavior of Steffensen's method can simply be improved by choosing very small values for the free nonzero parameter (see Figures 6(c), 8(c), and 10(c)). In this case, the fractal behavior of the scheme tends to Newton's fractal. In fact, we want the methods to have very few black points on all examples, not just one.

4. Behavior of Several Steffensen-Like Methods on Quadratic Polynomials

Many iterative methods have been improved by using various techniques [28, 29]. A drawback of Newton's method is that, for many particular choices of the function $f$, especially in hard problems, the calculation of the derivatives takes a good deal of time. That is why higher order derivative-free methods are better root solvers and have recently been in focus. The most important merit of Steffensen's method is that it has quadratic convergence like Newton's method. For this reason, in this section, we list some of the multipoint derivative-free methods we consider for comparisons, and a brief dynamical analysis of these methods applied to quadratic polynomials is made.

Kung and Traub in the pioneering paper [30] provided the following two- and three-step derivative-free families ($\beta \in \mathbb{C} \setminus \{0\}$) of methods, with orders four and eight, respectively:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (10)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (11)

We would like to study the general convergence of methods (10) and (11) for quadratic polynomials. To be more precise (see [31, 32]), a given method is generally convergent if the scheme converges to a root for almost every starting point and for almost every polynomial of a given degree. The main problem is that no Scaling Theorem can be stated for derivative-free methods (see, e.g., [33]). So, only the behavior on specific polynomials can be analyzed. In this paper, the polynomial p(z) = [z.sup.2] - 1 will be used for this purpose. Therefore, let us denote by [O.sub.KT4](z) ([O.sub.KT8](z)) the operator associated with the fourth-order (resp., eighth-order) scheme by Kung and Traub on p(z).

The fixed points of $O_{KT4}(z)$ are the roots of the equation $O_{KT4}(z) = z$, that is, $z = -1$, $z = 1$, and the strange fixed points, which are the zeros of the polynomial [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

The stability of the strange fixed points can be deduced from the graphical analysis of the respective stability function, that is, by representing graphically the regions of the complex plane in which the absolute value of the derivative of the operator, evaluated at the strange fixed point, is lower than one.

Lemma 5. The number of simple strange fixed points of the operator $O_{KT4}(z)$ is eight, and their stability is described in the following cases.

(i) Four of them are always repulsive, so they remain in the Julia set.

(ii) Two of them can be attractive (but not super attractive) in a small complex neighborhood of the origin.

(iii) Finally, two of the strange fixed points are attractive in a complex region around the origin and are superattractive for the following values of the parameter: $\beta \approx -0.406175 - 1.63893i$, $\beta \approx -0.406175 + 1.63893i$, $\beta \approx 0.276582 - 0.409103i$, and $\beta \approx 0.276582 + 0.409103i$.

The stability region of the complex plane where these four fixed points are attractive is represented in Figure 11.

In order to determine the critical points, we calculate the first derivative of $O_{KT4}(z)$. A classical result establishes that there is at least one critical point associated with each invariant Fatou component. It is clear that $z = -1$ and $z = 1$ are critical points and give rise to their respective Fatou components, but there exist in the family some free critical points, that is, critical points different from the roots, some of which depend on the value of the parameter. If any of these free critical points is near a stable strange fixed point, then the latter has its own basin of attraction.

Lemma 6. Analyzing the equation $O'_{KT4}(z) = 0$, we obtain fourteen free critical points, roots of the polynomial:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (12)

Moreover, we can state the following facts.

(a) There is no value of $\beta$ that makes a free critical point coincide with a strange fixed point.

(b) For $\beta \approx 0.05$, one free critical point coincides with the root $z = -1$, so in this case the number of free critical points is reduced to 13.

The real values of the critical points for a real range of values of the parameter are shown in Figure 12.

Let us state that the strange fixed points of $O_{KT8}(z)$ are the zeros of a polynomial of degree 39 whose coefficients depend on $\beta$.

The stability of the strange fixed points is described in the following lemma.

Lemma 7. The number of simple strange fixed points of the operator $O_{KT8}(z)$ is thirty-nine, and their stability is described in the following cases.

(i) Three of them are always repulsive, so they remain in the Julia set.

(ii) The other strange fixed points can be attractive or super attractive, in different complex regions.

The stability region of some of these fixed points is represented in Figure 13.

By analyzing these stability functions, it is deduced that there are wide regions of the complex plane where it is easy to find two or more attractive strange fixed points (as, e.g., for $\beta = -5i$ and $\beta = 5i$). Moreover, there also exist wide regions of stability near the origin. In fact, no attractive strange fixed points can be found for $|\beta| < 3/2$.

Analyzing the equation $O'_{KT8}(z) = 0$, we obtain sixty free critical points, which coincide with some of the strange fixed points in case these are superattracting.

An optimal fourth-order derivative-free method [34] was introduced by Liu et al. in the following form:

$$y_n = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)}, \qquad x_{n+1} = y_n - \frac{f[x_n, y_n] - f[y_n, z_n] + f[x_n, z_n]}{f[x_n, y_n]^2}\, f(y_n), \qquad (13)$$

where $z_n = x_n + f(x_n)$. It is denoted by L4. The rational function associated with this method on $p(z)$ is the operator $O_{L4}(z)$:

$$O_{L4}(z) = \frac{1 - 5z + 19z^2 - 27z^3 - z^4 + 23z^5 + 5z^6 + 7z^7 + 8z^8 + 2z^9}{(-1 + 2z + z^2)\left(1 - 2z + 3z^2 + 2z^3\right)^2}. \qquad (14)$$
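As a check, scheme (13) can be run directly; the quadratic test function and starting value below are illustrative choices, not from the paper.

```python
# Hedged sketch of the L4 scheme (13); dd is the two-point divided
# difference f[a, b], and z_n = x_n + f(x_n) as in the text.
def liu4(f, x, tol=1e-9, max_iter=50):
    dd = lambda a, b: (f(a) - f(b)) / (a - b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        z = x + fx
        y = x - fx * fx / (f(z) - fx)       # Steffensen predictor
        ddxy = dd(x, y)
        x = y - (ddxy - dd(y, z) + dd(x, z)) / ddxy**2 * f(y)
    return x

r = liu4(lambda t: t * t - 3.0, 2.0)        # converges toward sqrt(3)
```

The scheme uses three function evaluations per step and no derivatives, in line with the fourth-order optimality the text attributes to it.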

Lemma 8. The number of simple strange fixed points of the operator $O_{L4}(z)$ is seven, $\{-3.4251, -1.98276 - 0.105743i, -1.98276 + 0.105743i, 0.109953 - 0.282276i, 0.109953 + 0.282276i, 0.585354 - 0.246666i, 0.585354 + 0.246666i\}$, and their stability is described in the following cases.

(i) All the complex strange fixed points are repulsive, so they remain in the Julia set.

(ii) The real strange fixed point is attracting, but not superattracting.

Lemma 9. The equation $O'_{L4}(z) = 0$ yields the polynomial $1 + 5z - 31z^2 + 23z^3 + 64z^4 + 30z^5 + 4z^6$, whose roots are the free critical points: $\{-3.94434, -2.25291, -1.85186, -0.114261, 0.331686 - 0.151469i, 0.331686 + 0.151469i\}$.

Let us remark that the existence of a free critical point near the real attracting fixed point gives rise to its own basin of attraction, as can be seen in Figure 14.

Cordero et al. in [35] proposed a seventh-order method which is derivative-free:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (15)

wherein $w_n = x_n + f(x_n)$. We will denote it by C7.

The rational function associated with C7 on $p(z)$ is $O_{C7}(z)$:

$$O_{C7}(z) = \frac{1 - 9z + 13z^2 + 65z^3 - 112z^4 - 64z^5 + 119z^6 - 22z^7 - 98z^8 + 71z^9 + 152z^{10} + 85z^{11} + 21z^{12} + 2z^{13}}{(-1 + 2z + z^2)^3\left(1 - 4z + 14z^2 + 8z^3 + 23z^4 + 12z^5 + 6z^6\right)}. \qquad (16)$$

Lemma 10. The simple strange fixed points of the operator $O_{C7}(z)$ are

(i) $\{-2.30882 - 0.732518i, -2.30882 + 0.732518i, -2.05354 - 0.276071i, -2.05354 + 0.276071i, -1.87555, -0.411291, 0.200847 - 0.0392285i, 0.200847 + 0.0392285i, 0.638264 - 0.0474315i, 0.638264 + 0.0474315i\}$, all of which are finite and repulsive, as the derivative of the operator evaluated at each of them is greater than one in absolute value;

(ii) infinity, which is also a strange fixed point and whose character is neutral.

Lemma 11. The free critical points of $O_{C7}(z)$ are the roots of the polynomial $1 - 9z + 32z^2 + 96z^3 - 399z^4 - 2323z^5 + 1676z^6 + 9274z^7 + 2233z^8 - 13025z^9 - 12426z^{10} + 1194z^{11} + 8849z^{12} + 6817z^{13} + 2694z^{14} + 612z^{15} + 76z^{16} + 4z^{17}$.

As all the strange fixed points are repulsive and, therefore, lie in the Julia set, the position of the free critical points in the complex plane is of no interest.

Zheng et al. in [36] presented the following eighth-order derivative-free family without memory:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (17)

which will be denoted by Z8.

Lemma 12. The operator $O_{Z8}(z)$ has ten simple strange fixed points, roots of the equation [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Nine of these fixed points are repulsive for all values of the parameter $\beta$; however, one of them can be attractive (even superattractive) in a small complex region around the origin. In Figure 15, the stability function of this specific fixed point is shown.

Again, the existence of values of the parameter that yield attracting strange fixed points forces us to analyze the possibility of free critical points. As we know, when both elements coexist, basins of attraction of fixed points different from the roots appear.

Lemma 13. Analyzing the equation $O'_{Z8}(z) = 0$, we obtain the free critical points:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (18)

Moreover,

(i) if $\beta = 1/2$, then $cr_2(1/2) = cr_4(1/2) = -1$ and there are only two free critical points;

(ii) also, when $\beta = -1/2$, then $cr_1(-1/2) = cr_3(-1/2) = 1$ and there are only two free critical points;

(iii) finally, if $\beta = -\sqrt{2}i$ or $\beta = \sqrt{2}i$, then $cr_3(\pm\sqrt{2}i) = cr_4(\pm\sqrt{2}i) = \pm\sqrt{2}i$ and the number of free critical points is three.

Soleymani et al. in [37] proposed the following optimal three-step iteration family, including four function evaluations, just like (11) and (17):

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (19)

where $a_1 = f[w_n, x_n] + f(w_n)a_3 - (w_n - x_n)a_2$, $a_2 = f[w_n, x_n, y_n] + a_3 f[w_n, y_n]$, and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (20)

Let us denote this method by S8; the rational function associated with S8 when applied to the quadratic polynomial $p(z)$, $O_{S8}(z)$, is the same as that of Z8, so their dynamics on this polynomial are the same. Nevertheless, they are very different for other functions, as we will see in the following sections.

5. Attraction Basins for Various Steffensen-Type Methods

The aim herein is to use the basin of attraction as another tool for comparing the iteration algorithms given in Section 4.

We have used methods (10) with $\beta = 1$, (10) with $\beta = -0.001$, (11) with $\beta = 1$, (11) with $\beta = -0.001$, (13), (15), (17) with $\beta = 1$, (17) with $\beta = -0.001$, (19) with $\beta = 1$, and (19) with $\beta = -0.001$ for the test problems of Section 3. The fractal behavior of these comparisons is furnished in Figures 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, and 27. Let us remark that the dynamics on Test 3 is similar to that on the other test functions, so it is not included.

Remark 14. According to the discussion at the end of Section 4, the Steffensen-type methods, in which there is no nonzero free parameter in their structures, are not competitive and we expect to have small basins of attraction for them.

It is clear from the compared tests that even the higher order Steffensen-type methods (13) and (15), which lack the free nonzero parameter, have improved basins in contrast to Steffensen's scheme ($\beta = 1$). Furthermore, it should be noted that, among all the methods compared in this section, Soleymani et al.'s optimal eighth-order method (19) with $\beta = -0.001$ has the best performance, followed by (17) with $\beta = -0.001$. In this work, the computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU, 3.20 GHz, with 4 GB of RAM.

Remark 15. Unlike Newton-type methods, in which the higher the order, the smaller the convergence radius, in multipoint high-order efficient Steffensen-type methods an increase in the convergence order automatically improves the convergence radius, though the chaotic behavior of the schemes for inappropriate values of the free nonzero parameter is considerable. To avoid this chaotic behavior, one may follow Remark 4.

In order to summarize these results, we have attached a weight to the quality of the results obtained by each method: a weight of 1 for the smallest Julia set and a weight of 4 for a scheme with chaotic behavior alongside its convergence behavior. We then averaged those results, so that the smallest value corresponds to the best method overall and the highest to the worst. These data are presented in Table 1.

Notice again that the figures show how fast the method converges to a root based on shading to indicate speed of convergence.

6. Conclusions

In this paper, we have analyzed the dynamics of different Steffensen-type methods, first on quadratic polynomials and afterwards on other functions. We have concluded that, in Steffensen-type methods, the higher the order, the bigger the convergence radius. We have used Mathematica 8 for finding the fixed and critical points of the rational functions associated with the iteration functions and for drawing the basins of attraction. Besides, if the free nonzero parameter of the families analyzed tends to 0, their fractals tend to be the same as Newton's fractal. Choosing very small magnitudes for the free nonzero parameter gives us the ability to avoid computation of the Jacobian matrix when dealing with systems of nonlinear equations, while keeping an acceptable convergence radius. Although we have discussed simple zeros of nonlinear functions, such remarks are valid for Steffensen-type methods for finding multiple roots as well.

http://dx.doi.org/10.1155/2014/539707

Conflict of Interests

The authors declare that they do not have any conflict of interests in their submitted paper.

Acknowledgments

The authors are indebted to the referees for some interesting comments and suggestions. This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C0202.

References

[1] J. F. Steffensen, "Remarks on iteration," Skandinavisk Aktuarietidskrift, vol. 16, pp. 64-72, 1933.

[2] F. Soleymani, "On a bi-parametric class of optimal eighthorder derivative-free methods," International Journal of Pure and Applied Mathematics, vol. 72, no. 1, pp. 27-37, 2011.

[3] F. Soleymani, "Optimal fourth-order iterative methods free from derivatives," Miskolc Mathematical Notes, vol. 12, no. 2, pp. 255-264, 2011.

[4] Q. Zheng, P. Zhao, L. Zhang, and W. Ma, "Variants of Steffensen-secant method and applications," Applied Mathematics and Computation, vol. 216, no. 12, pp. 3486-3496, 2010.

[5] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall Series in Automatic Computation, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.

[6] A. M. Ostrowski, Solution of Equations and Systems of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.

[7] B. Neta, M. Scott, and C. Chun, "Basins of attraction for several methods to find simple roots of nonlinear equations," Applied Mathematics and Computation, vol. 218, no. 21, pp. 10548-10556, 2012.

[8] B. Neta and M. Scott, "On a family of Halley-like methods to find simple roots of nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 15, pp. 7940-7944, 2013.

[9] B. Neta and C. Chun, "On a family of Laguerre methods to find multiple roots of nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10987-11004, 2013.

[10] B. Neta, C. Chun, and M. Scott, "Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations," Applied Mathematics and Computation, vol. 227, pp. 567-592, 2014.

[11] S. Amat, S. Busquier, and S. Plaza, "Dynamics of the King and Jarratt iterations," Aequationes Mathematicae, vol. 69, no. 3, pp. 212-223, 2005.

[12] F. Chicharro, A. Cordero, J. M. Gutierrez, and J. R. Torregrosa, "Complex dynamics of derivative-free methods for nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 12, pp. 7023-7035, 2013.

[13] A. Cordero, J. Garcia-Maimo, J. R. Torregrosa, M. P. Vassileva, and P. Vindel, "Chaos in King's iterative family," Applied Mathematics Letters, vol. 26, no. 8, pp. 842-848, 2013.

[14] C. Chun, M. Y. Lee, B. Neta, and J. Dzunic, "On optimal fourth-order iterative methods free from second derivative and their dynamics," Applied Mathematics and Computation, vol. 218, no. 11, pp. 6427-6438, 2012.

[15] A. Cordero, J. R. Torregrosa, and P. Vindel, "Dynamics of a family of Chebyshev-Halley type methods," Applied Mathematics and Computation, vol. 219, no. 16, pp. 8568-8583, 2013.

[16] F. Soleimani, F. Soleymani, and S. Shateyi, "Some iterative methods free from derivatives and their basins of attraction for nonlinear equations," Discrete Dynamics in Nature and Society, vol. 2013, Article ID 301718, 10 pages, 2013.

[17] H. Susanto and N. Karjanto, "Newton's method's basins of attraction revisited," Applied Mathematics and Computation, vol. 215, no. 3, pp. 1084-1090, 2009.

[18] E. R. Vrscay and W. J. Gilbert, "Extraneous fixed points, basin boundaries and chaotic dynamics for Schroder and Konig rational iteration functions," Numerische Mathematik, vol. 52, no. 1, pp. 1-16, 1988.

[19] E. Ott, "Basin of attraction," Scholarpedia, vol. 1, no. 7, p. 1701, 2006.

[20] P. Blanchard, "Complex analytic dynamics on the Riemann sphere," Bulletin of the American Mathematical Society, vol. 11, no. 1, pp. 85-141, 1984.

[21] D. Gulick, Encounters with Chaos, Prentice-Hall, 1992.

[22] B. Kalantari, Polynomial Root-Finding and Polynomiography, World Scientific Publishing, Hackensack, NJ, USA, 2009.

[23] S. Wagon, Mathematica in Action, Springer, 3rd edition, 2010.

[24] http://www.wolfram.com/mathematica/new-in-8/new-andimproved-core-algorithms/index.html.

[25] C. Getz and J. Helmstedt, Graphics with Mathematica: Fractals, Julia Sets, Patterns and Natural Forms, Elsevier B. V., Amsterdam, The Netherlands, 2004.

[26] M. McClure, "Newton's method for complex polynomials," preprint version of a "Mathematical graphics" column from Mathematica in Education and Research, 1-15.

[27] J. L. Varona, "Graphic and numerical comparison between iterative methods," The Mathematical Intelligencer, vol. 24, no. 1, pp. 37-46, 2002.

[28] B. Ignatova, N. Kyurkchiev, and A. Iliev, "Multipoint algorithms arising from optimal in the sense of Kung-Traub iterative procedures for numerical solution of nonlinear equations," General Mathematics Notes, vol. 6, pp. 45-79, 2011.

[29] F. Soleymani and F. Soleimani, "Novel computational derivative-free methods for simple roots," Fixed Point Theory, vol. 13, no. 1, pp. 247-258, 2012.

[30] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643-651, 1974.

[31] C. McMullen, "Families of rational maps and iterative root-finding algorithms," Annals of Mathematics, vol. 125, no. 3, pp. 467-493, 1987.

[32] S. Smale, "On the efficiency of algorithms of analysis," Bulletin of the American Mathematical Society, vol. 13, no. 2, pp. 87-121, 1985.

[33] S. Amat, S. Busquier, and S. Plaza, "Review of some iterative root-finding methods from a dynamical point of view," Scientia. Series A. Mathematical Sciences, vol. 10, pp. 3-35, 2004.

[34] Z. Liu, Q. Zheng, and P. Zhao, "A variant of Steffensen's method of fourth-order convergence and its applications," Applied Mathematics and Computation, vol. 216, no. 7, pp. 1978-1983, 2010.

[35] A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa, "A family of derivative-free methods with high order of convergence and its application to nonsmooth equations," Abstract and Applied Analysis, vol. 2012, Article ID 836901, 15 pages, 2012.

[36] Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592-9597, 2011.

[37] F. Soleymani, S. Karimi Vanani, and M. J. Paghaleh, "A class of three-step derivative-free root solvers with optimal convergence order," Journal of Applied Mathematics, vol. 2012, Article ID 568740, 15 pages, 2012.

Alicia Cordero, (1) Fazlollah Soleymani, (2) Juan R. Torregrosa, (1) and Stanford Shateyi (3)

(1) Instituto de Matematica Multidisciplinar, Universitat Politecnica de Valencia, 46022 Valencia, Spain

(2) Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran

(3) Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, Private Bag X5050, Thohoyandou 0950, South Africa

Correspondence should be addressed to Stanford Shateyi; stanford.shateyi@univen.ac.za

Received 19 December 2013; Revised 31 January 2014; Accepted 31 January 2014; Published 12 March 2014

Academic Editor: Ioannis K. Argyros

TABLE 1: Results of chaotic comparisons for different derivative-free methods.

| Method | Test 1 | Test 2 | Test 4 | Test 5 | Average |
|---|---|---|---|---|---|
| (10) with β = 1 | 3 | 4 | 3 | 3 | 13/4 |
| (10) with β = -0.001 | 2 | 3 | 2 | 2 | 9/4 |
| (11) with β = 1 | 3 | 4 | 3 | 3 | 13/4 |
| (11) with β = -0.001 | 2 | 3 | 2 | 2 | 9/4 |
| (13) | 4 | 4 | 4 | 3 | 15/4 |
| (15) | 4 | 4 | 4 | 3 | 15/4 |
| (17) with β = 1 | 3 | 3 | 3 | 3 | 12/4 |
| (17) with β = -0.001 | 2 | 2 | 1 | 2 | 7/4 |
| (19) with β = 1 | 2 | 2 | 3 | 3 | 10/4 |
| (19) with β = -0.001 | 1 | 1 | 1 | 1 | 4/4 |

Publication: Journal of Applied Mathematics