
Fixed Point and Newton's Methods in the Complex Plane.

1. Introduction

In this paper, we revisit fixed point and Newton's methods to find a simple solution of a nonlinear equation in the complex plane. This paper is an adapted version of [1] for complex valued functions. We present only the proofs of the theorems that must be modified compared to the real case. We present necessary and sufficient conditions for the convergence of fixed point and Newton's methods. Based on these conditions, we show how to obtain direct processes to recursively increase the order of convergence. For the fixed point method, we present a generalization of Schröder's method of the first kind. Two methods are also presented to increase the order of convergence of Newton's method. One of them coincides with Schröder's process of the second kind, which has several forms in the literature. The link between the two Schröder processes can be found in [2]. As in the real case, we can combine methods to obtain, for example, the super-Halley process of order 3 and other possible higher order generalizations of this process. We refer to [1] for details on this subject.

The plan of the paper is as follows. In Section 2, we recall Taylor's expansions for analytic functions and the error term for truncated expansions. In Section 3, we consider the fixed point method and its necessary and sufficient conditions for convergence. These results lead to a generalization of Schröder's process of the first kind. Section 4 is devoted to Newton's method. Based on the necessary and sufficient conditions, we propose two ways to increase the order of convergence of Newton's method. Examples and numerical experiments are included in Section 5.

2. Analytic Function

Since we are working with complex numbers, we will be dealing with analytic functions. Suppose $g(z)$ is an analytic function and $\alpha$ is a point in its domain. We can write

\[ g^{(k)}(z) = \sum_{j=0}^{+\infty} \frac{g^{(k+j)}(\alpha)}{j!} (z - \alpha)^j, \tag{1} \]

for any $k = 0, 1, \dots$. Then, for $q = 1, 2, \dots$ we have

\[ g^{(k)}(z) = \sum_{j=0}^{q-1} \frac{g^{(k+j)}(\alpha)}{j!} (z - \alpha)^j + w_{g^{(k)},q}(z)\,(z - \alpha)^q, \tag{2} \]

where $w_{g^{(k)},q}(z)$ is the analytic function

\[ w_{g^{(k)},q}(z) = \sum_{j=0}^{+\infty} \frac{g^{(k+q+j)}(\alpha)}{(q+j)!} (z - \alpha)^j. \tag{3} \]

Moreover, the series for $g^{(k)}(z)$ and $w_{g^{(k)},q}(z)$ have the same radius of convergence for any $k$, and

\[ \frac{w_{g^{(k)},q}^{(j)}(\alpha)}{j!} = \frac{g^{(k+q+j)}(\alpha)}{(q+j)!} \tag{4} \]

for $j = 0, 1, 2, \dots$.

3. Fixed Point Method

A fixed point method uses an iteration function (IF), which is an analytic function mapping its domain of definition into itself. Using an IF $\Phi(z)$ and an initial value $z_0$, we are interested in the convergence of the sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$. It is well known that if this sequence converges, it converges to a fixed point of $\Phi(z)$.
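
As a minimal illustration (not taken from the paper), such an iteration can be coded in a few lines. The stopping rule, the tolerance, and the sample IF $\Phi(z) = (2z^3 + 1)/(3z^2)$, whose fixed points are the cube roots of unity used again in Section 5, are our own choices for this sketch.

```python
# Minimal sketch of a fixed point iteration z_{k+1} = Phi(z_k) in the complex plane.
# The particular IF below is only an illustrative choice; its fixed points are the
# cube roots of unity.

def fixed_point(phi, z0, tol=1e-12, max_iter=100):
    """Iterate z_{k+1} = phi(z_k) until two successive iterates are within tol."""
    z = z0
    for k in range(max_iter):
        z_next = phi(z)
        if abs(z_next - z) < tol:
            return z_next, k + 1
        z = z_next
    return z, max_iter

if __name__ == "__main__":
    phi = lambda z: (2 * z**3 + 1) / (3 * z**2)   # fixed points: the cube roots of unity
    root, iterations = fixed_point(phi, z0=0.4 + 1.0j)
    print(root, iterations)   # converges to one of the cube roots of unity
```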

Let $\Phi(z)$ be an IF, $p$ be a positive integer, and $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ be a sequence such that the following limit exists:

\[ K_p(\alpha; \Phi) = \lim_{k \to +\infty} \frac{z_{k+1} - \alpha}{(z_k - \alpha)^p}. \tag{5} \]

Let us observe that for $p_1 < p < p_2$ we have

\[ K_{p_1}(\alpha; \Phi) = 0 \quad \text{and} \quad \lim_{k \to +\infty} \left| \frac{z_{k+1} - \alpha}{(z_k - \alpha)^{p_2}} \right| = +\infty \quad \text{whenever } K_p(\alpha; \Phi) \neq 0. \tag{6} \]

We say that the convergence of the sequence to $\alpha$ is of (integer) order $p$ if and only if $K_p(\alpha; \Phi) \neq 0$, and $K_p(\alpha; \Phi)$ is called the asymptotic constant. We also say that $\Phi(z)$ is of order $p$. If the limit $K_p(\alpha; \Phi)$ exists but is zero, we can say that $\Phi(z)$ is of order at least $p$.

From a numerical point of view, since $\alpha$ is not known, it is useful to define the ratio

\[ \hat{K}_p(\alpha, k) = \frac{z_{k+1} - z_{k+2}}{(z_k - z_{k+1})^p}. \tag{7} \]

Following [3], it can be shown that

\[ \lim_{k \to +\infty} \hat{K}_p(\alpha, k) = K_p(\alpha; \Phi). \tag{8} \]
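
As a small numerical sketch (our own addition, not from the paper), the ratio (7) identifies the order and estimates the modulus of the asymptotic constant without knowing $\alpha$. The test IF below is Newton's IF for $f(z) = z^3 - 1$, an order 2 method treated in Sections 4 and 5; this choice and the starting point are assumptions made only for the illustration.

```python
# Sketch: estimate the order p and the asymptotic constant of an IF with ratio (7),
# K_hat = (z_{k+1} - z_{k+2}) / (z_k - z_{k+1})**p, which does not require alpha.

def iterates(phi, z0, n):
    zs = [z0]
    for _ in range(n):
        zs.append(phi(zs[-1]))
    return zs

phi = lambda z: (2 * z**3 + 1) / (3 * z**2)   # Newton's IF for f(z) = z^3 - 1 (order 2)
zs = iterates(phi, 1.0 + 0.7j, 8)

for p in (1, 2, 3):
    ratios = [(zs[k + 1] - zs[k + 2]) / (zs[k] - zs[k + 1]) ** p for k in range(2, 5)]
    print(p, [abs(r) for r in ratios])
# The p = 2 ratios stabilize near |K_2| = |f''(alpha) / (2 f'(alpha))| = 1, while the
# p = 1 ratios shrink toward 0 and the p = 3 ratios grow, identifying the order as 2.
```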

We say that $\alpha$ is a root of $f(z)$ of multiplicity $q$ if and only if $f^{(j)}(\alpha) = 0$ for $j = 0, \dots, q-1$, and $f^{(q)}(\alpha) \neq 0$. Moreover, $\alpha$ is a root of $f(z)$ of multiplicity $q$ if and only if there exists an analytic function $w_{f,q}(z)$ such that $w_{f,q}(\alpha) = f^{(q)}(\alpha)/q! \neq 0$ and $f(z) = w_{f,q}(z)(z - \alpha)^q$.

We will use the big O notation $g(z) = O(f(z))$ and the small o notation $g(z) = o(f(z))$, around $z = \alpha$, respectively when $c \neq 0$ and when $c = 0$, where

\[ \lim_{z \to \alpha} \frac{g(z)}{f(z)} = c. \tag{9} \]

For $\alpha$ a root of multiplicity $q$ of $f(z)$, it is equivalent to write $g(z) = O(f(z))$ or $g(z) = O((z - \alpha)^q)$. Observe also that if $\alpha$ is a simple root of $f(z)$, then $\alpha$ is a root of multiplicity $q$ of $f^q(z)$. Hence $g(z) = O(f^q(z))$ is equivalent to $g(z) = O((z - \alpha)^q)$.

The first result concerns the necessary and sufficient conditions for achieving linear convergence.

Theorem 1. Let $\Phi(z)$ be an IF, and let $\Phi^{(1)}(z)$ denote its first derivative. Although the first derivative is usually denoted by $\Phi'(z)$, we write $\Phi^{(1)}(z)$ to maintain uniform notation throughout the text.

(i) If $|\Phi^{(1)}(\alpha)| < 1$, then there exists a neighborhood of $\alpha$ such that for any $z_0$ in that neighborhood the sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ converges to $\alpha$.

(ii) If there exists a neighborhood of $\alpha$ such that for any $z_0$ in that neighborhood the sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ converges to $\alpha$, and $z_k \neq \alpha$ for all $k$, then $|\Phi^{(1)}(\alpha)| \leq 1$.

(iii) For any sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ which converges to $\alpha$, the limit $K_1(\alpha; \Phi)$ exists and $K_1(\alpha; \Phi) = \Phi^{(1)}(\alpha)$.

Proof. (i) By continuity, there is a disk $D_\rho(\alpha) = \{z \in \mathbb{C} : |z - \alpha| < \rho\}$ on which $|w_{\Phi,1}(z)| \leq (1 + |\Phi^{(1)}(\alpha)|)/2 = L < 1$. Then if $z_k \in D_\rho(\alpha)$, we have

\[ |z_{k+1} - \alpha| = |\Phi(z_k) - \Phi(\alpha)| = |w_{\Phi,1}(z_k)| \, |z_k - \alpha| \leq L |z_k - \alpha|, \tag{10} \]

and $z_{k+1} \in D_\rho(\alpha)$. Moreover

\[ |z_k - \alpha| \leq L^k |z_0 - \alpha|, \tag{11} \]

and the sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ converges to $\alpha$ because $0 \leq L < 1$.

(ii) If $|\Phi^{(1)}(\alpha)| > 1$, there exists a disk $D_\rho(\alpha)$, with $\rho > 0$, on which $|w_{\Phi,1}(z)| > (1 + |\Phi^{(1)}(\alpha)|)/2 = L > 1$. Let us suppose that the sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ is such that $z_k \neq \alpha$ for all $k$. If $z_k$ and $z_{k+1} \in D_\rho(\alpha)$, then we have

\[ |z_{k+1} - \alpha| = |w_{\Phi,1}(z_k)| \, |z_k - \alpha| \geq L |z_k - \alpha|. \tag{12} \]

Let $0 < \epsilon < \rho$, and suppose $z_k, z_{k+1}, \dots, z_{k+l}$ are in $D_\epsilon(\alpha) \subset D_\rho(\alpha)$. Because

\[ |z_{k+l} - \alpha| \geq L^l |z_k - \alpha|, \tag{13} \]

eventually $L^l |z_k - \alpha| \geq \epsilon$ and $z_{k+l} \notin D_\epsilon(\alpha)$. Then the infinite sequence cannot converge to $\alpha$.

(iii) For any sequence $\{z_{k+1} = \Phi(z_k)\}_{k=0}^{+\infty}$ which converges to $\alpha$ we have

\[ K_1(\alpha; \Phi) = \lim_{k \to +\infty} \frac{z_{k+1} - \alpha}{z_k - \alpha} = \lim_{k \to +\infty} w_{\Phi,1}(z_k) = w_{\Phi,1}(\alpha) = \Phi^{(1)}(\alpha). \tag{14} \]

For higher order convergence we have the following result about necessary and sufficient conditions.

Theorem 2. Let $p$ be an integer $\geq 2$ and let $\Phi(z)$ be an analytic function such that $\Phi(\alpha) = \alpha$. The IF $\Phi(z)$ is of order $p$ if and only if $\Phi^{(j)}(\alpha) = 0$ for $j = 1, \dots, p-1$, and $\Phi^{(p)}(\alpha) \neq 0$. Moreover, the asymptotic constant is given by

\[ K_p(\alpha; \Phi) = \frac{\Phi^{(p)}(\alpha)}{p!}. \tag{15} \]

Proof. (i) The (local) convergence is given by part (i) of Theorem 1. Moreover we have

\[ z_{k+1} - \alpha = \Phi(z_k) - \Phi(\alpha) = w_{\Phi,p}(z_k)(z_k - \alpha)^p, \tag{16} \]

and hence

\[ K_p(\alpha; \Phi) = \lim_{k \to +\infty} \frac{z_{k+1} - \alpha}{(z_k - \alpha)^p} = \lim_{k \to +\infty} w_{\Phi,p}(z_k) = w_{\Phi,p}(\alpha) = \frac{\Phi^{(p)}(\alpha)}{p!}. \tag{17} \]

(ii) If the IF is of order $p \geq 2$, assume that $\Phi^{(j)}(\alpha) = 0$ for $j = 1, 2, \dots, l-1$ with $l < p$. We have

\[ z_{k+1} - \alpha = \Phi(z_k) - \Phi(\alpha) = w_{\Phi,l}(z_k)(z_k - \alpha)^l, \tag{18} \]

where

\[ w_{\Phi,l}(\alpha) = \frac{\Phi^{(l)}(\alpha)}{l!}. \tag{19} \]

But

\[ w_{\Phi,l}(z_k) = \frac{z_{k+1} - \alpha}{(z_k - \alpha)^l} = \frac{z_{k+1} - \alpha}{(z_k - \alpha)^p}\,(z_k - \alpha)^{p-l}, \tag{20} \]

and hence

\[ \frac{\Phi^{(l)}(\alpha)}{l!} = w_{\Phi,l}(\alpha) = \lim_{k \to +\infty} w_{\Phi,l}(z_k) = K_p(\alpha; \Phi) \lim_{k \to +\infty} (z_k - \alpha)^{p-l} = 0. \tag{21} \]

So $\Phi^{(l)}(\alpha) = 0$.

It follows that, for an analytic IF and $p > 2$, the limit $K_p(\alpha; \Phi)$ exists if and only if $K_l(\alpha; \Phi) = 0$ for $l = 1, \dots, p-1$.

As a consequence, for an analytic IF $\Phi(z)$ we can say that (a) $\Phi(z)$ is of order $p$ if and only if $\Phi(z) = \alpha + O((z - \alpha)^p)$, or, equivalently, if $\Phi(\alpha) = \alpha$ and $\Phi^{(1)}(z) = O((z - \alpha)^{p-1})$, and (b) if $\alpha$ is a simple root of $f(z)$, then $\Phi(z)$ is of order $p$ if and only if $\Phi(z) = \alpha + O(f^p(z))$, or, equivalently, if $\Phi(\alpha) = \alpha$ and $\Phi^{(1)}(z) = O(f^{p-1}(z))$.

Schröder's process of the first kind is a systematic and recursive way to construct an IF of arbitrary order $p$ to find a simple zero $\alpha$ of $f(z)$. The IF has to fulfill at least the sufficient condition of Theorem 2. Let us present a generalization of this process.

Theorem 3 (see [1]). Let $\alpha$ be a simple root of $f(z)$, and let $c_0(z)$ be an analytic function such that $c_0(\alpha) = \alpha$. Let $\Phi_p(z)$ be the IF defined by the finite series

\[ \Phi_p(z) = \sum_{l=0}^{p-1} c_l(z) f^l(z), \tag{22} \]

where the $c_l(z)$ are such that

\[ c_l(z) = -\frac{1}{l} \left( \frac{1}{f^{(1)}(z)} \frac{d}{dz} \right) c_{l-1}(z) \tag{23} \]

for $l = 1, 2, \dots$. Then $\Phi_p(z)$ is of order $p$, and its asymptotic constant is

\[ K_p(\alpha; \Phi_p) = -c_p(\alpha) \left[ f^{(1)}(\alpha) \right]^p, \tag{24} \]

where $c_p(z)$ is given by (23) with $l = p$.

For $c_0(z) = z$ in (22), we recover Schröder's process of the first kind of order $p$ [4-7], which is also associated with Chebyshev and Euler [8-10]. The first term $c_0(z)$ could be seen as a preconditioning to decrease the asymptotic constant of the method, but its choice is not obvious.
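
The recursion (22)-(23) is straightforward to implement in a computer algebra system. The following sketch (our own, using SymPy; the function name is ours) builds $\Phi_p(z)$ for a given $c_0(z)$ and $f(z)$; with $c_0(z) = z$ it reproduces Newton's IF for $p = 2$ and Chebyshev's IF for $p = 3$.

```python
# Sketch: build the generalized Schroder IF of the first kind, equations (22)-(23),
# symbolically. With c0(z) = z one recovers the classical process (Newton for p = 2,
# Chebyshev for p = 3).
import sympy as sp

z = sp.symbols("z")

def schroder_first_kind(f, c0, p):
    """Return Phi_p(z) = sum_{l=0}^{p-1} c_l(z) f(z)**l with the c_l given by (23)."""
    f1 = sp.diff(f, z)
    c = c0
    phi = c                                           # l = 0 term
    for l in range(1, p):
        c = -sp.Rational(1, l) * sp.diff(c, z) / f1   # recursion (23)
        phi += c * f**l
    return sp.simplify(phi)

f = z**3 - 1                                    # the example treated in Section 5
print(schroder_first_kind(f, z, 2))             # Newton's IF: (2*z**3 + 1)/(3*z**2)
print(schroder_first_kind(f, z, 3))             # Chebyshev's IF of order 3
```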

4. Newton's Iteration Function

Considering $c_0(z) = z$ and $p = 2$ in (22), we obtain

\[ \Phi_2(z) = z - \frac{f(z)}{f^{(1)}(z)}, \tag{25} \]

which is Newton's IF of order 2 to solve $f(z) = 0$. The sufficiency and the necessity of the condition for high-order convergence of Newton's method are presented in the next result.

Theorem 4. Let $p \geq 2$ and let $\Psi(z)$ be an analytic function such that $\Psi(\alpha) = 0$ and $\Psi^{(1)}(\alpha) \neq 0$. The Newton iteration $N_\Psi(z) = z - \Psi(z)/\Psi^{(1)}(z)$ is of order $p$ if and only if $\Psi^{(j)}(\alpha) = 0$ for $j = 2, \dots, p-1$, and $\Psi^{(p)}(\alpha) \neq 0$. Moreover, the asymptotic constant is

\[ K_p(\alpha; N_\Psi) = \frac{p-1}{p!} \frac{\Psi^{(p)}(\alpha)}{\Psi^{(1)}(\alpha)}. \tag{26} \]

Proof. (i) If $\Psi^{(j)}(\alpha) = 0$ for $j = 2, \dots, p-1$, and $\Psi^{(p)}(\alpha) \neq 0$ we have

\[ \Psi(z) = \Psi^{(1)}(\alpha)(z - \alpha) + w_{\Psi,p}(z)(z - \alpha)^p, \quad w_{\Psi,p}(\alpha) = \frac{\Psi^{(p)}(\alpha)}{p!}. \tag{27} \]

But

\[ \Psi^{(1)}(z) = \Psi^{(1)}(\alpha) + w_{\Psi^{(1)},p-1}(z)(z - \alpha)^{p-1}, \quad w_{\Psi^{(1)},p-1}(\alpha) = \frac{\Psi^{(p)}(\alpha)}{(p-1)!}. \tag{28} \]

It follows that

\[ N_\Psi(z) - \alpha = (z - \alpha) - \frac{\Psi(z)}{\Psi^{(1)}(z)} = \frac{w_{\Psi^{(1)},p-1}(z) - w_{\Psi,p}(z)}{\Psi^{(1)}(z)} (z - \alpha)^p, \tag{29} \]

so

\[ K_p(\alpha; N_\Psi) = \lim_{z \to \alpha} \frac{N_\Psi(z) - \alpha}{(z - \alpha)^p} = \frac{w_{\Psi^{(1)},p-1}(\alpha) - w_{\Psi,p}(\alpha)}{\Psi^{(1)}(\alpha)} = \frac{p-1}{p!} \frac{\Psi^{(p)}(\alpha)}{\Psi^{(1)}(\alpha)}. \tag{30} \]

(ii) Conversely, if $N_\Psi(z)$ is of order $p$ we have $N_\Psi^{(j)}(\alpha) = 0$ for $j = 1, \dots, p-1$, and $N_\Psi^{(p)}(\alpha) \neq 0$. Hence $\alpha$ is a root of multiplicity $p-1$ of $N_\Psi^{(1)}(z)$ and we can write

\[ N_\Psi^{(1)}(z) = w_{N_\Psi^{(1)},p-1}(z)(z - \alpha)^{p-1}. \tag{31} \]

We also have

\[ \Psi(z) = w_{\Psi,1}(z)(z - \alpha). \tag{32} \]

But

\[ N_\Psi^{(1)}(z) = \frac{\Psi(z)\Psi^{(2)}(z)}{\left[\Psi^{(1)}(z)\right]^2}, \tag{33} \]

so we obtain

\[ \Psi^{(2)}(z) = \frac{\left[\Psi^{(1)}(z)\right]^2}{w_{\Psi,1}(z)} \, w_{N_\Psi^{(1)},p-1}(z) \, (z - \alpha)^{p-2}, \tag{34} \]

where

\[ \frac{\left[\Psi^{(1)}(\alpha)\right]^2}{w_{\Psi,1}(\alpha)} \, w_{N_\Psi^{(1)},p-1}(\alpha) \neq 0. \tag{35} \]

It follows that $\alpha$ is a root of multiplicity $p-2$ of $\Psi^{(2)}(z)$. Hence $\Psi^{(j)}(\alpha) = 0$ for $j = 2, \dots, p-1$, and $\Psi^{(p)}(\alpha) \neq 0$.

We can look for a recursive method to construct a function $\Psi_p(z)$ which will satisfy the conditions of Theorem 4. A consequence will be that $N_{\Psi_p}(z) = z - \Psi_p(z)/\Psi_p^{(1)}(z)$ will be of order $p$, with asymptotic constant $K_p(\alpha; N_{\Psi_p})$ given by (26). A first method has been presented in [11, 12]. The technique can also be based on Taylor's expansion as indicated in [13].

Theorem 5 (see [11]). Let $f(z)$ be analytic such that $f(\alpha) = 0$ and $f^{(1)}(\alpha) \neq 0$. If $F_p(z)$ is defined by

[mathematical expression not reproducible], (36)

then $F_p(\alpha) = 0$, $F_p^{(1)}(\alpha) \neq 0$, and $F_p^{(l)}(\alpha) = 0$ for $l = 2, \dots, p-1$. It follows that $N_{F_p}(z) = z - F_p(z)/F_p^{(1)}(z)$ is of order at least $p$.

Let us observe that, in this theorem, the method seems to depend on the choice of a branch for the $(p-1)$th root function. In fact, the Newton iteration function does not depend on this choice, because we have

[mathematical expression not reproducible]. (37)

The next theorem shows that choosing a branch for the $(p-1)$th root function is indeed not necessary.

Theorem 6 (see [12]). Let $F_p(z)$ be given by (36); one can also write

[mathematical expression not reproducible], (38)

where

[mathematical expression not reproducible]. (39)

Unfortunately, there is no general closed-form expression for the asymptotic constant $K_p(\alpha; N_{F_p})$, even though this limit exists. However, the asymptotic constant can be numerically estimated with (7).

A second method to construct a function $\Psi_p(z)$ which will satisfy the conditions of Theorem 4 is given in the next theorem.

Theorem 7 (see [1]). Let $\alpha$ be a simple root of $f(z)$. Let $\Psi_p(z)$ be defined by

\[ \Psi_p(z) = \sum_{l=0}^{p-1} d_l(z) f^l(z), \tag{40} \]

where $d_0(z)$ and $d_1(z)$ are two analytic functions such that

\[ d_0(\alpha) = 0, \qquad d_0^{(1)}(\alpha) + d_1(\alpha) f^{(1)}(\alpha) \neq 0, \tag{41} \]

[mathematical expression not reproducible] (42)

for l = 2,3, .... Then

\[ N_{\Psi_p}(z) = z - \frac{\Psi_p(z)}{\Psi_p^{(1)}(z)} \tag{43} \]

is of order p, with

[mathematical expression not reproducible]. (44)

Let us observe that if we set $\Psi_p(z) = \Phi_p(z) - z$ with $\Phi_p(z)$ given by (22), then $\Psi_p(z)$ verifies the assumptions of Theorem 7.

Remark 8. For a given pair of $d_0(z)$ and $d_1(z)$ in Theorem 7, the linearity of expression (42) with respect to $d_0(z)$ and $d_1(z)$ in the computation of the $d_l(z)$'s allows us to split the computation of $\Psi_p(z)$ in two: one computation for the pair $d_0(z)$ and $d_1(z) = 0$, and another for the pair $d_0(z) = 0$ and $d_1(z)$; the two resulting $\Psi_p(z)$'s are then added.

5. Examples

Let us consider the problem of finding the 3rd roots of unity:

\[ \alpha_k = e^{2(k-1)\pi i/3} \quad \text{for } k = 0, 1, 2, \tag{45} \]

for which we have $\alpha_k^3 = 1$. Hence we would like to solve

\[ f(z) = 0 \tag{46} \]

for

\[ f(z) = z^3 - 1. \tag{47} \]

As examples of the preceding results, we present methods of orders 2 and 3 obtained from Theorems 3, 5, and 7. For each method, we also present the basins of attraction of the roots.

The drawing process for the basins of attraction follows Varona [14]. Typically, for the upcoming figures, drawn over the square $[-2.5, 2.5]^2$, we assign a color to the attraction basin of each root. That is, we color a point depending on the root it reaches within a fixed number of iterations (here 25) and to a certain precision (here $10^{-3}$). If after 25 iterations the point does not lie within $10^{-3}$ of any root, we assign to it a very dark shade of purple. The more dark purple a figure contains, the more points have failed to achieve the required precision within the predetermined number of iterations.
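
A sketch of this drawing procedure (our own implementation with NumPy and Matplotlib; the grid resolution and the colormap are arbitrary choices) is given below for Newton's IF applied to $z^3 - 1$; any of the iteration functions of this section can be substituted for phi.

```python
# Sketch of the basin-drawing procedure described above: iterate an IF at most 25 times
# on a grid over the square [-2.5, 2.5]^2, color each point according to the root it
# approaches within 1e-3, and use the darkest color when no root is reached.
import numpy as np
import matplotlib.pyplot as plt

roots = np.array([np.exp(2j * np.pi * k / 3) for k in range(3)])   # cube roots of unity
phi = lambda z: (2 * z**3 + 1) / (3 * z**2)                        # Newton's IF for z^3 - 1

n = 600                                        # grid resolution (our choice)
x = np.linspace(-2.5, 2.5, n)
Z = x[None, :] + 1j * x[:, None]
labels = np.full(Z.shape, -1)                  # -1 = not yet attributed to any root

for _ in range(25):                            # at most 25 iterations
    with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
        Z = phi(Z)
    for i, r in enumerate(roots):
        labels[(labels == -1) & (np.abs(Z - r) < 1e-3)] = i

# points still labeled -1 failed to reach a root and get the darkest color of the map
plt.imshow(labels, extent=(-2.5, 2.5, -2.5, 2.5), origin="lower", cmap="viridis")
plt.title("Basins of attraction of Newton's IF for $z^3 - 1$ (sketch)")
plt.show()
```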

5.1. Examples for Theorem 3. We start with iterative methods of order 2. From Theorem 3, we first need $c_0(\alpha) = \alpha$. The simplest such function is $c_0(z) = z$. This choice has the advantage that all derivatives of $c_0(z)$ of order higher than one are 0, which simplifies further computations. It is in fact the choice of $c_0(z)$ which leads to Newton's method and to the Chebyshev family of iterative methods. It is, however, generally possible to consider other choices of $c_0(z)$, although most of them might not be numerically convenient, as we will illustrate here. Since we need $c_0(\alpha) = \alpha$, we can also look at $c_0(z) = z a(z)$ where $a(\alpha) = 1$. In the examples that follow we consider such functions $a(z)$.

In Table 1, we consider 3 functions of this kind and give the explicit expressions obtained for $f(z) = z^3 - 1$. Figure 1 presents the basins of attraction for these methods. We observe that some of them contain a lot of purple points.

Now let us consider the method of order 3 with $c_0(z) = z^{3m+1}$, $m \in \mathbb{Z}$. In this case we obtain

\[ \Phi_3(z) = z^{3m+1} - \frac{3m+1}{3}\, z^{3m-2} (z^3 - 1) + \frac{(3m+1)(3m-2)}{18}\, z^{3m-5} (z^3 - 1)^2, \tag{48} \]

and its asymptotic constant is

\[ K_3(\alpha; \Phi_3) = \frac{(3m+1)(3m-2)(3m-5)}{6}\, \alpha. \tag{49} \]

Examples of basins of attraction are given in Figure 2 for $m = 0, 1, 2$. The asymptotic constant of smallest modulus is obtained for $m = 1$.

5.2. Examples for Theorem 5. Gerlach's process described in Theorems 5 and 6 leads to Newton's method for $p = 2$ and to Halley's method for $p = 3$. For our problem we have

\[ N_{F_2}(z) = \frac{2z^3 + 1}{3z^2}, \qquad N_{F_3}(z) = \frac{z(z^3 + 2)}{2z^3 + 1}. \tag{50} \]

These are well known standard methods. For comparison, their basins of attraction are given in Figure 3.
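
It is classical that Halley's IF coincides with Newton's IF applied to $f(z)/\sqrt{f^{(1)}(z)}$, in line with the fact that Gerlach's process yields Halley's method for $p = 3$. The sketch below (our own, with SymPy) checks this symbolically for $f(z) = z^3 - 1$ and recovers the explicit forms displayed in (50).

```python
# Sketch: for f(z) = z^3 - 1, apply Newton's IF to f and to f / sqrt(f'), a classical
# way of producing Halley's IF, and compare with the explicit forms given in (50).
import sympy as sp

z = sp.symbols("z")

def newton_if(g):
    """Newton's iteration function z - g(z)/g'(z)."""
    return sp.simplify(z - g / sp.diff(g, z))

f = z**3 - 1
N_f = newton_if(f)                                   # order 2 (Newton)
N_halley = newton_if(f / sp.sqrt(sp.diff(f, z)))     # order 3 (Halley)

print(sp.simplify(N_f - (2 * z**3 + 1) / (3 * z**2)))           # expected: 0
print(sp.simplify(N_halley - z * (z**3 + 2) / (2 * z**3 + 1)))  # expected: 0
```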

5.3. Examples for Theorem 7. To illustrate Theorem 7, we set $d_0(z) = 0$ and $d_1(z) = z^k$ for $k \in \mathbb{Z}$, and we consider methods of orders 2 and 3 to solve $z^3 - 1 = 0$. Table 2 presents, for $p = 2, 3$, the functions $\Psi_p(z)$, the resulting iteration functions $N_{\Psi_p}(z)$, the coefficients $d_p(z)$, and the asymptotic constants for this example.

We observe that the asymptotic constant of the method of order 2 for $k = -1$ is zero; this means that this method is of an order of convergence higher than 2, and in fact it corresponds to Halley's method, which is of order 3. We also observe that the methods of order 3 for $k = -1$ and $k = 2$ both correspond to Halley's method for our specific problem. Examples of basins of attraction are given in Figure 4 for methods of order 2 and in Figure 5 for methods of order 3, using the values $k = -2, -1, 0, 1, 2, 3$.
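
The entries of Table 2 are easy to check symbolically. The following sketch (our own, with SymPy) recomputes the order 2 iteration obtained from $\Psi_2(z) = z^k(z^3 - 1)$ for the values of $k$ used in the figures and confirms that $k = -1$ reproduces Halley's IF.

```python
# Sketch: check the order 2 row of Table 2, Psi_2(z) = z^k (z^3 - 1), for the integer
# values of k used in Figures 4 and 5, and the claim that k = -1 gives Halley's IF.
import sympy as sp

z = sp.symbols("z")

def newton_if(psi):
    return sp.simplify(z - psi / sp.diff(psi, z))

for k in range(-2, 4):                       # k = -2, -1, 0, 1, 2, 3
    psi2 = z**k * (z**3 - 1)
    table_entry = z * ((k + 2) * z**3 - (k - 1)) / ((k + 3) * z**3 - k)
    assert sp.simplify(newton_if(psi2) - table_entry) == 0   # matches Table 2

# For k = -1 the asymptotic constant (k + 1) alpha^2 vanishes and the iteration is in
# fact Halley's IF z (z^3 + 2) / (2 z^3 + 1), of order 3.
print(newton_if(z**(-1) * (z**3 - 1)))
```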

6. Concluding Remarks

In this paper we have presented fixed point and Newton's methods to compute a simple root of a nonlinear analytic function in the complex plane. We have pointed out that the usual sufficient conditions for convergence are also necessary. Based on these conditions for high-order convergence, we have revisited and extended both Schröder's methods of the first and second kind. Numerical examples are given to illustrate the basins of attraction when we compute the third roots of unity. It might be interesting to study the relationship, if there is any, between the asymptotic constant and the basin of attraction for such methods.

https://doi.org/10.1155/2018/7289092

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been financially supported by an individual discovery grant from NSERC (Natural Sciences and Engineering Research Council of Canada) and a grant from ISM (Institut des Sciences Mathematiques).

References

[1] F. Dubeau and C. Gnang, "Fixed point and Newton's methods for solving a nonlinear equation: from linear to high-order convergence," SIAM Review, vol. 56, no. 4, pp. 691-708, 2014.

[2] F. Dubeau, "Polynomial and rational approximations and the link between Schroder's processes of the first and second kind," Abstract and Applied Analysis, vol. 2014, Article ID 719846, 5 pages, 2014.

[3] F. Dubeau, "On comparisons of Chebyshev-Halley iteration functions based on their asymptotic constants," International Journal of Pure and Applied Mathematics, vol. 85, no. 5, pp. 965-981, 2013.

[4] E. Schröder, "Ueber unendlich viele Algorithmen zur Auflösung der Gleichungen," Mathematische Annalen, vol. 2, no. 2, pp. 317-365, 1870.

[5] E. Schröder, "On Infinitely Many Algorithms for Solving Equations," in Institute for Advanced Computer Studies, G. W. Stewart, Ed., pp. 92-121, University of Maryland, 1992.

[6] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.

[7] A. S. Householder, The Numerical Treatment of a Single Nonlinear Equation, McGraw-Hill Book Co., NY, USA, 1970.

[8] E. Bodewig, "On types of convergence and on the behavior of approximations in the neighborhood of a multiple root of an equation," Quarterly of Applied Mathematics, vol. 7, pp. 325-333, 1949.

[9] M. Shub and S. Smale, "Computational complexity. On the geometry of polynomials and a theory of cost," Annales Scientifiques de l'École Normale Supérieure, Quatrième Série, vol. 18, no. 1, pp. 107-142, 1985.

[10] M. Petković and D. Herceg, "On rediscovered iteration methods for solving equations," Journal of Computational and Applied Mathematics, vol. 107, no. 2, pp. 275-284, 1999.

[11] J. Gerlach, "Accelerated convergence in Newton's method," SIAM Review, vol. 36, no. 2, pp. 272-276, 1994.

[12] W. F. Ford and J. A. Pennline, "Accelerated convergence in Newton's method," SIAM Review, vol. 38, no. 4, pp. 658-659, 1996.

[13] F. Dubeau, "On the modified Newton's method for multiple root," Journal of Mathematical Analysis, vol. 4, no. 2, pp. 9-15, 2013.

[14] J. L. Varona, "Graphic and numerical comparison between iterative methods," The Mathematical Intelligencer, vol. 24, no. 1, pp. 37-46, 2002.

François Dubeau and Calvin Gnang

Département de Mathématiques, Faculté des Sciences, Université de Sherbrooke, 2500 Boul. de l'Université, Sherbrooke, QC, Canada J1K 2R1

Correspondence should be addressed to Calvin Gnang; calvin.gnang@usherbrooke.ca

Received 29 August 2017; Accepted 12 December 2017; Published 29 January 2018

Academic Editor: Daniel Girela

Caption: Figure 1: Basins of attraction for methods of order 2 of Table 1.

Caption: Figure 2: Methods of order 3 for computing the cubic root with $c_0(z) = z^{3m+1}$ for $m = 0, 1, 2$.

Caption: Figure 3: First two methods for computing the third root with Theorem 5.

Caption: Figure 4: Methods of order 2 to illustrate Theorem 7.

Caption: Figure 5: Methods of order 3 to illustrate Theorem 7.
Table 1: Methods of order 2 based on Theorem 3.

(i) $c_0(z) = z\, z^{3m}$ ($m \in \mathbb{Z}$):
    $\Phi_2(z) = \dfrac{\left[(3m+1) - (3m-2)z^3\right] z^{3m-2}}{3}$,
    $K_2(\alpha; \Phi_2) = -\dfrac{(3m+1)(3m-2)}{2}\,\alpha^2$.

(ii) $c_0(z) = z \exp(z^3 - 1)$:
    $\Phi_2(z) = \dfrac{1 + 5z^3 - 3z^6}{3z^2}\, \exp(z^3 - 1)$,
    $K_2(\alpha; \Phi_2) = -\dfrac{13}{2}\,\alpha^2$.

(iii) $c_0(z) = z \cos(z^3 - 1)$:
    $\Phi_2(z) = \dfrac{2z^3 + 1}{3z^2}\, \cos(z^3 - 1) + z(z^3 - 1)\sin(z^3 - 1)$,
    $K_2(\alpha; \Phi_2) = \dfrac{11}{2}\,\alpha^2$.

Table 2: Methods of orders 2 and 3 based on Theorem 7.

$p = 2$:
    $\Psi_2(z) = z^k (z^3 - 1)$,
    $N_{\Psi_2}(z) = \dfrac{z\left[(k+2)z^3 - (k-1)\right]}{(k+3)z^3 - k}$,
    $d_2(z) = -\dfrac{k+1}{3}\, z^{k-3}$,
    $K_2(\alpha; N_{\Psi_2}) = (k+1)\,\alpha^2$.

$p = 3$:
    $\Psi_3(z) = \dfrac{z^k}{3}\left[(2-k)z^3 + (2k-1) - (k+1)z^{-3}\right]$,
    $N_{\Psi_3}(z) = \dfrac{z\left[(k+2)(2-k)z^3 + (k-1)(2k-1) - (k-4)(k+1)z^{-3}\right]}{(k+3)(2-k)z^3 + k(2k-1) - (k-3)(k+1)z^{-3}}$,
    $d_3(z) = \dfrac{3k^2 - 3k - 8}{54 z^2}\, z^{k-4}$,
    $K_3(\alpha; N_{\Psi_3}) = -\dfrac{3k^2 - 3k - 8}{3}\,\alpha$.