
A new iterative method for finding approximate inverses of complex matrices.

1. Introduction

The solution of linear algebraic systems of the form $Ax = b$, where $A = [a_{i,j}]$ is an $m \times m$ matrix (all matrices in this paper are of this dimension unless clearly stated otherwise) and $b$ is a given vector, is central to many numerical simulations and is often the most time-consuming part of a computation. Direct methods, which essentially work by finding the inverse of the coefficient matrix $A$, are very robust, and they tend to require a predictable amount of resources in terms of time and storage. Their main drawback lies in their heavy demands on time and memory, which normally rule them out, especially when $A$ is sparse.

Iterative methods then come into consideration. Relaxation and Krylov subspace methods are of this type and are reliable for solving large scale sparse systems [1]. On the other hand, there is another class of iterative methods, called Schulz-type iterations in the literature (see, e.g., [2]). These techniques are based on finding robust approximate inverses of a given matrix.

The oldest technique of this type is the Schulz method [3] defined by

$$V_{n+1} = V_n(2I - AV_n), \quad n = 0, 1, 2, \ldots, \tag{1}$$

where $I$ is the identity matrix. In the general case, it is known to converge with $V_0 := \alpha A^*$, where $0 < \alpha < 2/\rho(A^*A)$ and $\rho(\cdot)$ denotes the spectral radius. Such schemes are also useful for sensitivity analysis when accurate approximate inverses are needed for both square and rectangular matrices. Notice that, for rectangular matrices, one may obtain their generalized inverse using such iterative methods [4].
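To fix ideas, the scheme (1) can be sketched in a few lines of MATHEMATICA, the system used for the experiments in Section 4. The following is a minimal illustration, not code from the original paper; the names schulzInverse, tol, and maxIter are ours, and the initial value is the choice $V_0 = A^*/(\|A\|_1\|A\|_\infty)$ of [11], which satisfies the above bound since $\rho(A^*A) = \|A\|_2^2 \le \|A\|_1\|A\|_\infty$.

  (* Sketch of the Schulz iteration (1); illustrative names, not from the paper. *)
  schulzInverse[a_?MatrixQ, tol_ : 10.^-8, maxIter_ : 100] :=
   Module[{id = IdentityMatrix[Length[a]], v, n = 0},
    v = ConjugateTranspose[a]/(Norm[a, 1] Norm[a, Infinity]); (* V0, cf. [11] *)
    While[Norm[id - a.v, "Frobenius"] > tol && n < maxIter,
     v = v.(2 id - a.v); n++];  (* V_{n+1} = V_n (2I - A V_n) *)
    v]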

These solvers are also the method of choice in certain areas such as circuits, power system networks, and chemical plant modeling [5]. A practical application of Schulz-type methods, which has recently attracted the attention of numerical analysts [6], is preconditioning. In fact, starting from a suitable initial value [7], one can produce approximate inverse preconditioners up to any desired accuracy and then solve the preconditioned linear system as rapidly as possible.

Hence, in this work we are interested in deriving a new iterative method, belonging to the class of Schulz-type methods, for finding approximate inverses.

Remark 1. We use the matrix 2-norm in all subsequent derivations and discussions unless another norm is clearly stated.

The rest of this paper is organized as follows. In the next section, we briefly review some of the existing Schulz-type methods and provide a new mathematical proof for one of the unproved iterative methods. Another contribution of this paper is presented in Section 3, which is also devoted to the analysis of convergence. Section 4 covers the numerical simulations, along with some notes for obtaining the best results in practical implementations. Finally, concluding remarks are drawn in Section 5.

2. Preliminaries

Li et al. in [6] presented

$$V_{n+1} = V_n(3I - AV_n(3I - AV_n)), \quad n = 0, 1, 2, \ldots, \tag{2}$$

and also proposed another third-order iterative method for finding $A^{-1}$, as follows (right-product form):

$$V_{n+1} = \left[I + \frac{1}{4}(I - V_nA)(3I - V_nA)^2\right]V_n, \quad n = 0, 1, 2, \ldots. \tag{3}$$

W. Li and Z. Li in [8] proposed the following fourth-order iteration method:

$$V_{n+1} = V_n(4I - 6AV_n + 4(AV_n)^2 - (AV_n)^3), \quad n = 0, 1, 2, \ldots. \tag{4}$$

In fact, a whole family of methods was developed in [8]. We draw attention to the point that the iterative methods (2) and (4) can also be found in the textbook [9]. Moreover, we suggest that the iterative method (3) can be rewritten as (left-product form)

$$V_{n+1} = \frac{1}{4}V_n(13I - AV_n(15I - AV_n(7I - AV_n))). \tag{5}$$

This structure is preferable due to its Horner-like nested multiplications and, consequently, it lowers round-off errors by avoiding the explicit computation of matrix powers, which is costly. It should be remarked that the authors in [6] did not provide a mathematical proof for (3) or its equivalent form (5). Thus, herein we first prove its order of convergence, so as to complete the paper [6].
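As an illustrative sketch (with the hypothetical name step5, not from the original), one step of (5) then reads as follows; it costs four matrix-matrix products, namely $\psi_n = AV_n$ plus the three nested multiplications.

  (* One step of (5) in left-product Horner form; four matrix products in total. *)
  step5[a_?MatrixQ, v_?MatrixQ] :=
   Module[{id = IdentityMatrix[Length[a]], psi = a.v},
    (1/4) v.(13 id - psi.(15 id - psi.(7 id - psi)))]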

Theorem 2. Assume that $A = [a_{i,j}]_{m \times m}$ is an invertible matrix with real or complex entries. If the initial value $V_0$ satisfies

$$\|I - AV_0\| < 1, \tag{6}$$

then the iteration (5) converges cubically to $A^{-1}$.

Proof. In order to prove the convergence of (5), we first set $E_0 = I - AV_0$ with $\|E_0\| < 1$, and $E_n = I - AV_n$. For (5), we get that

$$E_{n+1} = I - AV_{n+1} = \frac{1}{4}\left(3E_n^3 + E_n^4\right). \tag{7}$$

Thus, we obtain $\|E_{n+1}\| = \frac{1}{4}\|3E_n^3 + E_n^4\| \le \frac{1}{4}\left(3\|E_n\|^3 + \|E_n\|^4\right) \le \|E_n\|^3$. Moreover, since $\|E_0\| < 1$ and $\|E_1\| \le \|E_0\|^3 < 1$, we get that

$$\|E_{n+1}\| \le \|E_n\|^3 \le \|E_{n-1}\|^{3^2} \le \cdots \le \|E_0\|^{3^{n+1}}, \tag{8}$$

where the right-hand side of (8) tends to zero as $n \to \infty$. That is, $I - AV_n \to 0$ as $n \to \infty$, and thus, for (5), we attain $V_n \to A^{-1}$ as $n \to \infty$. This shows the convergence.

Now, we must show its third order of convergence. Toward this end, we denote by $\varepsilon_n = A^{-1} - V_n$ the error matrix of the iterative procedure (5). We have (using a similar methodology as in [10])

$$I - AV_{n+1} = \frac{1}{4}\left[3(I - AV_n)^3 + (I - AV_n)^4\right]. \tag{9}$$

We can now obtain

$$A(A^{-1} - V_{n+1}) = \frac{1}{4}\left[3\left(A(A^{-1} - V_n)\right)^3 + \left(A(A^{-1} - V_n)\right)^4\right], \tag{10}$$

$$A\varepsilon_{n+1} = \frac{1}{4}\left[3(A\varepsilon_n)^3 + (A\varepsilon_n)^4\right]. \tag{11}$$

Equation (11) results in

$$A\varepsilon_{n+1} = \frac{1}{4}\left[3(A\varepsilon_n)(A\varepsilon_n)^2 + (A\varepsilon_n)(A\varepsilon_n)^3\right], \tag{12}$$

and subsequently

$$\varepsilon_{n+1} = \frac{1}{4}\left[3\varepsilon_n(A\varepsilon_n)^2 + \varepsilon_n(A\varepsilon_n)^3\right], \tag{13}$$

which, by taking norms of both sides, yields

$$\|\varepsilon_{n+1}\| \le \frac{1}{4}\left[3\|\varepsilon_n\|\,\|A\varepsilon_n\|^2 + \|\varepsilon_n\|\,\|A\varepsilon_n\|^3\right], \tag{14}$$

and consequently

$$\|\varepsilon_{n+1}\| \le \frac{1}{4}\left[3\|A\|^2 + \|A\|^3\|\varepsilon_n\|\right]\|\varepsilon_n\|^3. \tag{15}$$

This reveals that the iterative method (5) converges to $A^{-1}$ with at least third order of convergence. The proof is complete.

3. A Novel Method

Let $I$ be the identity matrix and $n = 0, 1, 2, \ldots$. We aim at constructing an iterative method in which the sequence of iterates $\{V_n\}_{n=0}^{\infty}$ converges to $A^{-1}$ for an appropriate initial guess. We propose the following method:

$$V_{n+1} = \frac{1}{4}V_n\bigl(44I - \psi_n(221I - \psi_n(670I - \psi_n(1365I - \psi_n(1968I - \psi_n(2058I - \psi_n(1572I - \psi_n(870I - \psi_n(340I - \psi_n(89I - \psi_n(14I - \psi_n))))))))))\bigr), \tag{16}$$

where $\psi_n = AV_n$; its derivation will be pointed out in Section 4. In numerical mathematics, it is essential to know the theoretical behavior of an approximate method. In what follows, we prove the convergence order of (16).
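Before turning to the analysis, for concreteness one step of (16) can be transcribed literally from the nested Horner form above. The sketch below (with the hypothetical name step16) is this naive transcription, which uses twelve matrix-matrix products; it is not the multiplication-saving grouping counted in Remark 4.

  (* One step of (16), transcribed directly from its Horner form; a naive
     sketch, not the 8-multiplication implementation discussed in Remark 4. *)
  step16[a_?MatrixQ, v_?MatrixQ] :=
   Module[{id = IdentityMatrix[Length[a]], psi = a.v},
    (1/4) v.(44 id - psi.(221 id - psi.(670 id - psi.(1365 id -
        psi.(1968 id - psi.(2058 id - psi.(1572 id - psi.(870 id -
        psi.(340 id - psi.(89 id - psi.(14 id - psi)))))))))))]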

Theorem 3. Assume that $A = [a_{i,j}]_{m \times m}$ is an invertible matrix with real or complex entries. If the initial guess $V_0$ satisfies

$$\|I - AV_0\| < 1, \tag{17}$$

then the iteration (16) converges to $A^{-1}$ with at least tenth order of convergence.

Proof. In order to prove the convergence of (16), we consider the same assumptions as we did in the proof of Theorem 2. We then have

$$E_{n+1} = I - AV_{n+1} = \frac{1}{4}\left(E_n^{10} + 2E_n^{11} + E_n^{12}\right). \tag{18}$$

Thus, we obtain

$$\|E_{n+1}\| \le \frac{1}{4}\left(\|E_n\|^{10} + 2\|E_n\|^{11} + \|E_n\|^{12}\right) \le \|E_n\|^{10}. \tag{19}$$

Moreover, since $\|E_0\| < 1$ and $\|E_1\| \le \|E_0\|^{10} < 1$, we attain

$$\|E_{n+1}\| \le \|E_n\|^{10} \le \|E_{n-1}\|^{10^2} \le \cdots \le \|E_0\|^{10^{n+1}}, \tag{20}$$

where the right-hand side of (20) tends to zero as $n \to \infty$. That is, $I - AV_n \to 0$ as $n \to \infty$, and thus, for (16), we obtain

$$V_n \to A^{-1}, \quad \text{as } n \to \infty. \tag{21}$$

We must now establish the tenth order of convergence of (16). To this aim, we denote by $\varepsilon_n = A^{-1} - V_n$ the error matrix of the iterative procedure (16). We have (using (18))

$$I - AV_{n+1} = \frac{1}{4}\left[(I - AV_n)^{10} + 2(I - AV_n)^{11} + (I - AV_n)^{12}\right]. \tag{22}$$

Equation (22) yields

$$A(A^{-1} - V_{n+1}) = \frac{1}{4}\left[\left(A(A^{-1} - V_n)\right)^{10} + 2\left(A(A^{-1} - V_n)\right)^{11} + \left(A(A^{-1} - V_n)\right)^{12}\right], \tag{23}$$

$$A\varepsilon_{n+1} = \frac{1}{4}\left[(A\varepsilon_n)^{10} + 2(A\varepsilon_n)^{11} + (A\varepsilon_n)^{12}\right]. \tag{24}$$

Using (24), we attain

$$\varepsilon_{n+1} = \frac{1}{4}\left[\varepsilon_n(A\varepsilon_n)^9 + 2\varepsilon_n(A\varepsilon_n)^{10} + \varepsilon_n(A\varepsilon_n)^{11}\right], \tag{25}$$

which, by taking norms of both sides, gives

$$\|\varepsilon_{n+1}\| \le \frac{1}{4}\left[\|\varepsilon_n\|\,\|A\varepsilon_n\|^9 + 2\|\varepsilon_n\|\,\|A\varepsilon_n\|^{10} + \|\varepsilon_n\|\,\|A\varepsilon_n\|^{11}\right], \tag{26}$$

and consequently

$$\|\varepsilon_{n+1}\| \le \frac{1}{4}\left[\|A\|^9 + 2\|A\|^{10}\|\varepsilon_n\| + \|A\|^{11}\|\varepsilon_n\|^2\right]\|\varepsilon_n\|^{10}. \tag{27}$$

This shows that the method (16) converges to $A^{-1}$ with at least tenth order of convergence. The proof is now complete.

A simple corollary of Theorems 2 and 3 is that, under the same conditions and initial values, higher order methods arrive at the convergence phase faster than lower order methods, which reduces the number of iterations. For instance, if $\|E_0\| = 0.9$ and the goal is $\|E_n\| \le 10^{-6}$, then, roughly (using the bounds (8) and (20) and ignoring the constant factors), the second-order scheme (1) needs about 8 iterations, the third-order scheme (5) about 5, and the tenth-order scheme (16) only 3.

Note that the sequence $\{V_n\}_{n=0}^{\infty}$ of (16) may be applied not only to the left preconditioned linear system $V_nAx = V_nb$ but also to the right preconditioned linear system $AV_ny = b$, where $x = V_ny$, provided that the initial matrix satisfies $AV_0 = V_0A$ (note that the adaptive choice $V_0 = \delta I$ below satisfies this automatically).

The iterative methods discussed so far are sensitive to the choice of the initial guess used to start the process. As a matter of fact, the high accuracy and efficiency of such iterative algorithms are guaranteed only if the initial value satisfies the appropriate condition given in Theorems 2 and 3.

Thus, in order to preserve the convergence order, we remind the reader that an efficient way of producing $V_0$, as given in [11], is $V_0 = A^T/(\|A\|_1\|A\|_\infty)$. Another adaptive way is $V_0 = \delta I$, where $I$ is the identity matrix and $\delta \in \mathbb{R}$ should be determined so that $\|I - \delta A\| < 1$.
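In code, both choices are one-liners; the following sketch uses the hypothetical names v0Transpose and v0Scaled, and leaves the choice of the scalar $\delta$ to the user.

  (* Two standard initial guesses for Schulz-type iterations (sketch). *)
  v0Transpose[a_?MatrixQ] := Transpose[a]/(Norm[a, 1] Norm[a, Infinity])
  v0Scaled[a_?MatrixQ, delta_] := delta IdentityMatrix[Length[a]]
  (* delta must satisfy Norm[IdentityMatrix[Length[a]] - delta a] < 1 *)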

Remark 4. The new iteration (16) reaches tenth order by using 8 matrix-matrix multiplications, while the schemes (1), (2), and (5) reach second, third, and third order, respectively, by consuming 2, 3, and 4 matrix-matrix multiplications. Hence, the contributed method has lower computational cost than its competitors. This superiority will be clarified in Section 4. It should also be remarked that iterations of any convergence order for nonsingular square matrices are constructed in Section 6 of Chapter 2 of the book [12], whereas the general approach for rectangular matrices is discussed in Chapter 5 of [9] and the recent paper [13]. In fact, in those constructions a convergence order $\rho$ is always attained using $\rho$ matrix-matrix products, such as (1), which reaches order 2 using two matrix-matrix multiplications.

Remark 5. Two important matters must be mentioned at this point to ease the perception of why a higher order (efficient) method such as (16), with 8 matrix-matrix products to reach at least tenth order of convergence, is practical. First, following the comparative informational efficiency index of inverse-finders [14], defined by $E = \rho/\theta$, wherein $\rho$ and $\theta$ stand for the convergence order and the number of matrix-matrix products, the informational efficiency of (16), that is, $10/8 = 1.25$, beats those of its competitors: $2/2 = 1$ for (1), $3/3 = 1$ for (2), $4/4 = 1$ for (4), and $3/4 = 0.75$ for (3). Second, the significance of the new scheme shows up in the implementation of such schemes. To illustrate further, such iterations depend strongly on the initial matrices. Though there are certain efficient ways of finding $V_0$, in general such initial approximations require a high number of iterations to arrive at the convergence phase. On the other hand, each cycle of the implementation of such Schulz-type methods includes one stopping criterion based on a matrix norm, and this imposes a burden that is heavier for low order methods than for high order methods such as (16). Since the computation of a matrix norm (usually $\|\cdot\|_2$ for dense matrices and $\|\cdot\|_F$ for large sparse matrices) takes considerable time, a higher number of steps/iterations (the result of lower order methods) will be costlier than a lower number of steps/iterations.

Remark 6. The index we are using differs from the classical efficiency index as defined in [15]. Traub in [15] discussed why an informational efficiency index is needed. In fact, the kind of efficiency index that users apply depends on the situation being dealt with. Note that the cost of each iteration (step) is governed by the number of matrix-matrix products, the order, and the computation of a stopping criterion. Thus, the informational index is meaningful in this case, because it measures the gain brought each time a matrix-matrix product, along with the order and the stopping criterion, is computed.

4. Numerical Reports

In this section, some experiments are presented to demonstrate the capability of the suggested method. The computer algebra system MATHEMATICA 8 [16, 17] has been used. For numerical comparisons, we have used the methods (1), (2), (5), and (16), with double precision in all calculations. The computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU 3.20 GHz, and 4 GB of RAM.

Experiment 1. In this test, 10 sparse random complex matrices of dimension 2500 are considered as follows:

n = 2500; number = 10;
Table[A[j] = SparseArray[{Band[{-100, 700}] -> RandomReal[],
     {i_, i_} :> 3.3,
     Band[{500, -150}, {n - 20, n - 25}] -> {RandomReal[], 10. + I},
     Band[{600, 250}, {n - 100, n - 400}] ->
      {RandomReal[10], RandomReal[10]}}, {n, n}, 0.], {j, number}];

In this test, the stopping criterion is $\|I - V_nA\|_1 \le 10^{-6}$, and the maximum number of iterations allowed is set to 100. Note that in this test the initial choice has been constructed by $V_0 = A^T/(\|A\|_1\|A\|_\infty)$. We also plot the condition numbers of the 10 test matrices in Figure 1.
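The driver we have in mind is a simple fixed-point loop around this stopping rule; runMethod below is an illustrative name (a sketch, not the original code), and step stands for any of the one-step functions sketched in this paper.

  (* Generic driver: iterate until ||I - V.A||_1 <= 10^-6 or 100 steps. *)
  runMethod[a_?MatrixQ, step_] :=
   Module[{id = IdentityMatrix[Length[a]], v, n = 0},
    v = Transpose[a]/(Norm[a, 1] Norm[a, Infinity]);  (* V0 as above *)
    While[Norm[id - v.a, 1] > 10.^-6 && n < 100,
     v = step[a, v]; n++];
    {v, n}]  (* approximate inverse and iteration count *)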

The results of the comparisons for this test problem are presented in Figures 2-3. We have distinguished the curves by various symbols (circle, triangle, and so forth) alongside different colors. The attained results confirm the robustness of the proposed iterative method (16) by a clear reduction in the number of iterations and the elapsed time. Note that, in the figures of this paper, Schulz, KSM, MM, and proposed method (PM) stand for (1), (2), (5), and (16), respectively.

In general, iterative Schulz-type methods are very useful for large sparse matrices having a sparse inverse or when only an approximate inverse is needed.

After a few iterations of such methods, the computed approximate inverse may become dense, and thus the whole procedure might slow down. To remedy this, a threshold can be imposed in the implemented algorithm. In MATHEMATICA, this can be done by the command Chop[expr, tol]. In Experiment 1, we set tol to $10^{-10}$. This implementation technique is also fruitful for preconditioning.
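For instance, a thresholded step might look as follows; this is a sketch with the hypothetical name stepChopped, built on the step16 sketch given earlier and the tolerance $10^{-10}$ of Experiment 1.

  (* Keep the iterate sparse: drop entries below 10^-10 after each step. *)
  stepChopped[a_?MatrixQ, v_?MatrixQ] := Chop[step16[a, v], 10.^-10]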

We also point out that the construction of (16) is based on applying the nonlinear equation solver (in the spirit of [3])

[the underlying scalar iteration is not reproducible in the source], (28)

to the matrix equation $F(V) = V^{-1} - A = 0$, where, for example, $f[z_n, y_n] = (z_n - y_n)^{-1}(f(z_n) - f(y_n))$ is the two-point divided difference; this gives us the desired tenth-order method (16).

Remark 7. As discussed before, an important application of such solvers is to provide robust preconditioners for solving linear systems of equations. To illustrate further, we have carefully chosen the following example, a practical problem in which the GMRES solver fails to solve the system resulting from a discretization when the maximum number of steps allowed to GMRES is 2000.

Experiment 2. We consider solving a boundary value problem (BVP) using finite difference discretization. Let us solve the BVP

$$\frac{\partial^2 u}{\partial x^2} + f(x)u = g(x), \quad u(0) = 0, \quad u'(1) = 0, \tag{29}$$

where $f(x) = 1 + 100\exp(-(321(x - 1/2))^2)$, $g(x) = \sin(\pi x)$, and the number of discretization points in the interval $[0, 1]$ is $n = 1000$.

The implementation for finding the solution with tolerance $10^{-6}$ in the MATHEMATICA environment is as follows:

n = 1000; h = 1./n; xgrid = h Range[1, n];
f[x_] := 1 + 100 Exp[-(321 (x - 1/2))^2];
g[x_] := Sin[Pi x];
id = SparseArray[{i_, i_} -> 1., {n, n}, 0.];
d2 = SparseArray[{{i_, i_} -> -2., {n, n - 1} -> 2.,
    {i_, j_} /; Abs[i - j] == 1 -> 1.}, {n, n}, 0.];

Now the coefficient matrix and the right-hand side vector for finding the solution of the BVP (29) in sparse form can be obtained by

A = SparseArray[d2/h^2 + id f[xgrid]];
b = g[xgrid];

As in other problems, the most time-consuming step is solving the sparse system $Ax = b$. Herein, we choose the Krylov subspace method GMRES to solve the system, without preconditioning and with $V_5$ produced by the iterative method (1), $V_4$ produced by the iterative methods (2) and (5), and $V_2$ produced by the iterative method (16), in order to solve the left preconditioned system $V_iAx = V_ib$.

In this way, the system becomes well behaved, and we expect to find the solution of the BVP (29) in less computational time than with nonpreconditioned GMRES. The timing results are given in Table 1. We remark that in this test $V_0 = \mathrm{diag}(1/a_{11}, 1/a_{22}, \ldots, 1/a_{nn})$ has been chosen as the initial approximation, based on [18], where $a_{ii}$ is the $i$th diagonal entry of $A$.
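A sketch of this pipeline is given below; v0Diag and precondSolve are hypothetical names, and we form the left preconditioned system $V_iAx = V_ib$ explicitly before handing it to a Krylov solver, rather than through LinearSolve's preconditioner options. For Experiment 2, one would call, e.g., precondSolve[A, b, step16, 2] for the $V_2$ of (16).

  (* Diagonal initial guess V0 = diag(1/a_11, ..., 1/a_nn), cf. [18],
     followed by a few Schulz-type steps and a Krylov solve (sketch). *)
  v0Diag[a_?MatrixQ] := DiagonalMatrix[1./Normal[Diagonal[a]]]
  precondSolve[a_, b_, step_, iters_] :=
   Module[{v = v0Diag[a]},
    Do[v = step[a, v], {iters}];  (* build the preconditioner V_i *)
    LinearSolve[v.a, v.b, Method -> "Krylov"]]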

As is obvious from Table 1, nonpreconditioned GMRES fails to converge even after 2000 iteration steps, while the required accuracy can be attained using preconditioning. The best time, beating all the other approaches, comes from the new scheme (16). The left preconditioned system resulting from (16), shown as (16)-PGMRES, is reliable in solving ill-conditioned linear systems.

Note that the time reported for PGMRES is the whole time of constructing the initial matrix, obtaining the preconditioner $V_i$, and solving the left preconditioned system $V_iAx = V_ib$ by GMRES.

Experiment 3. In order to compare the preconditioners obtained from the new method with the well-known preconditioners of the literature resulting from incomplete LU factorizations [1], we consider solving a linear sparse system $Ax = b$ of dimension 841 using BiCGSTAB. The matrix $A$ has been chosen from the MatrixMarket database as A = ExampleData["Matrix", "YOUNG1C"], while the right-hand side vector is $b = (1, 1, \ldots, 1)^T$. The solution is $(-0.0177027 - 0.00693171i, \ldots, -0.0228083 - 0.00589176i)^T$. Figure 4 shows the plot of the matrix $A$.

The left preconditioned system using $V_3$ of Schulz, $V_2$ of KSM, and $V_4$ of the proposed method PM, along with the well-known preconditioning techniques ILU0, ILUT, and ILUTP, has been tested, while the initial vector has been chosen for all cases automatically by the command LinearSolve[] in MATHEMATICA 8. The timing comparisons for different tolerances (residual norms) are listed in Figure 5.

Note that, after a few iterations, the computed preconditioner of Schulz-type methods may be dense. As before, we must choose a strategy to control the sparsity of the preconditioner. Here this can be done by applying Chop[$V_i$, $10^{-6}$] to the obtained approximate inverses.

5. Conclusions

In this work, we have developed an iterative method for finding inverses of complex matrices, belonging to the well-known class of Schulz-type methods.

We have proposed a simpler form of a recently published method (3) in the form (5) and proved analytically that the scheme possesses third-order convergence.

We have shown that the suggested method (16) reaches tenth order of convergence. Moreover, we have discussed that (16) can be applied to the left and right preconditioned systems under a certain condition. The efficacy of the new scheme was illustrated numerically using the computer programming package MATHEMATICA.

One may note that the approximate inverse obtained per step of algorithm (5) or (16) can also easily be used as a preconditioner to reduce the ill-conditioning of a system, allowing users to apply iterative methods such as GMRES or BiCGSTAB to solve large scale sparse linear systems of algebraic equations efficiently.

http://dx.doi.org/10.1155/2014/563787

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2nd edition, 2003.

[2] F. Soleimani, F. Soleymani, A. Cordero, and J. R. Torregrosa, "On the extension of Householder's method for weighted Moore-Penrose inverse," Applied Mathematics and Computation, vol. 231, pp. 407-413, 2014.

[3] G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, pp. 57-59, 1933.

[4] A. Ben-Israel and T. N. E. Greville, Generalized Inverses, Springer, Berlin, Germany, 2nd edition, 2003.

[5] M. Benzi, "Preconditioning techniques for large linear systems: a survey," Journal of Computational Physics, vol. 182, no. 2, pp. 418-477, 2002.

[6] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260-270, 2011.

[7] V. Y. Pan, M. Van Barel, X. Wang, and G. Codevico, "Iterative inversion of structured matrices," Theoretical Computer Science, vol. 315, no. 2-3, pp. 581-592, 2004.

[8] W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433-3442, 2010.

[9] E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 2007.

[10] F. Toutounian and F. Soleymani, "An iterative method for computing the approximate inverse of a square matrix and the Moore-Penrose inverse of a non-square matrix," Applied Mathematics and Computation, vol. 224, pp. 671-680, 2013.

[11] V. Y. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109-1130, 1991.

[12] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.

[13] L. Weiguo, L. Juan, and Q. Tiantian, "A family of iterative methods for computing Moore-Penrose inverse of a matrix," Linear Algebra and Its Applications, vol. 438, no. 1, pp. 47-56, 2013.

[14] F. Soleymani, "A fast convergent iterative solver for approximate inverse of matrices," Numerical Linear Algebra with Applications, vol. 21, pp. 439-452, 2014.

[15] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, New York, NY, USA, 1964.

[16] http://reference.wolfram.com/language/tutorial/LinearAlgebraInMathematicaOverview.

[17] S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.

[18] L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications, vol. 7, no. 7-8, pp. 527-541, 2000.

M. Kafaei Razavi, (1) A. Kerayechian, (1) M. Gachpazan, (1) and S. Shateyi (2)

(1) Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran

(2) Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, Private Bag X5050, Thohoyandou 0950, South Africa

Correspondence should be addressed to S. Shateyi; Stanford.shateyi@univen.ac.za

Received 24 May 2014; Revised 25 July 2014; Accepted 28 July 2014; Published 14 September 2014

Academic Editor: Juan R. Torregrosa

TABLE 1: Comparison of the computational time in solving the
linear system Ax = b in Experiment 2.

Methods          GMRES   (1)-PGMRES   (2)-PGMRES   (5)-PGMRES   (16)-PGMRES
The whole time   Fail       9.21         7.31         4.76         3.15