
Two-level nonlinear elimination based preconditioners for inexact Newton methods with application in shocked duct flow calculation.

1. Introduction. In [13], an interesting nonlinear elimination (NE) algorithm was introduced for solving large sparse nonlinear systems of equations whose solution is badly scaled in part of the computational domain. The key idea of NE is to implicitly remove these components and obtain a better balanced system to which the classical inexact Newton method can be applied. The technique works extremely well for some relatively simple cases, and several recent attempts motivated by this technique have been made to extend inexact Newton methods to other nonlinear systems [3, 4, 5, 6, 7, 10, 11, 12]. In NE, an important step is to iteratively eliminate the identified bad components using a subdomain Newton method, which by itself may fail or take too many iterations to converge. In [13] it was suggested that the nonlinear elimination algorithm can be used in a nested fashion, i.e., NE can be used inside the outer NE when the regular inexact Newton method fails to converge in the implicit elimination step for the sub-nonlinear system. Even though the idea of nested NE is simple, it has never been studied, and actually realizing it is quite difficult. The aim of this paper is to formulate a two-level NE and embed it into the classical inexact Newton methods; the combination can be interpreted as a nonlinear Schur complement algorithm.

We briefly recall the classical inexact Newton algorithm with backtracking (INB) [8, 9], which is used as the basic building block of our algorithms for the global and some of the sub-nonlinear systems. Consider a given nonlinear function $F(x): \mathbb{R}^n \rightarrow \mathbb{R}^n$. We are interested in finding a vector $x^* \in \mathbb{R}^n$ such that

(1.1) $F(x^*) = 0,$

starting from an initial guess $x^{(0)} \in \mathbb{R}^n$. Here $F = [F_1, \ldots, F_n]^T$, $F_i = F_i(x_1, \ldots, x_n)$, and $x = [x_1, \ldots, x_n]^T$.
ALGORITHM 1.1 (Inexact Newton with Backtracking (INB)).

Given $x^{(0)}$

Evaluate $F(x^{(0)})$ and $\|F(x^{(0)})\|$

Set $k = 0$

While $\|F(x^{(k)})\| > \varepsilon_1 \|F(x^{(0)})\|$ and $\|F(x^{(k)})\| > \varepsilon_2$ do

 Compute the Jacobian matrix $F'(x^{(k)})$
 Inexactly solve the Jacobian system $F'(x^{(k)})\, s^{(k)} = -F(x^{(k)})$
 Update $x^{(k+1)} = x^{(k)} + \lambda^{(k)} s^{(k)}$, where $\lambda^{(k)} \in (0,1]$ is determined by a linesearch so that $\|F(x^{(k+1)})\|$ is sufficiently reduced

 Set $k = k + 1$

End While
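As a minimal illustration (not the authors' code), Algorithm 1.1 might be sketched in Python, with a dense finite-difference Jacobian, a direct solve standing in for the inexact Jacobian solve, and a simple halving backtracking linesearch:

```python
import numpy as np

def jacobian_fd(F, x, f, h=1e-7):
    """Forward finite-difference approximation of F'(x)."""
    J = np.empty((f.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - f) / h
    return J

def inb(F, x0, eps1=1e-10, eps2=1e-12, alpha=1e-4, max_iter=50):
    """Inexact Newton with backtracking (Algorithm 1.1), a minimal sketch.
    A dense direct solve stands in for the inexact Jacobian solve, and a
    simple halving linesearch enforces sufficient decrease of ||F||."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    f0 = np.linalg.norm(f)
    k = 0
    while np.linalg.norm(f) > eps1 * f0 and np.linalg.norm(f) > eps2 and k < max_iter:
        s = np.linalg.solve(jacobian_fd(F, x, f), -f)
        lam = 1.0
        # Backtrack: halve lambda until ||F|| is sufficiently reduced.
        while np.linalg.norm(F(x + lam * s)) > (1 - alpha * lam) * np.linalg.norm(f) and lam > 1e-8:
            lam *= 0.5
        x = x + lam * s
        f = F(x)
        k += 1
    return x, k
```

In a production setting the direct solve would be replaced by a preconditioned Krylov method with a forcing tolerance, as discussed in Section 4.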

Here $\varepsilon_1$ and $\varepsilon_2$ are the relative and absolute stopping tolerances. For the applications that we are interested in, $n$ is usually large, and in this case the algorithm has three expensive operations: the evaluation of $F(x)$, the construction of the Jacobian matrix, and the solution of the Jacobian system. It is important to note that all three operations are global in the sense that all components of $x$ and $F$ are involved. However, as observed in many numerical experiments, the trigger of these expensive "all components involved" operations is often local. In other words, only a small number of $F_1, F_2, \ldots, F_n$ are large, and these "bad components" are not random; they are often associated with certain interesting physics of the solution of the PDE. For example, in the shocked duct flow problem considered here, all of the "bad components" are associated with the shock wave located in a small region inside the computational domain. In other applications, they may be associated with a boundary layer or other local singularities [13, 14, 16]. NE is a subproblem solver inside a global INB iteration that is designed to smooth out these "bad components" so that the total number of global INB iterations is reduced.
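As a concrete (and purely illustrative) criterion, the "bad" components could be picked directly from the residual vector, e.g. all indices whose residual exceeds a fixed fraction of the largest one; the threshold rule below is our own, not taken from the paper:

```python
import numpy as np

def split_index_set(f, beta=0.1):
    """Split the index set I = {0, ..., n-1} into "bad" and "good" parts
    by residual size.  The criterion (a fixed fraction beta of the
    largest residual component) is purely illustrative."""
    f = np.abs(np.asarray(f, dtype=float))
    bad = np.flatnonzero(f > beta * f.max())
    good = np.setdiff1d(np.arange(f.size), bad)
    return bad, good
```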

The rest of the paper is organized as follows. In Section 2, we formulate the multilevel NE algorithm. We describe a shocked duct flow problem in Section 3. Some numerical results and concluding remarks are given in Sections 4 and 5, respectively.

2. Multilevel nonlinear elimination algorithms. We begin with the one-level nonlinear elimination algorithm. The first step is to split the residual components $F_1, F_2, \ldots, F_n$ into two sets: the "bad components" to be eliminated and the good components to be handled by the classical inexact Newton algorithm. Let $I = \{1, 2, \ldots, n\}$ be an index set, i.e., one integer for each unknown $x_i$ and each residual function $F_i$. Assume that $S^b_1$ ("b" for bad) is a subset of $I$ with $m$ components and that $S^g_1$ ("g" for good), with $n - m$ components, is its complement; that is,

$$I = S^b_1 \cup S^g_1.$$

Usually $m \ll n$. For this partition, we define two subspaces,

$$V^b_1 = \{v \,|\, v = [v_1, \ldots, v_n]^T \in \mathbb{R}^n,\ v_i = 0 \ \text{if} \ i \notin S^b_1\}$$

and

$$V^g_1 = \{v \,|\, v = [v_1, \ldots, v_n]^T \in \mathbb{R}^n,\ v_i = 0 \ \text{if} \ i \notin S^g_1\},$$

respectively, and the corresponding restriction operators, $R^b_1$ and $R^g_1$, which transfer data from $\mathbb{R}^n$ to $V^b_1$ and $V^g_1$, respectively. Here we use the subscript 1 to indicate the partition, the subspaces, and the restriction/interpolation operators at the first level, to distinguish them from the corresponding ones defined later at the second level. Using the restriction operator $R^b_1$, we define the sub-nonlinear function $F^b_1: \mathbb{R}^n \rightarrow V^b_1$ as

$$F^b_1(x) = R^b_1 F(x).$$

For any given $x \in \mathbb{R}^n$, $T^b(x): \mathbb{R}^n \rightarrow V^b_1$ is defined as the solution of the following subspace nonlinear system,

(2.1) $F^b_1\big(R^g_1 x + T^b(x)\big) = 0.$

Using the subspace mapping functions, we introduce a new global nonlinear function,

$$y = G(x) \equiv R^g_1 x + T^b(x).$$

Note that for a given $x$, the evaluation of $G(x)$ is not straightforward. A nonlinear system corresponding to the subspace $V^b_1$ has to be solved using either the classical INB algorithm restricted to $V^b_1$ or a NE algorithm in $V^b_1$. Let us summarize the above procedure as the following algorithm.
ALGORITHM 2.1 (Evaluate $y = G(x)$).

If flag = 0 then $y = x$, else

 If flag = 1 (one-level nonlinear elimination):

  Solve (2.1) by INB using $R^b_1 x$ as an initial guess.

 If flag = 2 (two-level nonlinear elimination):

  Solve (2.1) by one-level NE using $R^b_1 x$ as an initial guess.

end if

Compute $y = R^g_1 x + T^b(x)$

Here flag is an input parameter provided from elsewhere in the algorithm to indicate whether a nonlinear elimination is needed and whether one-level or two-level NE is to be called. Two-level NE becomes necessary when the local problem (2.1) is still too difficult to solve by INB; in this case, another partition of the index set $S^b_1$ into two subsets is needed, i.e., $S^b_1 = S^b_2 \cup S^g_2$. At the second level, for the subset $S^b_1$, two subspaces of $\mathbb{R}^n$, $V^b_2$ and $V^g_2$, the corresponding restriction operators, $R^b_2$ and $R^g_2$, as well as the sub-nonlinear function $F^b_2 = R^b_2 F$ can all be defined in a manner similar to the ones at the first level.
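A minimal sketch of Algorithm 2.1 for the one-level case (flag = 1): the subspace system is solved by a plain Newton iteration with a finite-difference Jacobian block, standing in for INB; the index set `bad` plays the role of $S^b_1$, and the two-level branch is omitted. The function names here are ours.

```python
import numpy as np

def subspace_jacobian_fd(F, y, bad, h=1e-7):
    """FD approximation of the bad-bad Jacobian block R^b_1 F'(y) (R^b_1)^T."""
    fb = F(y)[bad]
    Jb = np.empty((len(bad), len(bad)))
    for j, idx in enumerate(bad):
        yp = y.copy()
        yp[idx] += h
        Jb[:, j] = (F(yp)[bad] - fb) / h
    return Jb

def evaluate_G(F, x, bad, flag=1, tol=1e-10, max_iter=100):
    """Evaluate y = G(x) = R^g_1 x + T^b(x) (Algorithm 2.1), a sketch.
    The subspace system F^b_1(R^g_1 x + v) = 0 is solved by a plain
    Newton iteration (standing in for INB), starting from R^b_1 x;
    the two-level branch (flag = 2) is omitted."""
    if flag == 0:
        return x.copy()
    y = x.copy()                       # good components stay frozen
    for _ in range(max_iter):
        fb = F(y)[bad]
        if np.linalg.norm(fb) <= tol:
            break
        y[bad] += np.linalg.solve(subspace_jacobian_fd(F, y, bad), -fb)
    return y
```

Note that only the components of `y` indexed by `bad` are ever modified; the good components are carried through unchanged, exactly as in $y = R^g_1 x + T^b(x)$.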

Now NE in conjunction with INB can be realized with the following algorithm.

ALGORITHM 2.2 (Inexact Newton with Backtracking and Nonlinear Elimination (INB-NE)).

Given $x^{(0)}$. Set $k = 0$ and flag = 1

Compute $y^{(0)} = G(x^{(0)})$

Evaluate $F(y^{(0)})$ and $\|F(y^{(0)})\|$

While $\|F(y^{(k)})\| > \varepsilon_1 \|F(y^{(0)})\|$ and $\|F(y^{(k)})\| > \varepsilon_2$ do

 Compute $F'(y^{(k)})$

 Inexactly solve $F'(y^{(k)})\, s^{(k)} = -F(y^{(k)})$

 Update $x^{(k+1)} = x^{(k)} + \lambda^{(k)} s^{(k)}$, where $\lambda^{(k)}$ is determined by a linesearch so that $\|F\|$ is sufficiently reduced

 Compute $y^{(k+1)} = G(x^{(k+1)})$

 Evaluate $F(y^{(k+1)})$ and $\|F(y^{(k+1)})\|$

 If $\|F(y^{(k)})\| < \varepsilon_3 \|F(y^{(0)})\|$ then flag = 0

 Set $k = k + 1$

End While
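As an illustration of Algorithm 2.2, the following self-contained Python sketch runs the outer Newton iteration with inner elimination on a small two-component toy system. The system, the plain inner Newton solver, the halving linesearch, and the choice to monitor the residual of the eliminated iterate are all our own simplifications, not details taken from the paper.

```python
import numpy as np

# A two-component toy system whose second equation is far more nonlinear
# than the first, mimicking a localized "bad component".
def F(x):
    return np.array([x[0] + 0.1 * x[1] - 1.2,
                     np.exp(5.0 * x[1]) - 1.0 + 0.1 * x[0]])

def fd_jac(fun, x, h=1e-7):
    """Forward-difference Jacobian of a vector function at x."""
    f = fun(x)
    J = np.empty((f.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (fun(xp) - f) / h
    return J

def eliminate(x, bad, tol=1e-12, max_iter=100):
    """y = G(x): drive the bad residuals to zero over the bad unknowns
    (the elimination step (2.1)), with the good components frozen."""
    y = x.copy()
    for _ in range(max_iter):
        fb = F(y)[bad]
        if np.linalg.norm(fb) <= tol:
            break
        Jb = fd_jac(F, y)[np.ix_(bad, bad)]
        y[bad] += np.linalg.solve(Jb, -fb)
    return y

def inb_ne(x0, bad, eps1=1e-12, eps3=1e-4, alpha=1e-4, max_iter=30):
    """Sketch of Algorithm 2.2: outer Newton on x, elimination y = G(x),
    and a switch back to plain INB once ||F(y)|| is small."""
    flag = 1
    G = lambda z: eliminate(z, bad) if flag == 1 else z.copy()
    x = x0.copy()
    y = G(x)
    f0 = np.linalg.norm(F(y))
    for _ in range(max_iter):
        fy = F(y)
        if np.linalg.norm(fy) <= eps1 * f0:
            break
        if np.linalg.norm(fy) < eps3 * f0:
            flag = 0                   # switch: G becomes the identity
        s = np.linalg.solve(fd_jac(F, y), -fy)
        lam = 1.0
        while lam > 1e-8:
            if np.linalg.norm(F(G(x + lam * s))) <= (1 - alpha * lam) * np.linalg.norm(fy):
                break
            lam *= 0.5
        x = x + lam * s
        y = G(x)
    return y
```

On this toy problem the elimination step absorbs the stiff exponential equation, so the outer iteration converges in a handful of steps from a poor initial guess.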

INB-NE can be interpreted as follows. Find the solution $y^* \in \mathbb{R}^n$ of (1.1) by solving a right nonlinearly preconditioned system, $F(G(x^*)) = 0$. Once $x^*$ is found, the solution of the original system can be obtained as $y^* = G(x^*)$. It was shown theoretically in [13] that, under certain assumptions, INB-NE possesses local quadratic convergence provided that the subspace nonlinear problems (2.1) are solved exactly. Note that if $F(x)$ is linear, i.e.,

$$F(x) = \begin{bmatrix} B & E \\ F & C \end{bmatrix} \begin{bmatrix} R^b_1 x \\ R^g_1 x \end{bmatrix} - \begin{bmatrix} f \\ g \end{bmatrix},$$

then the elimination step (2.1) yields $T^b(x) = B^{-1}\left(f - E\, R^g_1 x\right)$. As a consequence, solving $F(G(x)) = 0$ is mathematically equivalent to decoupling it into two steps: first, solve the reduced system, $U(R^g_1 x) = g - F B^{-1} f$, where $U = C - F B^{-1} E$ is the Schur complement matrix, and then compute $R^b_1 x = -B^{-1} E (R^g_1 x) + B^{-1} f$. However, rather than solving the Schur complement system, in practice it is desirable and often more efficient to solve the full system, since the Schur complement is denser and good preconditioners may not be available.
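The equivalence in the linear case can be checked numerically. The following NumPy sketch (all matrices randomly generated with a fixed seed; the paper's off-diagonal block $F$ is named `Fblk` to avoid clashing with the residual function) solves the full block system and the two-step Schur complement form and compares the results:

```python
import numpy as np

rng = np.random.default_rng(0)
m, ng = 3, 4                         # sizes of the bad and good blocks
B = rng.normal(size=(m, m)) + 5.0 * np.eye(m)    # diagonally shifted blocks
C = rng.normal(size=(ng, ng)) + 5.0 * np.eye(ng)
E = rng.normal(size=(m, ng))
Fblk = rng.normal(size=(ng, m))      # the paper's off-diagonal block "F"
f = rng.normal(size=m)
g = rng.normal(size=ng)

# Full system: [B E; Fblk C] [x_b; x_g] = [f; g]
A = np.block([[B, E], [Fblk, C]])
x_full = np.linalg.solve(A, np.concatenate([f, g]))

# Two-step Schur complement solve: U x_g = g - Fblk B^{-1} f,
# with U = C - Fblk B^{-1} E, then x_b = B^{-1}(f - E x_g).
Binv_f = np.linalg.solve(B, f)
Binv_E = np.linalg.solve(B, E)
U = C - Fblk @ Binv_E
x_g = np.linalg.solve(U, g - Fblk @ Binv_f)
x_b = Binv_f - Binv_E @ x_g

assert np.allclose(x_full, np.concatenate([x_b, x_g]))
```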

Note that in our approach the subproblem corresponding to the bad components is simply a restriction of the global nonlinear system to the subdomain, not a Schur complement of the global system with respect to the subdomain consisting of the bad components. The Schur complement approach is considerably more expensive and is not studied in this paper.

Since extra function evaluations are needed for $G(x)$, NE is intended for the cases in which INB fails to converge or converges unacceptably slowly. As suggested in [13, bottom of p. 555], when the intermediate solution is close to the exact solution, NE is switched back to INB by letting $G(x) = x$. The switching condition is controlled by $\varepsilon_3$ in Algorithm 2.2.

3. A shocked duct flow problem. Compressible flows passing through a converging-diverging duct are governed by the compressible Navier-Stokes equations [1, 2]. Instead of solving the Navier-Stokes equations, we consider a simpler model problem, a quasi-one-dimensional full potential problem [6, 16] defined on the interval $0 \le x \le 2$,

(3.1) $\big(A(x)\, \rho(\phi_x)\, \phi_x\big)_x = 0, \quad 0 < x < 2,$

subject to Dirichlet boundary conditions with the value $\phi_R$ prescribed at the right end $x = 2$. Here $A(x)$ is the area of the cross-section of the duct at $x$,

$$A(x) = 0.4 + 0.6 (x - 1)^2,$$

and the density function is described as

(3.2) $\rho(u) = \left(1 + \frac{\gamma - 1}{2}\left(1 - u^2\right)\right)^{1/(\gamma - 1)}.$

Here $\gamma = 1.4$ is the ratio of specific heats for air, $u = \phi_x$ is the flow velocity, and $c$ is the speed of sound. See Fig. 3.1 (left) for the geometric configuration of the shocked duct flow problem. Although this problem looks quite simple, it is still considered a difficult test for the convergence of inexact Newton methods, because the solution develops a strong shock in the domain as the value of $\phi_R$ becomes larger than 1.15; see Fig. 3.1 (right). The flow is supersonic at the points in the interval $(0, 2)$ where the Mach number, $M = |u|/c$, is greater than 1.
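The pointwise quantities above are easy to tabulate. The sketch below assumes the isentropic density law standard for full potential flow, $\rho(u) = (1 + \frac{\gamma-1}{2}(1 - u^2))^{1/(\gamma-1)}$ with $c^2 = \rho^{\gamma-1}$, which is our reading of (3.2); with this normalization the flow is exactly sonic ($M = 1$) at $u = 1$:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def density(u):
    """Isentropic density law (our reading of (3.2)):
    rho = (1 + (gamma-1)/2 * (1 - u^2))^(1/(gamma-1))."""
    return (1.0 + 0.5 * (GAMMA - 1.0) * (1.0 - u**2)) ** (1.0 / (GAMMA - 1.0))

def mach(u):
    """Mach number M = |u|/c with c^2 = rho^(gamma-1)."""
    c = np.sqrt(density(u) ** (GAMMA - 1.0))
    return np.abs(u) / c
```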


To approximate (3.1) by a standard finite difference method, we begin by introducing a uniform grid, $0 = x_0 < x_1 < x_2 < \cdots < x_n = 2$, with grid size $h = 2/n$. Let $\Phi = [\phi^h_1, \phi^h_2, \ldots, \phi^h_{n-1}]^T$ be the numerical approximations at the interior grid points. We define points $x_{i-1/2}$ and $x_{i+1/2}$ as the midpoints of the subintervals $[x_{i-1}, x_i]$ and $[x_i, x_{i+1}]$, respectively. Consider the subinterval $[x_{i-1/2}, x_{i+1/2}]$. We approximate $(A \rho(\phi_x) \phi_x)_x$ at the point $x_i$ using a second-order centered finite difference method, i.e.,

$$\big(A \rho(\phi_x) \phi_x\big)_x \Big|_{x_i} \approx \frac{1}{h}\left[A_{i+1/2}\, \rho_{i+1/2}\, \frac{\phi^h_{i+1} - \phi^h_i}{h} - A_{i-1/2}\, \rho_{i-1/2}\, \frac{\phi^h_i - \phi^h_{i-1}}{h}\right],$$

where $A_{i+1/2} = A(x_{i+1/2})$ and $\rho_{i+1/2}$ denotes the density evaluated from the difference quotient $(\phi^h_{i+1} - \phi^h_i)/h$.
For purely subsonic flows, using (3.2) to calculate the flow density is sufficient. However, for transonic flows, this formulation needs to be modified in order to capture the shock. By applying a first-order density upwinding scheme as suggested by Young et al. [14, 16], a modified flow density value at the point $x_{i+1/2}$ is expressed as

$$\tilde{\rho}_{i+1/2} = \rho_{i+1/2} - \tilde{\mu}_{i+1/2}\left(\rho_{i+1/2} - \rho_{i-1/2}\right),$$

where the switching parameter $\tilde{\mu}_{i+1/2}$ is defined as $\tilde{\mu}_{i+1/2} = \max\{\mu_{i+1/2},\ \mu_{i-1/2}\}$ with $\mu_{i+1/2} = \max\{0,\ 1 - M_c^2/M_{i+1/2}^2\}$. Here $M_c$ is called the cutoff Mach number and $M_{i+1/2}$ is the numerical Mach number at $x_{i+1/2}$ given by

$$M_{i+1/2} = \frac{1}{c_{i+1/2}}\left|\frac{\phi^h_{i+1} - \phi^h_i}{h}\right|.$$
In summary, the discrete shocked duct flow problem can be written as a large sparse nonlinear system of algebraic equations,

(3.3) $F(\Phi) = 0,$

where $F(\Phi) = [F_1(\Phi), F_2(\Phi), \ldots, F_{n-1}(\Phi)]^T$ with $F_i$ defined as

$$F_i(\Phi) = \frac{1}{h}\left[A_{i+1/2}\, \tilde{\rho}_{i+1/2}\, \frac{\phi^h_{i+1} - \phi^h_i}{h} - A_{i-1/2}\, \tilde{\rho}_{i-1/2}\, \frac{\phi^h_i - \phi^h_{i-1}}{h}\right], \quad i = 1, \ldots, n-1.$$

In our implementation, the Jacobian matrix of $F(\Phi)$ is constructed approximately using forward finite differences. Note that for purely subsonic flows, the formulation (3.3) leads to a symmetric, weakly diagonally dominant, tridiagonal Jacobian matrix, while for transonic flows the associated Jacobian is nonsymmetric due to the derivative of the upwinded density $\tilde{\rho}_{i \pm 1/2}$ corresponding to the supersonic region.
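Putting the pieces together, a residual evaluation for (3.3) might look as follows. The boundary condition $\phi(0) = 0$, the cutoff value $M_c = 1$, the switching formula, and the left-to-right upwind direction are assumptions of this sketch rather than details taken verbatim from the paper:

```python
import numpy as np

GAMMA = 1.4      # ratio of specific heats
MC = 1.0         # cutoff Mach number M_c (value is our assumption)

def area(x):
    """Duct cross-section A(x) = 0.4 + 0.6 (x - 1)^2."""
    return 0.4 + 0.6 * (x - 1.0) ** 2

def rho(u):
    """Isentropic density law (our reconstruction of (3.2))."""
    return (1.0 + 0.5 * (GAMMA - 1.0) * (1.0 - u ** 2)) ** (1.0 / (GAMMA - 1.0))

def residual(phi_int, n, phi_R):
    """Discrete residual F(Phi) of (3.3) at the n-1 interior grid points.
    phi_int holds [phi_1, ..., phi_{n-1}].  Density upwinding is switched
    on where the local Mach number exceeds MC."""
    h = 2.0 / n
    phi = np.concatenate([[0.0], phi_int, [phi_R]])  # assumed BC phi(0) = 0
    x = np.linspace(0.0, 2.0, n + 1)
    xm = 0.5 * (x[:-1] + x[1:])               # midpoints x_{i+1/2}
    u = np.diff(phi) / h                      # phi_x on each subinterval
    r = rho(u)
    c = r ** ((GAMMA - 1.0) / 2.0)            # c^2 = rho^(gamma-1)
    M = np.abs(u) / c
    mu = np.maximum(0.0, 1.0 - (MC / np.maximum(M, 1e-12)) ** 2)
    rt = r.copy()
    rt[1:] = r[1:] - mu[1:] * (r[1:] - r[:-1])  # first-order upwinded density
    flux = area(xm) * rt * u                    # A * rho~ * phi_x at midpoints
    return np.diff(flux) / h                    # F_i at x_1, ..., x_{n-1}
```

For a subsonic boundary value such as $\phi_R = 0.5$ the switching parameter is identically zero and the scheme reduces to the plain centered discretization.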

4. Numerical experiments and observations. In this section, we present some numerical results for solving the shocked duct flow problem (3.3) using the classical inexact Newton method and the new algorithm. The stopping condition for Newton is

$$\|F(x^{(k)})\| \le \varepsilon_1 \|F(x^{(0)})\|,$$

and a linear initial guess that interpolates the boundary conditions is used for Newton in all test cases. In our implementation of the classical inexact Newton method with backtracking, as described in Algorithm 1.1, right-preconditioned GMRES [15] is used for solving the global Jacobian system with a zero initial guess. The stopping condition for GMRES is

$$\left\|F(x^{(k)}) + F'(x^{(k)})\, M_k^{-1}\left(M_k s^{(k)}\right)\right\| \le \eta\, \|F(x^{(k)})\|.$$

Here $\eta = 10^{-6}$ and $M_k^{-1}$ is a block Jacobi preconditioner constructed using the matrix $F'(x^{(k)})$. In the tests, we partition the computational domain into 15 non-overlapping subdomains, and therefore $M_k^{-1}$ has 15 blocks. The global Newton step is updated by

$$x^{(k+1)} = x^{(k)} + \lambda^{(k)} s^{(k)}.$$

The step length $\lambda^{(k)} \in [\lambda_{\min}, \lambda_{\max}] \subset (0, 1]$ is selected so that

$$\left\|F\left(x^{(k)} + \lambda^{(k)} s^{(k)}\right)\right\| \le \left(1 - \alpha \lambda^{(k)}\right) \|F(x^{(k)})\|,$$

where the two parameters $\lambda_{\min}$ and $\lambda_{\max}$ act as safeguards, which are required for strong global convergence, and the parameter $\alpha$ is used to assure that the reduction of $\|F\|$ is sufficient. Here a quadratic linesearch technique [8] is employed to determine the step length $\lambda^{(k)}$, with $\alpha = 10^{-4}$, $\lambda_{\min} = 1/10$, and $\lambda_{\max} = 1/2$.
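A quadratic backtracking linesearch in the spirit of [8] can be sketched as follows. The use of $g'(0) = -2 g(0)$, valid only for an exact Newton direction, the iteration bound, and the safeguarding of each new trial into $[\lambda_{\min}\lambda, \lambda_{\max}\lambda]$ are our simplifications:

```python
import numpy as np

def quad_linesearch(F, x, s, alpha=1e-4, lam_min=0.1, lam_max=0.5):
    """Quadratic backtracking linesearch, a sketch after [8].

    Models g(lam) = ||F(x + lam*s)||^2 by the quadratic interpolating
    g(0), g'(0), and g(lam), then minimizes it; each new trial step is
    safeguarded into [lam_min*lam, lam_max*lam].  g'(0) = -2*g(0)
    assumes s is an exact Newton direction."""
    g0 = float(np.dot(F(x), F(x)))
    gp0 = -2.0 * g0
    lam = 1.0
    for _ in range(40):
        g = float(np.dot(F(x + lam * s), F(x + lam * s)))
        # sufficient decrease: ||F(x + lam*s)|| <= (1 - alpha*lam) ||F(x)||
        if np.sqrt(g) <= (1.0 - alpha * lam) * np.sqrt(g0):
            return lam
        denom = 2.0 * (g - g0 - gp0 * lam)
        lam_q = -gp0 * lam ** 2 / denom if denom > 0 else lam_max * lam
        lam = min(max(lam_q, lam_min * lam), lam_max * lam)
    return lam
```

On a well-scaled problem the full step is accepted immediately; on a badly scaled one the quadratic model cuts the step back until sufficient decrease holds.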

In the implementation of the new algorithm, two more nested Newton solvers are needed. We simply use the inexact Newton method just described for all of the nonlinear solvers.

4.1. Classical inexact Newton. We first show some results using the inexact Newton methods with and without backtracking for solving the problem. In Table 4.1, we show the number of Newton iterations on three grids of size $h = 1/64$, $1/128$, and $1/256$ with four different boundary conditions, $\phi_R = 0.5, 1.0, 1.15, 1.18$. For this set of tests, the inexact Newton method without backtracking fails to converge when the grid is fine and $\phi_R$ is large, but INB converges in all cases. However, the stronger the shock wave is, the more INB iterations are needed for convergence. In Fig. 4.1 (left), we show the convergence history of INB on the three different grids. For all cases, INB converges rapidly at the beginning (the nonlinear residual is reduced by more than one order of magnitude in the first few iterations) and then stagnates for a while before exhibiting quadratic convergence. Clearly, the finer the grid, the longer the stagnation period becomes. To understand how INB updates the intermediate solution during the stagnation period, we focus on the case with grid size equal to $1/128$. INB takes 223 steps to converge, and 11 selected Mach curves corresponding to the computed velocities are shown in Fig. 4.1 (right). It is interesting to observe that at most grid points the solution convergence happens after the second INB iteration, and the rest of the INB iterations are devoted exclusively to grid points near the shock. Note that, practically speaking, after the second INB iteration, the Newton corrections are needed only in the neighborhood of the shock, but the Newton calculations (including the nonlinear residual evaluation and the Jacobian solve) are actually carried out over the whole computational domain. This is clearly a waste of computation!


To further understand the situation from an algebraic viewpoint, we partition the interval $\Omega = (0.0, 2.0)$ into three subintervals, $\Omega_1 = (0.0, 0.8)$, $\Omega_2 = (0.8, 1.3)$, and $\Omega_3 = (1.3, 2.0)$, with the middle interval containing the shock. Correspondingly, we partition the nonlinear vector-valued function $F(x)$ into three pieces: $F_1(x)$ for the subinterval to the left of the shock neighborhood, $F_2(x)$ for the subinterval containing the shock neighborhood, and $F_3(x)$ for the subinterval to the right of the shock neighborhood. Note that the solution components at the grid points in $\Omega_1 \cup \Omega_3$ represent the good components, while the ones in $\Omega_2$ correspond to the bad components. In Fig. 4.2, we show the residual of a test run on a $h = 1/128$ grid using INB. We include the history of the complete residual $\|F\|$, the smooth part of the residual $\sqrt{\|F_1\|^2 + \|F_3\|^2}$, and the non-smooth part of the residual $\|F_2\|$. It is important to note that the residual is completely dominated by $F_2$; the two curves virtually sit on top of each other in Fig. 4.2.


4.2. One-level INB-NE. On the other hand, in Fig. 4.3, we show the residual of the same test case on a $h = 1/128$ grid as in Fig. 4.2, now using the one-level INB-NE algorithm (INB-NE1), in which the bad components near the shock are eliminated with an inner Newton iteration. It is clear that, after the elimination, the component $F_2$ is no longer the dominant term in the overall residual, and the outer Newton method takes only 5 iterations to converge.


When using INB-NE1, a key question is how to properly pick the bad components. In Table 4.2, we show the number of iterations for different choices of the "bad" interval. Note that the shock is located near the point $x = 1.2$. As mentioned before, solving the local problem exactly is essential for the fast convergence of INB-NE1. In practice, from our numerical experience, the elimination calculation has to be carried out to a certain degree of accuracy; otherwise, INB-NE1 may fail to converge. Hence, for the results in Table 4.2, the subspace nonlinear system is solved with a relative residual reduction stopping condition, and the subdomain Jacobian system is solved by GMRES without preconditioning, also with a relative stopping condition. Table 4.2 suggests that the appropriate subinterval of "bad" components should include all grid points near the location of the duct throat and the shock.

To make the INB-NE algorithm more efficient, it is wise to sometimes switch from the inner elimination iteration back to the outer INB iteration. This is controlled by the parameter $\varepsilon_3$ in Algorithm 2.2. In Table 4.3, we show the number of INB iterations in the bad subdomain for different values of $\varepsilon_3$. When $\varepsilon_3$ is too large, more outer iterations are needed, but below a certain value, further elimination is simply a waste of computation. In Fig. 4.4, we show the history of the Mach number distribution curves corresponding to both $x^{(k)}$ and $y^{(k)}$. In Algorithm 2.2, $x^{(k)}$ is the solution from the outer Newton iteration and $y^{(k)}$ is the modified $x^{(k)}$ with the subspace correction. Note that NE correctly detects the location of the shock at the second iteration, but it takes 77 iterations to solve the local problem. Clearly, after the subspace correction, the sequence $y^{(k)}$ converges quickly to the desired solution. Hence, the switching should take place as soon as the location of the shock is detected. For the experiments in the rest of the section, we set $\varepsilon_3 = 10^{-4}$.

Table 4.4 summarizes the number of outer Newton iterations, the average number of inner Newton iterations, and the average number of GMRES iterations for solving the global Jacobian systems in INB-NE1. We observe that once the bad components are removed, the total number of outer INB iterations stays small regardless of the size of the grid and the right boundary condition, which controls the strength of the shock.


4.3. Two-level INB-NE. When using the one-level algorithm, as discussed in the previous subsection, for some cases the subspace Newton solver may need many iterations (e.g., the cases of $h = 1/256$ with $\phi_R = 1.15$ and $1.18$ in Table 4.4), and sometimes the subspace Newton method may even fail to converge. In this situation, one may want to use the one-level algorithm recursively, i.e., further partition the subdomain into "good" and "bad" sub-subdomains and then introduce a third Newton solver in the "bad" sub-subdomain. At both levels, the subspace nonlinear systems are solved with relative residual reduction stopping conditions, and the subdomain Jacobian systems are solved by GMRES without preconditioning and with a zero initial guess. In Table 4.5, we show the number of iterations of the three nested INB iterations for different choices of subintervals at each level. How to choose an appropriate subinterval at each level is mostly empirical. From our numerical experience, the subinterval at the second level should be covered by that at the first level, and both subintervals should include the shock and the throat. Note that with this additional level of nonlinear elimination, the number of INB iterations at the first level can be kept small. Fig. 4.5 shows the history of the Mach number curves corresponding to $x$ and the corrected solution $y$. Similar to INB-NE1, with the help of the second-level NE, INB correctly detects the location of the shock at the 1st iteration, and it takes only 5 NE iterations to solve the local problem.

5. Concluding remarks and future work. When solving a nonlinear system of algebraic equations using inexact Newton methods, the convergence is often determined by a small number of equations in the system that are much more nonlinear than the others. In this paper, we developed several methods that implicitly eliminate these highly nonlinear components through approximate inner subdomain Newton iterations. The number of outer Newton iterations, which are considerably more expensive than the inner subdomain iterations, can be drastically reduced if the highly nonlinear components are correctly identified and sufficiently removed. A shocked duct flow problem was carefully studied. For this problem, all of the bad components of the nonlinear system are near the shock wave, and our numerical results showed that once these bad components are approximately removed, the number of outer Newton iterations is reduced from over 200 to just 5 for a particular example on a $h = 1/128$ grid. A two-level version of the algorithm was also introduced, combining the idea of two-level nonlinear elimination with classical inexact Newton-type methods.


The focus of the paper was on the convergence of the one-level and two-level algorithms, and we did not discuss computing time. As a future project, we will consider the extension of the algorithms to two and three space dimensions. In INB-NE, a judicious choice of the bad subspace is crucial for the fast convergence of Newton methods. In the shocked duct flow problem, as illustrated numerically in the previous section, this choice depends on the location of the shock. We know where these bad components physically are, and we observe that they do not move during the solution process; hence Algorithm 2.2 can be employed. In practice, it may not always be possible to determine beforehand which subsystem should be eliminated. Therefore, it is necessary to develop a domain decomposition version of Algorithm 2.2 [7], which extends NE, where a single local problem is considered, to multiple local problems. The new nonlinear domain decomposition based algorithm is able to identify this subspace without knowing the solution profile in advance, and it is more suitable for large-scale parallel processing.

* Received August 31, 2009. Accepted for publication March 24, 2010. Published online July 15, 2010. Recommended by M. Gander. The first two authors were supported in part by the National Science Council of Taiwan, 96-2115-M-008-007-MY2 and the last author was supported in part by the Department of Energy, DE-FC-02 06ER25784, and in part by the National Science Foundation, ACI-0305666, CCF-0634894, and CNS-0722023.


REFERENCES

[1] J. D. ANDERSON, JR., Fundamentals of Aerodynamics, McGraw-Hill, New York, NY, 1991.

[2] J. D. ANDERSON, JR., Computational Fluid Dynamics: The Basics with Applications, McGraw-Hill, New York, NY, 1995.

[3] X.-C. CAI, W. D. GROPP, D. E. KEYES, R. G. MELVIN, AND D. P. YOUNG, Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation, SIAM J. Sci. Comput., 19 (1998), pp. 246-265.

[4] X.-C. CAI AND D. E. KEYES, Nonlinearly preconditioned inexact Newton algorithms, SIAM J. Sci. Comput., 24 (2002), pp. 183-200.

[5] X.-C. CAI, D. E. KEYES, AND L. MARCINKOWSKI, Nonlinear additive Schwarz preconditioners and applications in computational fluid dynamics, Internat. J. Numer. Methods Fluids, 40 (2002), pp. 1463-1470.

[6] X.-C. CAI, D. E. KEYES, AND D. P. YOUNG, A nonlinear additive Schwarz preconditioned inexact Newton method for shocked duct flow, in Domain Decomposition Methods in Science and Engineering, N. Debit, M. Garbey, R. Hoppe, J. Periaux, D. E. Keyes, and Y. Kuznetsov, eds., CIMNE, Barcelona, 2002, pp. 343-350.

[7] X.-C. CAI, Nonlinear overlapping domain decomposition methods, in Domain Decomposition Methods in Science and Engineering XVIII, M. Bercovier, M. J. Gander, R. Kornhuber, and O. Widlund, eds., Lect. Notes Comput. Sci. Eng., Vol. 70, Springer, Heidelberg, 2009, pp. 217-224.

[8] J. DENNIS AND R. SCHNABEL, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Philadelphia, PA, 1996.

[9] S. C. EISENSTAT AND H. F. WALKER, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput., 17 (1996), pp. 16-32.

[10] F.-N. HWANG AND X.-C. CAI, Improving robustness and parallel scalability of Newton method through nonlinear preconditioning, in Domain Decomposition Methods in Science and Engineering, R. Kornhuber, R. Hoppe, D. E. Keyes, J. Periaux, O. Pironneau, and J. Xu, eds., Lect. Notes Comput. Sci. Eng., Vol. 40, Springer, Heidelberg, 2004, pp. 201-208.

[11] F.-N. HWANG AND X.-C. CAI, A parallel nonlinear additive Schwarz preconditioned inexact Newton algorithm for incompressible Navier-Stokes equations, J. Comput. Phys., 204 (2005), pp. 666-691.

[12] F.-N. HWANG AND X.-C. CAI, A class of parallel two-level nonlinear Schwarz preconditioned inexact Newton algorithms, Comput. Methods Appl. Mech. Engrg., 196 (2007), pp. 1603-1611.

[13] P. J. LANZKRON, D. J. ROSE, AND J. T. WILKES, An analysis of approximate nonlinear elimination, SIAM J. Sci. Comput., 17 (1996), pp. 538-559.

[14] D. P. YOUNG, R. G. MELVIN, M. B. BIETERMAN, F. T. JOHNSON, S. S. SAMANT, AND J. E. BUSSOLETTI, A locally refined rectangular grid finite element method: Application to computational fluid dynamics and computational physics, J. Comput. Phys., 92 (1991), pp. 1-66.

[15] Y. SAAD AND M. H. SCHULTZ, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856-869.

[16] D. P. YOUNG, W. P. HUFFMAN, R. G. MELVIN, C. L. HILMES, AND F. T. JOHNSON, Nonlinear elimination in aerodynamic analysis and design optimization, in Large-Scale PDE-Constrained Optimization, L. T. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, eds., Lect. Notes Comput. Sci. Eng., Vol. 30, Springer, New York, 2003, pp. 17-44.

FENG-NAN HWANG ([dagger]), HSIN-LUN LIN ([dagger]), AND XIAO-CHUAN CAI ([double dagger])

([dagger]) Department of Mathematics, National Central University, Jhongli City, Taoyuan County 32001, Taiwan

([double dagger]) Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80309, USA
Table 4.1. A comparison of the number of iterations of inexact Newton without backtracking (IN) and INB. "Div." means divergence.

                            IN                        INB
grid size (h)       1/64   1/128   1/256      1/64   1/128   1/256

$\phi_R = 0.50$     3      3       3          3      3       3
$\phi_R = 1.00$     6      6       6          4      4       4
$\phi_R = 1.15$     13     22      Div.       46     223     735
$\phi_R = 1.18$     14     25      Div.       83     278     1009


Table 4.2. Subinterval selection for INB-NE1: $h = 1/128$, $\varepsilon_3 = 10^{-6}$.

subinterval      iterations

[0.8, 1.0]       144
[0.8, 1.1]       92
[0.8, 1.2]       6
[0.8, 1.3]       6
[0.8, 1.4]       6
[0.8, 1.5]       8
[0.9, 1.0]       162
[0.9, 1.1]       115
[0.9, 1.2]       8
[0.9, 1.3]       7
[0.9, 1.4]       5
[0.9, 1.5]       5
[1.0, 1.1]       167
[1.0, 1.2]       9
[1.0, 1.3]       7
[1.0, 1.4]       7
[1.0, 1.5]       7
[1.1, 1.2]       40
[1.1, 1.3]       40
[1.1, 1.4]       41
[1.1, 1.5]       24
[1.2, 1.3]       208
[1.2, 1.4]       197
[1.2, 1.5]       154

Table 4.3. The inner INB iteration numbers in the INB-NE1 algorithm with different $\varepsilon_3$. Subinterval: [0.8, 1.3]. $h = 1/128$. $\phi_R = 1.15$.

                               inner INB iterations
NE iteration    $\varepsilon_3 = 10^{-2}$    $\varepsilon_3 = 10^{-4}$    $\varepsilon_3 =$

0               3                            3                            3
1               5                            5                            5
2               77                           77                           77
3               32                           32                           32
4               0                            34                           34
5               0                            0                            34
6               0


Table 4.4. A summary of the number of the outer INB iterations, the average number of inner Newton iterations per outer INB iteration, and the average number of GMRES iterations for solving the global Jacobian systems in INB-NE1 with $\varepsilon_3 = 10^{-4}$. Entries read: outer INB its, ave. inner INB its (ave. GMRES its).

$\phi_R$    subinterval    h = 1/64         h = 1/128        h = 1/256

0.50        [0.8, 1.3]     3, 3.0 (30.0)    3, 3.0 (30.0)    3, 3.0 (30.0)
1.00        [0.8, 1.3]     4, 3.8 (30.0)    4, 3.8 (30.0)    4, 3.8 (30.0)
1.15        [0.8, 1.3]     5, 10.4 (31.2)   5, 30.2 (31.2)   6, 99.8 (32.7)
1.18        [0.8, 1.7]     5, 17.6 (32.4)   5, 53.0 (32.2)   9, 212.4 (33.6)


Table 4.5. 2nd level subinterval selections for INB-NE2. The numbers in the table are the number of 1st-level NE iterations, the average number of 2nd-level INB iterations, and the average number of the inner-most INB iterations. Note that the duct throat is located at x = 1.0 and the shock is at around x = 1.2. $h = 1/128$, $\phi_R = 1.15$, and $\varepsilon_3 = 10^{-4}$.

1st level subinterval    2nd level subinterval: iterations

[0.5, 1.5]    [0.8, 1.3]: 4, 4.8, 25.8     [0.9, 1.3]: 4, 4.8, 18.7      [1.0, 1.3]: 4, 5.5, 15.7
[0.8, 1.3]    [0.85, 1.25]: 5, 6.4, 37.7   [0.95, 1.25]: 5, 4.0, 22.1    [1.05, 1.25]: 5, 5.0, 17.0
[1.0, 1.3]    [1.05, 1.15]: 7, 8.2, 17.3   [1.1, 1.15]: 7, 17.3, 11.8    [1.10, 1.25]: 7, 13.3, 23.9
COPYRIGHT 2010 Institute of Computational Mathematics

Article Details
Author: Hwang, Feng-Nan; Lin, Hsin-Lun; Cai, Xiao-Chuan
Publication: Electronic Transactions on Numerical Analysis
Article Type: Report
Date: Jan 1, 2010
