
A multigrid algorithm for solving the multi-group, anisotropic scattering Boltzmann equation using first-order system least-squares methodology.

Abstract. This paper describes a multilevel algorithm for solving the multi-group, anisotropic scattering Boltzmann equation formulated with a first-order system least-squares methodology. A [P.sub.n] - h finite element discretization is used. The resulting angle discretization of this [P.sub.n] approach does not exhibit the so-called "ray effects," but this discretization leads to a large coupled system of partial differential equations for the spatial coefficients, and, on scaling the system to achieve better approximation, the system coupling depends strongly on the material parameters. Away from the thick, low absorptive regime, a relatively robust multigrid algorithm for solving these spatial systems will be described. For the thick, low absorptive regime, where an incompressible elasticity-like equation appears, an additive/multiplicative Schwarz smoother gives substantial multigrid improvement over standard nodal smoothers. Rather than using higher-order or Raviart-Thomas finite element spaces, which lead to complicated implementation, only low-order, conforming finite elements are used. Numerical examples illustrating almost h-independent convergence rates and locking-free discretization accuracy will be given.

Key words. Boltzmann transport equation, first-order system least-squares, multigrid method. AMS subject classifications. 65N30, 65N55, 65N15.

1. Introduction. Let R x [S.sup.2] x E be the Cartesian product of a bounded domain R [subset] [R.sup.3], the unit sphere [S.sup.2], and a bounded, non-negative energy interval E. The time-independent Boltzmann transport equation is

(1.1) [[OMEGA] * [nabla] + [[sigma].sub.t](x,E)][psi](x,[OMEGA],E) = [integral] dE' [integral] d[OMEGA]' [[sigma].sub.s](x, E' [right arrow] E, [OMEGA] * [OMEGA]') [psi](x,[OMEGA]',E') + q(x,[OMEGA],E), (x,[OMEGA],E) [member of] R x [S.sup.2] x E

with boundary condition

(1.2) [psi](x,[OMEGA],E) = g(x,[OMEGA],E), n * [OMEGA] < 0, x [member of] [partial derivative]R.

This equation models the transport of particles through an inhomogeneous medium. In particular, the Boltzmann equation is well known to model the transport of neutrons/photons. In this case, [psi] is the angular flux, [[sigma].sub.t] and [[sigma].sub.s] are respectively the medium's total and scattering cross-sections ([[sigma].sub.a] := [[sigma].sub.t] - [[sigma].sub.s] is the absorption cross-section), q is the external source, and E is the energy of the particles. The integral source term describes the scattering of particles into different directions and energies.

Unfortunately, solving the linear Boltzmann equation is difficult: standard numerical schemes can be inaccurate and computationally inefficient. For example, the popular [S.sub.n] angle discretization (collocation in angle, [14]) produces the so-called ray effects which pollute the numerical solution. This pollution can be viewed mathematically as "contamination" from a poorly chosen finite element space for the angle component of the discretization-- i.e., collocation in angle is equivalent to discretization with delta basis functions, which form a poor approximating basis set. Fortunately, a [P.sub.n] discretization, which uses a better approximating basis set (i.e., spherical harmonics), eliminates these ray effects. But solving for the expansion coefficients or "lm moments" is difficult. The coefficients couple strongly with each other, creating a strongly coupled system of partial differential equations (PDE's); numerical algorithms for solving such strongly coupled systems are difficult to develop. In this paper, novel algorithms for solving this coupled system are presented. In particular, a multigrid algorithm for solving the [P.sub.n] discretization of the linear Boltzmann equation using a first-order system least-squares (FOSLS) methodology ([17]) is described. The authors are unaware of any published work or existing codes that solve the full FOSLS [P.sub.n] equations in a multilevel fashion.

This paper is an extension of the research reported in [9]. In that paper, a preconditioned conjugate gradient iteration with a block diagonal preconditioner was used to solve the system of [P.sub.n] equations. Each block of this preconditioner describes only a single diagonal lm-lm coefficient coupling, defined over the whole spatial domain, rather than the full lm-l'm' coupling. Thus, successively inverting each block of this preconditioner corresponds to successively solving only the lm-lm scalar PDE over the whole spatial domain. The numerical results presented in that paper confirm the non-scalability of this algorithm with respect to both the number of spherical harmonic terms and the number of spatial nodes used in the full discretization. This non-scalability reflects this scheme's inability to handle the strong intra- and inter-moment coupling.

In this paper, two algorithms that ameliorate some of the moment coupling are presented. One algorithm consists of a multigrid scheme for the spatial coupling of the [P.sub.n] discretization. Here, the unknowns are updated moment-wise first and then spatial-wise. In this way, for a Gauss-Seidel relaxation scheme, at each spatial node in turn, every moment is updated before going to the next spatial node, so that the full moment coupling is considered at a fixed node. Physically, since the Boltzmann equation describes the balancing of particle transfer, by solving for all the moments at a spatial node first, this relaxation somewhat enforces a local balancing of particle transfer at each spatial node. One may also use a preconditioned conjugate gradient iteration with a block diagonal preconditioner that describes the full l-l (i.e., moments lm-lm' with -l [less than or equal to] m,m' [less than or equal to] l) intra-moment coupling. Each of the diagonal blocks can be solved with a few cycles of this multigrid scheme restricted only to the l-l moment block. Comparing the convergence rates of this preconditioned conjugate gradient scheme and the above multigrid scheme will expose the relative strength of the intra- and inter-moment coupling in the [P.sub.n] equations.
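The block-diagonally preconditioned conjugate gradient iteration can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: a small random SPD matrix stands in for the FOSLS stiffness matrix, the index blocks stand in for the l-l intra-moment blocks, and exact local solves stand in for the restricted multigrid cycles.

```python
import numpy as np

# Hypothetical sketch: preconditioned conjugate gradients where the
# preconditioner exactly inverts each diagonal block of A (each block playing
# the role of one l-l intra-moment coupling block).
def pcg(A, b, blocks, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b.copy()

    def apply_prec(res):                    # block-diagonal preconditioner
        z = np.empty_like(res)
        for blk in blocks:
            z[blk] = np.linalg.solve(A[np.ix_(blk, blk)], res[blk])
        return z

    z = apply_prec(r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x
        z_new = apply_prec(r_new)
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(1)
M_ = rng.standard_normal((12, 12))
A = M_ @ M_.T + 12 * np.eye(12)             # SPD stand-in for the FOSLS system
b = rng.standard_normal(12)
blocks = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
x = pcg(A, b, blocks)
assert np.linalg.norm(A @ x - b) < 1e-8 * np.linalg.norm(b)
```

The stronger the off-block (inter-moment) coupling A carries, the more iterations such a block-diagonal preconditioner needs, which is the effect the convergence comparison above is designed to expose.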

A system projection multilevel algorithm using the above Gauss-Seidel smoother performs well for parameter regimes that are thick and substantially absorptive. In the thick, low absorptive regime, or so-called region 3, the 1-1 moment system resembles a time-dependent incompressible elasticity equation. The problems with this latter equation are well known: e.g., possible finite element locking and problematic divergence-free near-nullspace components that impede standard multigrid performance ([3], [11], [21], [22]). For this incompressible equation, higher-order or Raviart-Thomas finite element spaces are often used to eliminate locking. These spaces also may induce discrete Helmholtz decompositions, which in turn may implicitly improve multigrid performance ([3], [11], [21], [22]). But a closer look at the 1-1 system reveals that the corresponding system operator behaves more like (I - [nabla][nabla]*). Locking, even for low-order conforming finite elements, now does not appear to be as severe as in the incompressible elasticity case. However, divergence-free error components are still problematic for standard multigrid solvers. These error components can be highly oscillatory yet unaffected by standard nodal smoothers. But since these components are essentially local circulations ([7], [21]), effective smoothers exist. In particular, multiplicative/additive Schwarz smoothers that simultaneously update all unknowns in the support of these local circulations can effectively damp out these error components. We will use these smoothers in the multigrid solver.
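The patch-based smoothing idea can be illustrated with a minimal sketch. The assumptions here are loud: a model SPD tridiagonal matrix stands in for the 1-1 moment block, and overlapping index patches stand in for the supports of the local circulations; the point is only the mechanism of a multiplicative Schwarz sweep, namely that all unknowns in a patch are updated simultaneously by a small local solve.

```python
import numpy as np

# Illustrative multiplicative Schwarz sweep on a model SPD system (not the
# paper's operator): each overlapping patch is solved exactly against the
# current residual, updating all of its unknowns at once.
def multiplicative_schwarz_sweep(A, b, x, patches):
    for p in patches:
        r = b - A @ x                       # fresh residual before each local solve
        x[p] += np.linalg.solve(A[np.ix_(p, p)], r[p])
    return x

n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)  # SPD model matrix
b = np.ones(n)
x = np.zeros(n)
patches = [list(range(i, min(i + 4, n))) for i in range(0, n, 2)]  # overlap of 2
for _ in range(50):
    x = multiplicative_schwarz_sweep(A, b, x, patches)
assert np.linalg.norm(b - A @ x) <= 1e-8 * np.linalg.norm(b)
```

Because each local solve minimizes the energy norm of the error over the patch subspace, the sweep is a convergent subspace-correction iteration for any SPD matrix; the payoff in the paper's setting is that oscillatory divergence-free components, invisible to nodal smoothers, live inside such patches.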

This paper proceeds as follows. In section 2, a brief presentation of the notation and functional setting used throughout this paper is given. In section 3, the FOSLS theory developed in [17] for the mono-energetic, isotropic scattering Boltzmann equation is reviewed. This theory shows that, locally away from the material interfaces, by appropriately scaling the system of PDE's, the second-order moment coupling essentially describes the whole coupled system of PDE's. This fact will be used to develop our numerical schemes. In section 4, the [P.sub.n] - h finite element discretization for the FOSLS formulation is developed. The system of PDE's is explicitly described, and from this description, one can observe the difficulties in region 3. In section 5, a multigrid scheme is described for the mono-energetic, isotropic scattering case. Multigrid components (relaxation and coarse grid correction) and methods of homogenization of the fine grid material and scaling coefficients will be examined. In subsection 5.1, an algorithmic extension to the full multi-group, anisotropic scattering case will be given. Section 6 presents some numerical results. Computational scaling studies for the mono-energetic, isotropic scattering case will be presented for both the multigrid and preconditioned conjugate gradient schemes. For the multigrid scheme, these results show good scalability with respect to the number of spatial nodes and linear growth with respect to the number of moments. For the preconditioned conjugate gradient scheme, these results show mild non-scalability with respect to the number of spatial nodes and linear growth with respect to the number of moments. This difference signals a spatially smooth inter-moment coupling error mode that is not handled by the latter scheme. Section 6 also will present multigrid and discretization convergence results for region 3 problems and for a full multi-group, anisotropic scattering problem. These latter results, together with the analysis in Section 4, indicate that locking may not be severe for realistic Boltzmann transport problems.

2. Notation. We briefly present some of the notation and functional setting used throughout this paper. First, for any non-negative integer s, let [H.sup.s](R) denote the usual Sobolev space of order s defined over R and with norm denoted by [[parallel]*[parallel].sub.s,R] (cf. [1]). For s = 0, the [L.sup.2](R) norm is denoted by [[parallel]*[parallel].sub.R]. When it is obvious that the norm is defined over R, subscript R will be omitted. Occasionally, we will need a general Hilbert space X. Its norm will be denoted by [[parallel]*[parallel].sub.X].

Further notations and definitions are needed for the FOSLS functional. This functional involves an [L.sup.2] term defined over R x [S.sup.2]; we denote this norm by [[parallel]*[parallel].sub.R,[OMEGA]]. This functional is also defined over the Sobolev space

V := {v [member of] [L.sup.2](R x [S.sup.2]) : [([S.sup.-1][OMEGA] * [nabla]v, [OMEGA] * [nabla]v).sub.R,[OMEGA]] + [(Tv, v).sub.R,[OMEGA]] < [infinity]}

with norms

[[parallel]v[parallel].sup.2.sub.V] := [([S.sup.-1][OMEGA] * [nabla]v, [OMEGA] * [nabla]v).sub.R,[OMEGA]] + [(Tv, v).sub.R,[OMEGA]]

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Here, [S.sup.-1] and T are linear operators that essentially act on the angular variable of a function [psi](x,[OMEGA]).

To analyze the [P.sub.n] equations, we will derive several relations involving the normalized spherical harmonics

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [theta] and [phi] are related to [OMEGA] by [OMEGA] = (sin[theta] cos[phi], sin[theta] sin[phi], cos[theta]), and where [p.sub.l,m] is the lm'th associated Legendre polynomial ([15]). To notationally simplify these derivations, we will use the Dirac bra-ket notation ([20]). Considering the linear vector space generated by [{[Y.sub.lm]}.sub.lm], vector element [Y.sub.lm] will be denoted by

|lm > the ket vector.

Its dual vector is denoted by

< lm| the bra vector.

Given a linear operator A acting on ket |lm >, the [L.sup.2]([OMEGA]) inner product between A|lm > and < l'm'| is denoted

< l'm'|A|lm >.

Finally, since [{[Y.sub.lm]}.sub.lm] forms a complete orthonormal set, we note that the completeness property

I = [summation over (lm)] |lm >< lm|

holds.
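In a finite-dimensional analogue, the completeness property is simply the statement that the outer products of an orthonormal basis sum to the identity. A minimal numpy sketch (the 6-dimensional basis here is hypothetical, generated by a QR factorization):

```python
import numpy as np

# Finite-dimensional analogue of I = sum_lm |lm><lm| : for any orthonormal
# basis {q_k}, the outer products |q_k><q_k| sum to the identity matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # columns form an orthonormal basis
I_sum = sum(np.outer(Q[:, k], Q[:, k]) for k in range(6))
assert np.allclose(I_sum, np.eye(6))
```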

3. Theory. For self-containment, we review some of the existing theory for the FOSLS formulation of (1.1)-(1.2). Except for the scaling operator for anisotropic scattering, which was communicated to us by T. Manteuffel ([18]), these results were derived in ([17]).

Theory for the FOSLS formulation of (1.1)-(1.2) has been developed only for the mono-energetic, isotropic scattering form of the Boltzmann equation. This simplified form of the Boltzmann equation is obtained by assuming the approximate energy separability of [psi] and taking a truncated Legendre series expansion of [[sigma].sub.s] ([14]):

(i) [psi] (x,[OMEGA],E) [approximately equal to] f(E) [[psi].sup.g] (x,[OMEGA]), [E.sub.g] < E < [E.sub.g+1], g = 1, ..., N, where f is a normalization function and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(ii)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [p.sub.j] is the j'th Legendre polynomial and [Y.sub.jm] is the jm'th spherical harmonic ([14], [15], [20]).

A simple calculation then shows that (1.1) becomes a system of PDE's for the group fluxes [[psi].sup.g](x, [OMEGA]) with matrix operator

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Here, the superscripts in the cross-section coefficients denote the g-g' energy group coupling, and [P.sub.jm] is the spherical harmonic projection onto [Y.sub.jm]. In particular, for a single energy group and isotropic scattering ([[sigma].sub.s,j] = 0, j = 1, ..., M), suppressing the super/subscripts, the boundary value problem is

(3.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where the scattering term is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For simplicity, we will assume that R is of unit diameter.

In the FOSLS formulation of (3.1), the Boltzmann operator is rewritten with the absorption cross-section

(3.2) L : = [OMEGA] * [nabla] + [[sigma].sub.t] (I - P) + [[sigma].sub.a] P = [OMEGA] * [nabla] + T,

where T := [[sigma].sub.t] (I - P) + [[sigma].sub.a]P. Now introducing the scaling operator

(3.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with inverse

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

the space-angle FOSLS formulation is to minimize the scaled least-squares functional

(3.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

over an appropriate Sobolev space. Note that because of the boundary integral in the least-squares functional, the inflow boundary condition need not be enforced on this Sobolev space.

[FIGURE 3.1 OMITTED]

The appropriate Sobolev space is V. It was shown in [17] that F is equivalent to the [V.sub.1]-norm over V. Thus, functional

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

is equivalent to

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

That is, the first-order terms [([S.sup.-1]T[psi], [OMEGA] * [nabla][psi]).sub.R,[OMEGA]] and [([S.sup.-1][OMEGA] * [nabla][psi], T[psi]).sub.R,[OMEGA]] are majorized by the second-order term [([S.sup.-1][OMEGA] * [nabla][psi], [OMEGA] * [nabla][psi]).sub.R,[OMEGA]].

Minimizing F over V is effectively solving the variational equation

(3.5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for all v [member of] V. Because of the norm equivalence, one essentially needs to develop an effective solver or preconditioner for the discrete system corresponding to the bilinear form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

A scalable solution method for the minimization of the least-squares functional will require a scalable solver for this latter system.

For the multi-group, anisotropic scattering case, the FOSLS formulation is generalized by using the scaling operator

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

in (3.4) ([18]).

4. Spherical Harmonic-h ([P.sub.n] - h) Finite Element Discretization. Two of the advantages of a FOSLS formulation are that it leads to symmetric positive-definite linear systems, and that it is endowed with a computable a posteriori error measure ([8]). For the Boltzmann equation, the symmetric positive-definiteness allows efficient linear system solvers such as multigrid and preconditioned conjugate gradient schemes to be used on the [P.sub.n] discretization of the FOSLS variational form, and the a posteriori error measure leads to good, simple local grid refinement strategies. Indeed, standard Galerkin [P.sub.n] discretizations of the Boltzmann equation lead to non-symmetric linear systems that are difficult to solve efficiently, and they do not naturally lead to simple computable error measures. There are efficient Petrov-Galerkin formulations of the [S.sub.n] discretization, but this discretization suffers from the ray effect in the thin region and in thick regions when the source is close to the points of observation (e.g., points along the boundary). Nevertheless, the [P.sub.n] FOSLS method is not itself immune from problems, as will be shown later. But even with these difficulties, the [P.sub.n] FOSLS method is a scheme that can handle all parameter regimes, with the above attractive FOSLS features.

The [P.sub.n] discretization consists of taking a truncated spherical harmonic expansion of the angular flux:

(4.1) [psi](x, [OMEGA]) [approximately equal to] [[PSI].sub.N](x, [OMEGA]) = [N.summation over (l=0)] [l.summation over (m=-l)] [[psi].sub.lm](x)[Y.sub.lm]([OMEGA]).

The [[psi].sub.lm]'s are the moments or generalized Fourier coefficients. We will consider only the mono-energetic, isotropic scattering problem. Substituting [[PSI].sub.N] into bilinear form a(*, *) and testing it against v(x)[Y.sub.l'm']([OMEGA]), l' = 0, ..., N and m' = -l', ..., l', a semi-discretization is obtained. Now, because P acts only on the angular variable, we have

[I - P][[psi].sub.lm](x)[Y.sub.lm]([OMEGA]) = [[psi].sub.lm](x)[(I - P)[Y.sub.lm]([OMEGA])],

and so, T and [S.sup.-1] simply project the zero and non-zero moments differently. Moreover, because of the [V.sub.1]-norm equivalence, to analyze this semi-discrete system, only the zeroth-order and second-order terms need to be considered.
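Before turning to these terms, the truncation (4.1) itself can be illustrated in an azimuthally symmetric analogue, where the spherical harmonics reduce to Legendre polynomials in mu = cos(theta). The following numpy sketch (the test function exp(mu) and the truncation order are hypothetical choices) computes the moments by Gauss-Legendre quadrature and checks the truncated reconstruction:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Azimuthally symmetric analogue of (4.1): expand psi(mu) = exp(mu) in
# Legendre polynomials and truncate at order N.
N = 8
mu, w = L.leggauss(32)                      # Gauss-Legendre nodes and weights on [-1,1]
psi = np.exp(mu)
# moment l:  psi_l = (2l+1)/2 * integral of psi(mu) P_l(mu) over [-1,1]
moments = np.array([(2 * l + 1) / 2 * np.sum(w * psi * L.legval(mu, np.eye(N + 1)[l]))
                    for l in range(N + 1)])
psi_N = L.legval(mu, moments)               # truncated reconstruction sum_l psi_l P_l
assert np.max(np.abs(psi_N - psi)) < 1e-6   # rapid convergence for smooth psi
```

The zeroth moment here equals sinh(1), the mean of exp(mu) times the normalization, and the fast decay of the higher moments is what makes low-order P_n truncations accurate for smooth angular fluxes.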

For the zeroth-order term, since T acts only on the angular variable, we have

(4.2) [(T[[psi].sub.N], v[Y.sub.l'm']).sub.R,[OMEGA]] = [summation over (lm)] < [Y.sub.lm] |T| [Y.sub.l'm'] > [([[psi].sub.lm], v).sub.R],

where < *|A|* > is the bra-ket notation for the angular inner product with operator A acting on ket |* >, and where [(*, *).sub.R] is the spatial inner product over R. For the second-order term, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(4.3) = [3.summation over (i=1)] [3.summation over (j=1)] [summation over (lm)] < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] > [([[psi].sub.lm,i], [v.sub.,j]).sub.R].

Here, i and j denote spatial differentiation. Note that the sparsity pattern of the second-order stiffness matrix depends on both the spatial differentiation operators and the moment coupling created through

< [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] >.

Consider the diagonal lm-lm element of < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] >. This element can be viewed as a full 3 x 3 tensor describing the "diffusive" interaction of moment [[psi].sub.lm] with itself. Viewed this way, < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] > is a [(N + 1).sup.2] x [(N + 1).sup.2] block matrix of 3 x 3 tensors, with each lm-l'm' tensor describing the spatial anisotropy coupling between [[psi].sub.lm] and [[psi].sub.l'm']. Fortunately, this block matrix of tensors has some structure. To see this, the completeness property

[summation over (l"m")] |[Y.sub.l"m"] >< [Y.sub.l"m"]| = I

of spherical harmonics ([20]) is needed. Applying this identity twice, we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

But [S.sup.-1] simply scales the ket |[Y.sub.l"m"] > by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(cf., equation (3.3)). The orthogonality of spherical harmonics then implies

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and so,

(4.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [c.sub.l"m"] = [c.sub.1] if l" [not equal to] 0 and [c.sub.00] = [c.sub.2]. Moreover, it can be shown that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

only when l" = l [+ or -] 1. Thus, < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] > is a weighted product of two block tridiagonal matrices, which implies that it is block pentadiagonal. In fact, further properties of spherical harmonics show that this block pentadiagonal matrix is nonzero only when l = l' or l = l' [+ or -] 2. Hence, the even and odd moments decouple in the second-order term. Figure 4.1 illustrates this structure for N = 4, where the 'x' blocks are the non-zero block entries and the small diagonal blocks are the lm-lm 3 x 3 tensors. Finer structure of this block pentadiagonal matrix can be found by using additional properties of the spherical harmonics (with respect to m).

[FIGURE 4.1 OMITTED]
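These selection rules can be checked numerically in the axially symmetric analogue, where [[OMEGA].sub.z] acts as multiplication by mu on Legendre polynomials. This sketch takes [S.sup.-1] to be the identity, so it only illustrates the tridiagonal-times-tridiagonal mechanism, not the actual scaled operator:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Axially symmetric sketch of the selection rules: in the orthonormal Legendre
# basis, multiplication by mu (the analogue of Omega_z) couples only l'' = l +/- 1,
# so a product of two such operators is pentadiagonal and decouples even/odd l.
N = 6
mu, w = L.leggauss(2 * N + 2)               # quadrature exact for the degrees involved
P = np.array([np.sqrt((2 * l + 1) / 2) * L.legval(mu, np.eye(N + 1)[l])
              for l in range(N + 1)])       # rows: orthonormal Legendre polynomials
A = np.einsum('q,iq,q,jq->ij', w, P, mu, P)  # A[l,l'] = < P_l | mu | P_l' >
for l in range(N + 1):
    for lp in range(N + 1):
        if abs(l - lp) != 1:
            assert abs(A[l, lp]) < 1e-12    # tridiagonal: couples only l' = l +/- 1
B = A @ A                                   # product of two tridiagonal couplings
for l in range(N + 1):
    for lp in range(N + 1):
        if abs(l - lp) not in (0, 2):
            assert abs(B[l, lp]) < 1e-12    # pentadiagonal; even and odd l decouple
```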

Now, assume a finite element discretization of the spatial component. Using the test function [b.sub.[beta]](x)[Y.sub.l'm']([OMEGA]), where [{[b.sub.[beta]]}.sub.[beta]] is a basis set for the spatial finite element space, the second-order term becomes

(4.5) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Here, if [b.sub.[alpha]] at spatial node [alpha] is the standard hat function, then [[psi].sub.lm,[alpha]] is the value of the lm moment at that node. Using the structure of < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] >, the structure of the full discrete second-order term is also block pentadiagonal, with total size M[(N + 1).sup.2] x M[(N + 1).sup.2], where M is the number of spatial nodes. Corresponding to each 3 x 3 tensor of < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] > is an M x M submatrix describing the discretized spatial coupling of moment lm to l'm'. Alternatively, assuming R to be tessellated into elements and assuming {[b.sub.[alpha]]} to be trilinear finite elements, the second-order term can be re-ordered to have a 27 block stripe structure corresponding to the 27 point stencil of the spatial differentiation operator. Each block on any stripe gives the < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] > coupling at a spatial point. Such an ordering is better for computation.

Since bilinear form b(*, *) also contains (4.3), an effective solver or preconditioner must be able to efficiently invert this complicated linear system. However, for some parameter regimes, a more sophisticated spatial discretization and a non-standard multigrid scheme may be needed. A problem arises because scaling coefficients [c.sub.1] and [c.sub.2] may differ by orders of magnitude. Thus, on one hand, the scaling operator leads to the correct asymptotic limiting equation in these regimes ([16], [17]), but on the other, a complicated discretization and multigrid scheme may be required.

To see the scaling problem, from (4.4), we see that only the l"m" = 00 column and row of < [Y.sub.lm] |[[OMEGA].sub.i]| [Y.sub.l"m"] > and < [Y.sub.l"m"] |[[OMEGA].sub.j]| [Y.sub.l'm'] >, respectively, are scaled by [square root of [c.sub.2]]. All other rows and columns are scaled by [square root of [c.sub.1]]. Because < [Y.sub.lm] |[[OMEGA].sub.i]| [Y.sub.l"m"] > and < [Y.sub.l"m"] |[[OMEGA].sub.j]| [Y.sub.l'm'] > are block tridiagonal, and because < [Y.sub.lm] |[[OMEGA].sub.i][S.sup.-1][[OMEGA].sub.j]| [Y.sub.l'm'] > is nonzero only when l' = l or l' = l [+ or -] 2, only diagonal moment blocks l-l = 0-0 and l-l = 1-1 of the second-order term can contain [c.sub.2]-scaled terms. In particular, for regions 1, 2, and 3 respectively, these blocks are

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

([17]). Block l-l = 0-0 is Laplacian, so it poses no problems. However, block l-l = 1-1 contains the grad-div operator [nabla][nabla]*, whose nullspace consists of divergence-free functions. In particular, in regions 2 and 3, when [[sigma].sub.t] [much greater than] 1 and [[sigma].sub.a] [approximately equal to] 1/[[sigma].sub.t], since the 1-1 block is majorized by the grad-div operator, divergence-free components create problems for standard iterative solvers.

These problems remain even when both the zeroth and second-order terms of b(*, *) are considered. Now the diagonal blocks in regions 2 and 3 are

(4.6) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

(4.7) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

These divergence-free components are approximate eigenfunctions of the l-l =1-1 block corresponding to eigenvalue [[sigma].sub.t].

To further expose the difficulties of the 1-1 block, one can compare it to a semi-discrete form of a time-dependent incompressible elasticity equation:

(4.8) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with c = O(1/[DELTA]t) and [lambda] [much greater than] 1. Not only do divergence-free error components complicate the system solver, but the locking effect also degrades uniform discretization convergence with respect to [lambda] ([4], [5], [6]). Assuming conforming piecewise linear finite elements and full [H.sup.2]-regularity, this non-uniformity can be anticipated from the usual error estimate

(4.9) [[parallel]u - [u.sup.h][parallel].sub.1] [less than or equal to] C([lambda])h[[parallel]u[parallel].sub.2]

with constant C([lambda]) dependent on [lambda]. But to examine locking more precisely, let [a.sub.0] be a continuous, coercive bilinear form defined over a Hilbert space X (i.e., [alpha][[parallel]v[parallel].sup.2.sub.X] [less than or equal to] [a.sub.0](v, v) [less than or equal to] [beta][[parallel]v[parallel].sup.2.sub.X] [for all]v [member of] X), let B : X [right arrow] [L.sup.2] be a continuous mapping, and consider the weak equation

[a.sub.0]([u.sub.[lambda]], v) + [lambda](B[u.sub.[lambda]], Bv) = (f, v) [for all]v [member of] X.

Locking occurs over the finite element space [X.sup.h] if

(4.10) [X.sup.h] [intersection] N(B) = {0}

and if

(4.11) [[parallel][v.sup.h][parallel].sub.X] [less than or equal to] C(h)[parallel]B[v.sup.h][parallel] [for all][v.sup.h] [member of] [X.sup.h].

For (4.8), X = [H.sup.2](R)[intersection][H.sup.1.sub.0] (R),

[a.sub.0]([u.sub.[lambda]], v) = ([nabla][u.sub.[lambda]], [nabla]v) + (c[u.sub.[lambda]], v),

and B = [nabla]*. Conditions (4.10)-(4.11) guarantee that for any fixed h and sufficiently large [lambda], there is a v [member of] [H.sup.2](R)[intersection][H.sup.1.sub.0] (R) satisfying

[cI - [DELTA] - [lambda][nabla][nabla]*]v = [f.sub.v]

such that the relative error is bounded below by a constant independent of h:

(4.12) C [less than or equal to] [[parallel]v - [v.sup.h][parallel].sub.1]/[parallel][f.sub.v][parallel].

Following the technique in [6], one can show that v is divergence-free with [[parallel]v[parallel].sub.1] > 0. Hence, since the solution of (4.8) becomes more incompressible as a function of [lambda], discretization convergence will not be uniform in [lambda].

Note that for a divergence-free function v with nonzero [H.sup.1]-norm, [f.sub.v] is nonzero and independent of [lambda]. Using (4.9) and (4.12), we have

(4.13) C [less than or equal to] [[parallel]v - [v.sup.h][parallel].sub.1]/[parallel][f.sub.v][parallel] [less than or equal to] [C.sub.2]C([lambda])h[[parallel]v[parallel].sub.2]/[parallel][f.sub.v][parallel],

where [C.sub.2] is independent of [lambda]. From (4.13), we see that the severity of locking can be reduced if [parallel][f.sub.v][parallel] depends on [lambda] similarly as C([lambda]) does.

Now consider the 1-1 block in (4.7) or (4.6) when [[sigma].sub.t] [approximately equal to] 1/[[sigma].sub.a] [much greater than] 1. In either case, one has the approximate form

(4.14) [[c.sub.1][[sigma].sup.2.sub.t]I - [DELTA] - [c.sub.2][[sigma].sup.2.sub.t][nabla][nabla]*]u = [[[sigma].sup.2.sub.t]([c.sub.1]I - [c.sub.2][nabla][nabla]*) - [DELTA]]u = [[sigma].sub.t]f := [f.sub.u].

Since a FOSLS [P.sub.n] formulation of (3.1) leads to a "displacement" formulation of (4.14),

([nabla]u,[nabla]v) + [[sigma].sup.2.sub.t][([c.sub.1]u, v) + ([c.sub.2][nabla] * u, [nabla] * v)] = ([f.sub.u], v),

we have [a.sub.0](u, v) = ([nabla]u, [nabla]v) and Bu = [([c.sub.1]u, [c.sub.2][nabla] * u).sup.t].

Also, using linear finite elements, we have (4.9) with

(4.15) C([[sigma].sup.2.sub.t]) [approximately equal to] [[sigma].sup.2.sub.t].

Clearly, (Bu, Bv) corresponds to a scaled H(div) norm of u, and hence N(B) = {0}; consequently, [f.sub.u] must depend on [[sigma].sup.2.sub.t]. Indeed, from (3.5), the right-hand side associated with the 1-1 block is

f = [[sigma].sub.t]/3 [nabla][q.sub.00] + lower order [[sigma].sub.t] terms,

where [q.sub.00] is the zeroth moment of the external source q. For neutronic problems, [parallel][nabla][q.sub.00][parallel] = O(1) ([13], [16]) in region 3. Thus,

(4.16) [f.sub.u] = [C.sub.3][[sigma].sup.2.sub.t][??].

Substituting (4.15)-(4.16) into (4.13), we have

(4.17) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

But, even though (4.14) and (4.16) imply that u approximately satisfies

[[c.sub.1]I - [c.sub.2][nabla][nabla]*]u = [C.sub.3][??],

the upper bound in (4.17) does not imply uniform convergence. Consider the case when u = ([u.sup.1], [u.sup.2], [u.sup.3]) satisfies any of the following conditions:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For this u, [[parallel]u[parallel].sub.2] can be O([[sigma].sup.2.sub.t]).

5. A Multigrid Algorithm. The solution procedure involves minimizing F over an appropriate subspace of V. To accomplish this, a Rayleigh-Ritz finite element method is employed in the spatial discretization. Let [T.sub.h] be a triangulation of domain R into elements of maximal length h = max{diam(K) : K [member of] [T.sub.h]}, and let [V.sup.h] be a finite dimensional subspace of V having the approximation property

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for all v [member of] [[H.sup.2](R) x [L.sup.2]([S.sup.2])]. The [P.sub.n] - h finite element space is then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

The discrete fine grid minimization problem is

* Find [[psi].sup.h.sub.N] [member of] [V.sup.h.sub.N] such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Equivalently, the discrete problem is

* Find [[psi].sup.h.sub.N] [member of] [V.sup.h.sub.N] such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for all [v.sup.h.sub.N] [member of] [V.sup.h.sub.N].

(Although one actually solves for the moments [[psi].sup.h.sub.lm], we will not make this notational distinction in the algorithm.)

A standard projection multilevel scheme for solving either discrete problem is fairly straightforward. Let

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

be a conforming sequence of coarsenings of triangulation [T.sub.h]; let

[V.sup.m.sub.N] [subset] [V.sup.m-1.sub.N] [subset] ... [subset] [V.sup.2.sub.N] [subset] [V.sup.1.sub.N] := [V.sup.h.sub.N]

be a set of nested coarse grid subspaces of [V.sup.1.sub.N], the finest subspace, and let

[B.sup.j] = {[b.sup.j.sub.v,lm]}

be a suitable (generally local in space) basis set for [V.sup.j.sub.N]. (For example, [b.sup.j.sub.v,lm] = [b.sup.j.sub.v][Y.sub.lm], where [b.sup.j.sub.v] is the standard piecewise linear hat function.) Given an initial approximation [[psi].sup.h.sub.N] on level j, the level j relaxation sweep consists of the following cycle:

* for each v = 1, 2, ..., [M.sub.j] ([M.sub.j] being the number of spatial nodes on grid j), and for each lm, 0 [less than or equal to] l [less than or equal to] N, -l [less than or equal to] m [less than or equal to] l, set

[[psi].sup.h.sub.N] [left arrow][[psi].sup.h.sub.N] + [alpha][b.sup.j.sub.v,lm],

where [alpha] is chosen to minimize

(5.1) F ([[psi].sup.h.sub.N] + [alpha][b.sup.j.sub.v,lm]; q, g).

Since F ([[psi].sup.h.sub.N] + [alpha][b.sup.j.sub.v,lm]; q, g) is a quadratic function of [alpha], this local minimization process is simple and is, in fact, a Gauss-Seidel iteration. Moreover, note that the loops range over the moments first, so that all moments are updated at a fixed spatial node before going on to the next spatial node. Note also that the search direction may involve more than one element of [B.sup.j]. For such subspace iteration, denoting the direction by [b.sup.j.sub.v], one then needs to find the [alpha] that minimizes

F ([[psi].sup.h.sub.N] + [alpha][b.sup.j.sub.v]; q, g).

A good choice for [b.sup.j.sub.v] is the subset

{[b.sup.j.sub.v] [Y.sub.lm] : 0 [less than or equal to] l [less than or equal to] N, -l [less than or equal to] m [less than or equal to] l, v fixed}

which results in a block Gauss-Seidel iteration that simultaneously updates all the moments at node v.
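As a sketch of this local minimization, consider a generic discrete least-squares functional F(psi) = ||A psi - f||^2; minimizing F along a block of basis directions then reduces to a small normal-equations solve. The code below is an illustrative sketch in this simplified algebraic setting, not the authors' implementation; the matrix A, right side f, and block partition are assumptions.

```python
import numpy as np

def gauss_seidel_sweep(A, f, psi, blocks):
    """One block Gauss-Seidel sweep minimizing F(psi) = ||A psi - f||^2.

    Each entry of `blocks` is a list of unknown indices updated together
    (standing in for all moments lm at one spatial node). For each block,
    the correction c minimizing F(psi + I_B c) solves the small
    normal-equations system (Ab^T Ab) c = Ab^T r, since F is quadratic."""
    for block in blocks:
        r = f - A @ psi              # current residual
        Ab = A[:, block]             # action of A on the block's directions
        c = np.linalg.solve(Ab.T @ Ab, Ab.T @ r)
        psi[block] += c
    return psi
```

Because each block update exactly minimizes the quadratic functional along its subspace, F decreases monotonically over the sweep, mirroring the block Gauss-Seidel character of the relaxation described above.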

Now, given a fine grid approximation [[psi].sup.h.sub.N] on level 1, the level 2 coarse grid problem is to find a correction [[psi].sup.2h.sub.N] such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Having obtained this correction, [[psi].sup.h.sub.N] is updated according to

[[psi].sup.h.sub.N] [left arrow] [[psi].sup.h.sub.N] + [[psi].sup.2h.sub.N].

Applying this procedure recursively yields a multilevel scheme in the usual way.
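The recursion above can be sketched as a standard V-cycle, under the simplifying assumptions that each level's problem is given by an SPD matrix, coarse operators come from Galerkin products, and smoothing is pointwise Gauss-Seidel; the names v_cycle and gs and the argument conventions are illustrative, not the authors' code.

```python
import numpy as np

def gs(A, b, x):
    """One forward Gauss-Seidel sweep for A x = b."""
    for i in range(len(b)):
        x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

def v_cycle(level, M, P, b, x, nu=1):
    """One V(nu,nu)-cycle for the SPD systems M[level] x = b.

    M[j] is the matrix on level j (j = 0 finest); P[j] prolongates from
    level j+1 to level j, with restriction taken as P[j].T and the coarse
    operator formed as the Galerkin product P^T M P, mimicking the
    projection multilevel scheme described above."""
    A = M[level]
    if level == len(M) - 1:          # coarsest grid: solve directly
        return np.linalg.solve(A, b)
    for _ in range(nu):              # pre-smoothing
        x = gs(A, b, x)
    r = b - A @ x
    xc = v_cycle(level + 1, M, P, P[level].T @ r,
                 np.zeros(P[level].shape[1]), nu)
    x = x + P[level] @ xc            # coarse grid correction
    for _ in range(nu):              # post-smoothing
        x = gs(A, b, x)
    return x
```

Replacing the direct coarsest-level solve by a recursive call on further coarsenings yields the full multilevel scheme in the usual way.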

Away from the thick diffusive regime and for homogeneous material, this multigrid algorithm has the usual optimal multigrid efficiency. But for inhomogeneous materials, even in the thin diffusive regime, when the material and scaling coefficients have fine-scale structure, the computational efficiency of this algorithm degrades; i.e., to preserve these fine-scale structures on the coarse grid, fine-scale computation is needed on the coarser levels. For a matrix-free implementation, the total cost of this fine-scale coarse grid computation is not negligible. For this reason, these coefficients should be homogenized to coarse-scale resolution and then used on the coarse grid. Coarse grid calculations can then be performed with coarse-scale computation.

Assuming that the coefficient jumps are grid-aligned on the finest grid, a simple homogenization method that can be applied is an averaging process. For example, the material and scaling coefficients can be arithmetically and harmonically averaged, respectively ([2], [10], [19], [23]): if [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] are the coefficients in the disjoint elements [{[[mu].sup.h.sub.j]}.sup.r.sub.j=1] on grid h, and if the coarse grid element [[gamma].sup.2h] is composed of the agglomerate

[[gamma].sup.2h] = [[union].sup.r.sub.j=1] [[mu].sup.h.sub.j],

then

(5.2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(5.3) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(5.4) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Here, harmonic averaging is more suitable for the scaling coefficients because they contribute to the diffusion tensors ([10]).
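A minimal sketch of the averaging in (5.2)-(5.4), assuming the fine-grid coefficients are stored element-by-element in NumPy arrays; the function name homogenize and its argument layout are hypothetical, not from the paper.

```python
import numpy as np

def homogenize(sigma_fine, scale_fine, agglomerates):
    """Coarse-grid coefficients by averaging over element agglomerates.

    Material coefficients (e.g. sigma_t) are arithmetically averaged,
    while scaling coefficients, which enter the diffusion tensors, are
    harmonically averaged. `agglomerates` lists, for each coarse element,
    the indices of the fine elements composing it."""
    sigma_coarse = np.array(
        [np.mean(sigma_fine[idx]) for idx in agglomerates])
    scale_coarse = np.array(
        [len(idx) / np.sum(1.0 / scale_fine[idx]) for idx in agglomerates])
    return sigma_coarse, scale_coarse
```

Note that the harmonic mean is dominated by the smallest coefficient in the agglomerate, which is why it is the natural average for quantities appearing in diffusion (reciprocal) form.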

But one should note that using homogenized coefficients on coarser grids leads to a violation of a projection multilevel principle. Each coarse grid problem corresponds to a different minimization problem (i.e., non-nested bilinear forms). Thus, a coarse grid correction now is not an optimal subspace correction to the fine grid problem. In particular, for rapidly varying coefficients, it is possible for a coarse grid correction from a very coarse level to completely pollute the fine grid solution. Thus, more sophisticated homogenization schemes may be needed for complicated physics.

A sophisticated technique is also needed for region 3. Recall that the algebraically smooth error in this regime is predominantly controlled by the divergence-free error components of the 1-1 block. Since these components can be geometrically oscillatory, the smoother must eliminate the highly oscillatory ones. But since divergence-free functions are intrinsically vector quantities that are represented well only over the union of several elements, a point/nodal smoother is not sufficient. Rather, following [3], [11], and [21], an additive/multiplicative Schwarz smoother should be used. This block smoother must resolve the smallest representable circulation, or local divergence-free functions. Hence, the directions [b.sup.j.sub.v] for this smoother have small spatial support, so that the block solves in a relaxation sweep involve only small linear systems for only the 1-1 block.
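A multiplicative Schwarz sweep of this kind can be sketched algebraically as follows; the overlapping index sets stand in for the 1-1 moment unknowns on element agglomerates, and the exact local solves are placeholders for the small block systems just described. This is a sketch of the generic technique, not the paper's parallel implementation.

```python
import numpy as np

def schwarz_smoother(A, b, x, subdomains):
    """One multiplicative Schwarz sweep for A x = b.

    Each subdomain is a set of unknown indices (e.g. the 1-1 moments on
    the nodes of an eight-element agglomerate). Its local system is
    solved exactly and the correction applied before the next subdomain
    is visited, so oscillatory divergence-free components supported over
    several elements can be captured by a single block solve."""
    for dofs in subdomains:
        r = b - A @ x                        # residual after prior updates
        Aloc = A[np.ix_(dofs, dofs)]         # local principal submatrix
        x[dofs] += np.linalg.solve(Aloc, r[dofs])
    return x
```

Overlap between the subdomains is what distinguishes this from block Gauss-Seidel: an unknown shared by two subdomains is corrected twice per sweep, once in each local solve.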

We now have a well-defined multigrid algorithm for all parameter regimes and for inhomogeneous material. Using an appropriate direction in the relaxation, one may handle the local intra- and inter-moment coupling well. One can also restrict this multigrid algorithm to the l-l blocks to produce a block preconditioner. Unlike the preconditioner described in [9], this preconditioner considers the full intra-moment coupling. By comparing the performances of this preconditioned conjugate gradient method and the multigrid algorithm defined over all lm-l'm' coupling, one may be able to deduce the nature of the intra- and inter-moment coupling in the [P.sub.n] equations.

5.1. An Algorithm for the Multi-Group, Anisotropic Scattering Equation. In section 3, we saw that (1.1) can be semi-discretized into a block system with mono-energetic, anisotropic scattering equations along the diagonal. Each of these diagonal equations can be solved with either the multigrid or preconditioned conjugate gradient algorithms described in the previous section. Thus, one obtains a scheme for solving the full multi-group, anisotropic scattering Boltzmann equation by using an outer block Gauss-Seidel iteration over the groups. For the important down-scattering problems in photon/neutron applications (i.e., particles can be scattered only into lower energy groups), this block iteration becomes a back solve. For general scattering, since the differential operators majorize the integral projection operators, the block system is diagonally dominant, and so, this block Gauss-Seidel iteration should perform well.
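The outer block Gauss-Seidel over groups can be sketched as below; the direct solve of each within-group system stands in for the inner multigrid or PCG iteration, and all names and data layouts are illustrative assumptions.

```python
import numpy as np

def group_gauss_seidel(T, S, q, psi, sweeps=1):
    """Block Gauss-Seidel over energy groups for the semi-discrete system

        T[g] psi[g] = q[g] + sum_{g'} S[g][g'] psi[g'],

    where T[g] is the within-group (mono-energetic, anisotropic
    scattering) operator and S[g][g'] couples group g' into group g
    (None where there is no coupling). For pure down-scattering
    (S[g][g'] = 0 for g' > g), one sweep in order of decreasing energy
    is exact, i.e. the iteration becomes a back solve."""
    G = len(T)
    for _ in range(sweeps):
        for g in range(G):
            rhs = q[g].copy()
            for gp in range(G):
                if gp != g and S[g][gp] is not None:
                    rhs += S[g][gp] @ psi[gp]
            # stand-in for the inner multigrid/PCG solve of group g
            psi[g] = np.linalg.solve(T[g], rhs)
    return psi
```

For general scattering, repeated sweeps are needed, but since the differential operators majorize the integral projection operators, the block system is diagonally dominant and the sweep converges quickly.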

6. Numerical Experiments. The above [P.sub.n] - h finite element discretization of the FOSLS formulation was implemented. Angle integrals involving spherical harmonics were computed using analytical formulas, and the spatial moments were discretized with piecewise trilinear functions on rectangular solids. We conducted three sets of experiments. The first set examines the scalability of our code for region 1 and 2 problems. We consider both scalability with respect to the spatial meshsize and scalability with respect to the number of processors used. The second set of experiments examines region 3 problems. Since our code does not have a parallel implementation of our multiplicative Schwarz smoother, only scalability with respect to the spatial meshsize is considered. However, we also examine discretization convergence to illustrate locking-free error for region 3 problems. Finally, the third set of experiments examines a full multi-group, anisotropic scattering problem. Both multigrid convergence rates and discretization error are considered.

i) Regions 1 and 2 Scaling Studies. The goal in these experiments is to investigate scalability with respect to the number of spatial nodes and spherical harmonics, and scalability with respect to the number of processors. Only the mono-energetic, isotropic scattering equation was considered, since solving the diagonal equations in the semi-discrete multi-group, anisotropic scattering problem is the major task in the block Gauss-Seidel iteration described in Section 5.1. Also, only homogeneous material was considered. Results for the test suite Kobayashi problems ([12]) involving inhomogeneous material with large jumps will be presented in a future paper; convergence rates for these benchmark problems are very similar to the rates presented here. For the current experiments, the source terms were taken to be zero and the initial guess was random for all moments. V(1,1) cycles with "nodal moment" Gauss-Seidel relaxation were used in the multigrid and preconditioned conjugate gradient schemes. For the latter scheme, only one cycle was performed in the preconditioning solve. The stopping criterion was

[[square root of F([[psi].sup.n]; 0, 0)] - [square root of F([[psi].sup.n-1]; 0, 0)]] / [square root of F([[psi].sup.0]; 0, 0)] < [10.sup.-7].

Results are tabulated in Tables 6.1 and 6.2.
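This stopping test can be computed from a history of functional values; a minimal sketch, where converged and the history list are illustrative names:

```python
import math

def converged(F_hist, tol=1e-7):
    """Stopping test: relative change in sqrt(F) between successive
    iterates, measured against the initial functional value F_hist[0]."""
    Fn, Fnm1, F0 = F_hist[-1], F_hist[-2], F_hist[0]
    return abs(math.sqrt(Fn) - math.sqrt(Fnm1)) / math.sqrt(F0) < tol
```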

From Table 6.1, we see that the number of iterations for the region 1 case is about constant as h is refined. Thus, in region 1, the multigrid algorithm appears to be scaling well spatially. However, the number of iterations increases linearly as the number of spherical harmonic terms is increased (e.g., 9 iterations for N = 1 but 20 iterations for N = 3). This growth should not be surprising since the size of the system PDE grows quadratically in N. This convergence growth also reveals several properties of the moment coupling. First, the slower but spatially scaled convergence rate for N = 3 indicates that relaxation is not effective on some high frequencies of the system PDE. (If the slowly converging components were smooth, then the convergence rate would not scale with h.) Since nodal Gauss-Seidel takes account of the full coupling at a node, these moment-coupling high frequencies must "spread out" spatially. A block Gauss-Seidel that involves the moments over more spatial nodes may give better scaling.

Note that a smooth frequency coupling may also be creeping in as the number of spherical harmonics is increased, as indicated by the slight increase in iterations as h is refined for N = 6. This smooth frequency coupling is also more pronounced in region 2 problems, as indicated by the logarithmic growth in the number of iterations as h is refined.

Further features of the moment coupling can be deduced from the preconditioned conjugate gradient results. The overall increase in iteration counts reflects the "strength" of the inter-moment coupling, since these couplings are not handled well by a block diagonal preconditioner that considers only the intra-moment couplings. However, the peculiar behavior for N = 1 in region 2 is difficult to explain. For N = 1, the moments couple only through the boundary functional. Thus, this boundary coupling may be stronger than expected.

Processor scalability is best observed in the region 1 results. As the ratio of unknowns per processor is kept roughly constant, the time taken to perform the same amount of computation should be roughly constant. This can be observed in region 1 for N = 1, 3, where the number of iterations remains constant as h is refined; e.g., for N = 3, with the number of unknowns per processor roughly 70,000, the overall run time is roughly 500 seconds for each refinement.

ii) Region 3. For region 3, the multigrid scheme above performs poorly; the asymptotic convergence rate approaches 1. Thus, to handle the problematic divergence-free error components, a multiplicative Schwarz smoother is used on the 1-1 moments. The block solves in this smoother simultaneously update the 1-1 moments at the 27 nodes comprising an eight-element agglomerate. Table 6.3 tabulates some results. Since the asymptotic convergence rate increases from 0.56 for N = 1 to 0.68 for N = 3, the convergence rate again depends on the number of spherical harmonic terms.

To investigate locking, we consider the boundary value problem

(6.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with exact solution

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and [[sigma].sub.a] = 0.005. Relative [V.sup.1] and [H.sup.1] norms of the error in [[psi].sup.h] and the relative [H.sup.1] norm of the error in [([[psi].sup.h.sub.1,-1], [[psi].sup.h.sub.1,0], [[psi].sup.h.sub.1,1]).sup.t] are given in Table 6.4. The theory in [17] guarantees only order h convergence in the [V.sup.1] norm, as confirmed by the results in Table 6.4: as h is halved, the error in the [V.sup.1] norm is halved also. Note that, irrespective of the magnitude of [[sigma].sub.t], as h is halved, the discretization error in [([[psi].sup.h.sub.1,-1], [[psi].sup.h.sub.1,0], [[psi].sup.h.sub.1,1]).sup.t] in the [H.sup.1] norm decreases by about 60%. Since this accuracy improvement is independent of [[sigma].sub.t], we see locking-free error.

iii) Multi-group, Anisotropic Scattering. The last experiment examines the discretization error for a two-group, anisotropic scattering problem:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This is a down-scattering problem. Table 6.5 shows the multigrid convergence rates for each energy group. Again, these rates reflect spatial scaling and growth with respect to the number of spherical harmonic terms. Also, from Table 6.6, as h is halved, the [V.sup.1] norm of the error is again halved. Thus, we see order h discretization accuracy.

7. Conclusion. In this paper, we presented two system multigrid algorithms for solving the anisotropic scattering Boltzmann equation formulated as a FOSLS minimization problem. Used in the inner loop of a block-group Gauss-Seidel iteration, either algorithm gives an efficient solver for many important multi-group, anisotropic scattering Boltzmann equations. Moreover, by using a multiplicative Schwarz smoother instead of a nodal Gauss-Seidel smoother, we are able to obtain an efficient algorithm for the region 3 1-1 block. Because of the favorable scaling of this time-dependent incompressible elasticity-like problem, locking seems to be less severe than in the case of general incompressible elasticity problems. Numerical results demonstrate that locking does not occur even for low-order, conforming finite elements. Other numerical results demonstrate that these new multigrid algorithms scale better than the PCG iteration examined in [9] for region 1 and 2 problems. However, numerical results also indicate that the higher moments couple through high frequency modes. To handle these error modes better, more sophisticated smoothers need to be designed. This is the topic of current research.

REFERENCES

[1] R. A. ADAMS, Sobolev Spaces, Academic Press, 1975.

[2] R. E. ALCOUFFE, A. BRANDT, J. E. DENDY, AND J. W. PAINTER, The multi-grid method for the diffusion equation with strongly discontinuous coefficients, SIAM J. Sci. Stat. Comput., 2 (1981), pp. 430-454.

[3] D. N. ARNOLD, R. S. FALK, AND R. WINTHER, Preconditioning in H(div) and applications, Math. Comp., 66 (1997), pp. 957-984.

[4] I. BABUSKA AND M. SURI, Locking effects in the finite element approximation of elasticity problems, Numer. Math., 62 (1992), pp.439-463.

[5] D. BRAESS, Finite elements theory, fast solvers, and applications in solid mechanics, Cambridge University Press, Cambridge, 1997.

[6] S. C. BRENNER AND L. R. SCOTT, The Mathematical Theory of Finite Element Methods, Springer-Verlag, New York, 1994.

[7] F. BREZZI AND M. FORTIN, Mixed and Hybrid Finite Element Methods, Springer-Verlag, New York, 1991.

[8] Z. CAI, T. MANTEUFFEL, AND S. McCORMICK, First-order system least squares for partial differential equations: Part II, SIAM J. Numer. Anal., 34 (1997), pp. 425-454.

[9] B. CHANG AND B. LEE, Space-Angle First-Order System Least-Squares (FOSLS) for the Linear Boltzmann Equation, preprint.

[10] J. E. DENDY, Black box multigrid, J. Comp. Phys., 48 (1982), pp.366-386.

[11] R. HIPTMAIR, Multigrid method for H(div) in three dimensions, ETNA, 6 (1997), pp.133-152.

[12] K. KOBAYASHI, N. SUGIMURA, AND Y. NAGAYA, 3D radiation transport benchmarks for simple geometries with void regions, preprint.

[13] E. W. LARSEN, J. E. MOREL, AND W. F. MILLER, Asymptotic solutions of numerical transport problems in optically thick, diffusive regimes, J. Comp. Phys., 69 (1987), pp.283-324.

[14] E. E. LEWIS AND W. F. MILLER, Computational Methods of Neutron Transport, John Wiley, New York, 1984.

[15] R. L. LIBOFF, Introductory Quantum Mechanics, Holden-Day, Inc., San Francisco, 1980.

[16] T. A. MANTEUFFEL AND K. J. RESSEL, Least-squares finite-element solution of the neutron transport equation in diffusive regimes, SIAM J. Numer. Anal., 35 (1998), pp. 806-835.

[17] T. A. MANTEUFFEL, K. J. RESSEL, AND G. STARKE, A boundary functional for the least-squares finite element solution of neutron transport problems, SIAM J. Numer. Anal., 37 (2000), pp.556-586.

[18] T. A. MANTEUFFEL, private communication.

[19] J. D. MOULTON, J. E. DENDY, JR., AND J. M. HYMAN, The black box multigrid numerical homogenization algorithm, J. Comp. Phys., 142 (1998), pp.80-108.

[20] J. J. SAKURAI, Modern Quantum Mechanics, Addison-Wesley, 1994.

[21] J. SCHOBERL, Multigrid methods for a parameter dependent problem in primal variables, Numer. Math., 84 (1999), pp.97-119.

[22] P. S. VASSILEVSKI AND J. WANG, Multilevel iterative methods for mixed finite element discretizations of elliptic problems, Numer. Math., 63 (1992), pp. 503-520.

[23] P. M. D. ZEEUW, Matrix-dependent prolongations and restrictions in a blackbox multigrid solver, J. Comp. Appl. Math., 33 (1990), pp.1-27.

* Received May 18, 2001. Accepted for publication September 12, 2001. Recommended by Joel Dendy.

B. CHANG AND B. LEE ([dagger])

([dagger]) Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, P.O. Box 808 L-561, Livermore, CA 94551. email: chang1@llnl.gov, lee123@llnl.gov. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract no. W-7405-Eng-48.
TABLE 6.1

V-cycle results. Region 1: [[sigma].sub.t] = .1 and [[sigma].sub.a] = .05;
region 2: [[sigma].sub.t] = 10 and [[sigma].sub.a] = 5.

Region N h # Procs # Unks/Proc Iter Time

1 1 1/32 1 143,748 9 279
 1 1/64 8 137,313 9 261
 1 1/128 64 134,168 9 267
 1 1/256 512 132,614 9 270
 3 1/16 1 78,608 20 511
 3 1/32 8 71,874 20 472
 3 1/64 64 68,656 20 495
 3 1/128 512 67,084 20 534
 6 1/16 8 30,092 41 1,493
 6 1/32 64 27,514 43 1,607
 6 1/64 512 26,282 45 1,743
 6 1/128 512 205,444 47 10,191

2 1 1/32 1 143,748 10 313
 1 1/64 8 137,313 10 294
 1 1/128 64 134,168 10 296
 1 1/256 512 132,614 10 302
 3 1/16 1 78,608 14 359
 3 1/32 8 71,874 15 356
 3 1/64 64 68,656 17 413
 3 1/128 512 67,084 19 467
 6 1/16 8 30,092 19 698
 6 1/32 64 27,514 24 903
 6 1/64 512 26,282 29 1,142
 6 1/128 512 205,444 30 6,320

TABLE 6.2

PCG with one V(1,1) preconditioning cycle. Region 1: [[sigma].sub.t] = .1
and [[sigma].sub.a] = .05; region 2: [[sigma].sub.t] = 10 and
[[sigma].sub.a] = 5.

Region N h # Procs # Unks/Proc Iter

1 1 1/32 1 143,748 17
 1 1/64 8 137,313 18
 1 1/128 64 134,168 18
 1 1/256 512 132,614 18
 3 1/16 1 78,608 31
 3 1/32 8 71,874 32
 3 1/64 64 68,656 33
 3 1/128 512 67,084 34
 6 1/16 8 30,092 59
 6 1/32 64 27,514 60
 6 1/64 512 26,282 63
 6 1/128 512 205,444 65

2 1 1/32 1 143,748 42
 1 1/64 8 137,313 54
 1 1/128 64 134,168 60
 1 1/256 512 132,614 62
 3 1/16 1 78,608 19
 3 1/32 8 71,874 23
 3 1/64 64 68,656 26
 3 1/128 512 67,084 29
 6 1/16 8 30,092 29
 6 1/32 64 27,514 41
 6 1/64 512 26,282 52
 6 1/128 512 205,444 53

TABLE 6.3

V (1, 1) convergence rates for region 3: [[sigma].sub.t] = 10 and
[[sigma].sub.a] = 0.001.

N h Rate

1 1/16 0.36
1 1/32 0.56
1 1/64 0.56
3 1/16 0.48
3 1/32 0.64
3 1/64 0.68

TABLE 6.4

Locking-free discretization error for region 3 problems:
[[sigma].sub.t] = 10, 50, 100, and [[sigma].sub.a] = 0.005. [V.sup.1]
is the [V.sup.1] norm of the total error, [H.sup.1] is the [H.sup.1]
norm of the total error, and [H.sup.1]:1-1 is the [H.sup.1] norm of the
error in [([[psi].sup.h.sub.1,-1], [[psi].sup.h.sub.1,0],
[[psi].sup.h.sub.1,1]).sup.t].

[[sigma].sub.t] h [V.sup.1] [H.sup.1] [H.sup.1]:1-1

10 1/16 1.11e-1 1.52e-1 1.61e-1
 1/32 5.49e-2 6.33e-2 6.51e-2
 1/64 2.74e-2 2.93e-2 2.96e-2

50 1/16 1.09e-1 1.77e-1 1.92e-1
 1/32 5.45e-2 7.05e-2 7.43e-2
 1/64 2.73e-2 3.15e-2 3.24e-2

100 1/16 1.09e-1 1.82e-1 1.98e-1
 1/32 5.44e-2 7.43e-2 7.90e-2
 1/64 2.73e-2 3.39e-2 3.55e-2

TABLE 6.5

V(1, 1) convergence rate for multigroup, anisotropic
scattering problem.

Group N h Iter

1 3 1/32 13
 3 1/64 13
 3 1/128 13
 6 1/32 16
 6 1/64 18
 6 1/128 18

2 3 1/32 13
 3 1/64 13
 3 1/128 13
 6 1/32 16
 6 1/64 18
 6 1/128 18

TABLE 6.6

Relative [V.sup.1] norm of error for multigroup, anisotropic scattering
problem.

Group h N = 3 N = 6

1 1/32 3.88e-2 3.37e-2
 1/64 1.89e-2 1.63e-2
 1/128 9.39e-3 8.08e-3

2 1/32 3.88e-2 3.38e-2
 1/64 1.89e-2 1.63e-2
 1/128 9.39e-3 8.08e-3
COPYRIGHT 2003 Institute of Computational Mathematics
Author: Chang, B.; Lee, B.
Publication: Electronic Transactions on Numerical Analysis
Date: Jan 1, 2003