
The $MR^3$-GK algorithm for the bidiagonal SVD.

1. Introduction. The singular value decomposition (SVD) is one of the most fundamental and powerful decompositions in numerical linear algebra. This is partly due to its generality, since every complex rectangular matrix has an SVD, but also to its versatility, because many problems can be cast in terms of the SVD of a certain related matrix. Applications range from pure theory to image processing.

The principal algorithm for computing the SVD of an arbitrary dense complex rectangular matrix is reduction to real bidiagonal form using unitary transformations, followed by computing the SVD of the resulting bidiagonal matrix. The reduction method was pioneered by Golub and Kahan [18]; later improvements include reorganizations to do most of the work within BLAS 3 calls [1, 2, 27].

We call the problem of computing the singular value decomposition of a bidiagonal matrix BSVD. There is a long tradition of solving singular value problems by casting them into related symmetric eigenproblems. For BSVD this leads to a variety of tridiagonal symmetric eigenproblems (TSEPs). Several methods are available for solving the TSEP, including QR iteration [15, 16], bisection and inverse iteration (BI), divide and conquer [3, 22], and, most recently, the algorithm of multiple relatively robust representations [6, 7, 8], in short MRRR or $MR^3$. The latter can compute $k$ eigenpairs $(\lambda_i, q_i)$, $i \in I$, of a symmetric tridiagonal matrix $T \in \mathbb{R}^{n \times n}$ in (optimal) time $O(kn)$, and thus it is an order of magnitude faster than BI. In addition, $MR^3$ requires no communication for Gram-Schmidt reorthogonalization, which opens better possibilities for parallelization. It is therefore natural and tempting to solve the BSVD problem using the $MR^3$ algorithm, to benefit from its many desirable features. How to do so stably and efficiently is the focus of this paper.

The remainder of the paper is organized as follows. In Section 2 we briefly review the $MR^3$ algorithm for the tridiagonal symmetric eigenproblem and the requirements for its correctness. The reader will need some familiarity with the core $MR^3$ algorithm, as described in Algorithm 2.1 and Figure 2.1, to follow the arguments in the subsequent sections. In Section 3 we turn to the BSVD. We specify the problem to be solved formally, introduce the associated tridiagonal problems, and set up some notational conventions. Invoking $MR^3$ on symmetric tridiagonal matrices of even dimension that have a zero diagonal, so-called Golub-Kahan matrices, will be investigated in Section 4. Finally, Section 5 contains numerical experiments to evaluate our implementation.

The idea of using the $MR^3$ algorithm for the BSVD by considering suitable TSEPs is not new. A previous approach [19, 20, 21, 39] "couples" the three TSEPs involving the normal equations and the Golub-Kahan matrix in a way that ensures good orthogonality of the singular vectors and small residuals; see also Section 3.3.1. For a long time the standing opinion was that using $MR^3$ (or any other TSEP solver) on the Golub-Kahan matrix alone is fundamentally flawed. In this paper we refute that notion, at least with regard to $MR^3$. Indeed we provide a complete proof, including error bounds, showing that just a minor modification makes using $MR^3$ on the Golub-Kahan matrix a valid solution strategy for BSVD. This method is much simpler to implement and analyze than the coupling-based approach; in particular, all levels in the $MR^3$ representation tree (Figure 2.1) can be handled in a uniform way.

Before proceeding we want to mention that an alternative and highly competitive solution strategy for the SVD was only recently discovered by Drmač and Veselić [10, 11]. Their method first reduces a general matrix $A$ to non-singular triangular form via rank-revealing QR factorizations, and then an optimized version of Jacobi's iteration is applied to the triangular matrix, making heavy use of the structure to save on operations and memory accesses. Compared to methods involving bidiagonal reduction, this new approach can attain better accuracy for certain classes of matrices (e.g., if $A = \hat{A}D$ with a diagonal "scaling" matrix $D$, then the achievable precision for the tiny singular values is determined by the condition number $\kappa_2(\hat{A})$ instead of $\kappa_2(A)$, which may be considerably worse). Numerical experiments in [10, 11] also indicate that the new method tends to be somewhat faster than bidiagonal reduction followed by QR iteration on the bidiagonal matrix, but slightly slower than bidiagonal reduction and bidiagonal divide and conquer, in particular for larger matrices. As multi-step bidiagonalization (similar to [2]) and replacing divide and conquer with the $MR^3$ algorithm may further speed up the bidiagonalization-based methods, the increased accuracy currently seems to come with a penalty in performance.

2. The $MR^3$ algorithm for the tridiagonal symmetric eigenproblem. The present paper relies heavily on the $MR^3$ algorithm for TSEP and on its properties. A generic version of the algorithm has been presented in [35, 37], together with a proof that the eigensystems computed by $MR^3$ feature small residuals and sufficient orthogonality if five key requirements are fulfilled. In order to make the following exposition self-contained we briefly repeat some of the discussion on $MR^3$ from [37]; for details and proofs the reader is referred to that paper. Along the way we also introduce notation that will be used in the subsequent sections.

2.1. The algorithm. The "core" of the $MR^3$ method is summarized in Algorithm 2.1. In each pass of the main loop, the algorithm considers a symmetric tridiagonal matrix, which is represented by some data $M$, and tries to compute specified eigenpairs $(\lambda_i, q_i)$, $i \in I$. First, the eigenvalues of the matrix are determined to such precision that they can be classified as singletons (with sufficient relative distance to the other eigenvalues, e.g., agreement to at most three leading decimal digits if gaptol $\sim 10^{-3}$) and clusters. For a singleton $\lambda_i$, a variant of Rayleigh quotient iteration (RQI) and inverse iteration yields an accurate eigenpair. A cluster $\lambda_i \approx \dots \approx \lambda_{i+s}$ cannot be handled directly. Instead, for each cluster one chooses a shift $\tau \approx \lambda_i$ very close to (or even inside) the cluster and considers the matrix $T - \tau I$. The eigenvalues $\lambda_i - \tau, \dots, \lambda_{i+s} - \tau$ of that matrix will feature much larger relative distances than $\lambda_i, \dots, \lambda_{i+s}$ did, and therefore they may be singletons for $T - \tau I$, meaning that now eigenvectors can be computed in a reliable way. If some of these eigenvalues are still clustered, then the shifting is repeated. (To avoid special treatment, the original matrix $T$ is also considered to be shifted with $\bar{\tau} = 0$.) Proceeding this way amounts to traversing a so-called representation tree with the original matrix $T$ at the root, and children of a node standing for shifted matrices due to clusters; see Figure 2.1 for an example. The computation of eigenvectors corresponds to the leaves of the tree.
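To illustrate the classification (with our own made-up numbers, not taken from the original): for gaptol $= 10^{-3}$, approximated eigenvalues $\{0.9999,\ 1.0001,\ 1.5\}$ have relative distances $|1.0001 - 0.9999|/|1.0001| \approx 2 \cdot 10^{-4} < \mathrm{gaptol}$ and $|1.5 - 1.0001|/|1.5| \approx 0.33 \geq \mathrm{gaptol}$, so the first two eigenvalues form a cluster that triggers a shift, while $1.5$ is a singleton handled directly by RQI.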

[FIGURE 2.1 OMITTED]

2.2. Representations of tridiagonal symmetric matrices. The name $MR^3$ comes from the fact that the transition from a node to its child, $M - \tau =: M^+$, must not change the invariant subspace of a cluster--and at least some of its eigenvalues--by too much (see Requirement RRR in Section 2.5). In general, this robustness cannot be achieved if the tridiagonal matrices are represented by their $2n-1$ entries because those do not necessarily determine small eigenvalues to high relative precision. Therefore other representations are used, e.g., lower (upper) bidiagonal factorizations $T = LDL^*$ ($T = URU^*$, resp.) with

$D = \mathrm{diag}(d_1, \dots, d_n)$ diagonal,

$L = \mathrm{diag}(1, \dots, 1) + \mathrm{diag}_{-1}(l_1, \dots, l_{n-1})$ lower bidiagonal,

$R = \mathrm{diag}(r_1, \dots, r_n)$ diagonal, and

$U = \mathrm{diag}(1, \dots, 1) + \mathrm{diag}_{+1}(u_2, \dots, u_n)$ upper bidiagonal.

Note that we write $^*$ for the transpose of a matrix. The so-called twisted factorizations

(2.1) $T = N_k G_k N_k^*$

with twist index $k$ and twist element $\gamma_k = d_k + r_k - T(k,k)$ generalize the bidiagonal factorizations. Here $G_k = \mathrm{diag}(d_1, \dots, d_{k-1}, \gamma_k, r_{k+1}, \dots, r_n)$, and $N_k$ has unit diagonal, subdiagonal entries $l_1, \dots, l_{k-1}$, and superdiagonal entries $u_{k+1}, \dots, u_n$. Thus a twisted factorization is built by combining the upper part of an $LDL^*$ factorization with the lower part of a $URU^*$ factorization.
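To make the construction concrete, the following Python sketch (our own illustration, not part of the original paper) builds a twisted factorization from the top-down $LDL^*$ and bottom-up $URU^*$ recurrences and verifies $T = N_kG_kN_k^*$ numerically. It works in plain dense arithmetic and ignores breakdowns ($d_i = 0$) and all stability safeguards.

import numpy as np

def twisted_factorization(c, e, k):
    # Sketch: T = N_k G_k N_k^* for a symmetric tridiagonal T with
    # diagonal c[0..n-1] and offdiagonal e[0..n-2]; k is the 0-based
    # twist index. No safeguards against breakdown (d_i = 0).
    n = len(c)
    d = np.empty(n); l = np.empty(max(n - 1, 0))
    d[0] = c[0]                               # top-down LDL^*
    for i in range(n - 1):
        l[i] = e[i] / d[i]
        d[i + 1] = c[i + 1] - l[i] * e[i]
    r = np.empty(n); u = np.zeros(n)          # bottom-up URU^*
    r[n - 1] = c[n - 1]
    for i in range(n - 2, -1, -1):
        u[i + 1] = e[i] / r[i + 1]
        r[i] = c[i] - u[i + 1] * e[i]
    gamma = d[k] + r[k] - c[k]                # twist element
    G = np.diag(np.concatenate((d[:k], [gamma], r[k + 1:])))
    N = np.eye(n)
    for i in range(1, k + 1):                 # lower part from L
        N[i, i - 1] = l[i - 1]
    for i in range(k, n - 1):                 # upper part from U
        N[i, i + 1] = u[i + 1]
    return N, G

# quick check on a (safely diagonally dominant) random tridiagonal
rng = np.random.default_rng(0)
c, e = 2.0 + rng.random(6), rng.uniform(-0.5, 0.5, 5)
T = np.diag(c) + np.diag(e, 1) + np.diag(e, -1)
for k in range(6):
    N, G = twisted_factorization(c, e, k)
    assert np.allclose(N @ G @ N.T, T)

For $k = n$ the construction degenerates to the pure $LDL^*$ factorization, and for $k = 1$ to $URU^*$.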
Algorithm 2.1: $MR^3$ for TSEP: Compute selected eigenpairs of
a symmetric tridiagonal $T$.

Input: Symmetric tridiagonal $T \in \mathbb{R}^{n \times n}$,
  index set $I_0 \subseteq \{1, \dots, n\}$
Output: Eigenpairs $(\bar{\lambda}_i, \bar{q}_i)$, $i \in I_0$
Parameter: gaptol, the gap tolerance

1. Find a suitable representation $M_0$ for $T$, preferably definite,
possibly by shifting $T$.

2. $S := \{(M_0, I_0, \bar{\tau} = 0)\}$

3. while $S \neq \emptyset$ do

4. Remove one node $(M, I, \bar{\tau})$ from $S$

5. Approximate the eigenvalues $\lambda_i$, $i \in I$, of $M$ such
that they can be classified into singletons and clusters according
to gaptol; this gives a partition
$I = I_1 \cup \dots \cup I_m$.

6. for $r = 1$ to $m$ do

7.     if $I_r = \{i\}$ then // singleton

8.         Refine the eigenvalue approximation $\bar{\lambda}_i$
           and use it to compute $\bar{q}_i$. If necessary iterate
           until the residual of $\bar{q}_i$ becomes small enough,
           using a Rayleigh quotient iteration (RQI).

9.         Accept the eigenpair $(\bar{\lambda}_i + \bar{\tau},\ \bar{q}_i)$.

10.    else // cluster

11.        Refine the eigenvalue approximations at the borders of
           (and/or inside) the cluster if desired for more accurate
           selection of the shift.

12.        Choose a suitable shift $\tau$ near the cluster and compute
           a representation of $M^+ = M - \tau$.

13.        Add the new node $(M^+, I_r, \bar{\tau} + \tau)$
           to $S$.

14.    endif

15.  endfor

16. endwhile


Twisted factorizations are preferred because, in addition to yielding better relative sensitivity, they also allow computing highly accurate eigenvectors [6]. qd algorithms are used for shifting the factorizations, e.g., $LDL^* - \tau I =: L^+D^+(L^+)^*$, possibly converting between them as in $URU^* - \tau I =: L^+D^+(L^+)^*$.
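As an illustration of such a shift (our own sketch, not the library routine), the differential form of the stationary qd transform computes $L^+D^+(L^+)^* = LDL^* - \tau I$ in $O(n)$ without ever forming the tridiagonal entries:

import numpy as np

def dstqds(d, l, tau):
    # Differential stationary qd with shift (sketch, no safeguards):
    # given LDL^* via d[0..n-1], l[0..n-2], return d+, l+ with
    # L+ D+ (L+)^* = L D L^* - tau*I.
    n = len(d)
    dp = np.empty(n); lp = np.empty(n - 1)
    s = -tau
    for i in range(n - 1):
        dp[i] = d[i] + s
        lp[i] = d[i] * l[i] / dp[i]
        s = lp[i] * l[i] * s - tau
    dp[n - 1] = d[n - 1] + s
    return dp, lp

rng = np.random.default_rng(1)
d, l = rng.uniform(1.0, 2.0, 5), rng.uniform(-0.5, 0.5, 4)
L = np.eye(5) + np.diag(l, -1)
dp, lp = dstqds(d, l, tau=0.3)
Lp = np.eye(5) + np.diag(lp, -1)
assert np.allclose(Lp @ np.diag(dp) @ Lp.T, L @ np.diag(d) @ L.T - 0.3 * np.eye(5))

In exact arithmetic the relation holds exactly; the point of the differential formulation is that in floating point each output quantity is tied to the inputs in a mixed relatively stable way (cf. Requirement SHIFTREL below).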

The bidiagonal and twisted factorizations can rely on different data items being stored. To give an example, the matrix $T = LDL^*$ with unit lower bidiagonal $L$ and diagonal $D$ is defined by fixing the diagonal entries $d_1, \dots, d_n$ of $D$ and the subdiagonal entries $l_1, \dots, l_{n-1}$ of $L$. We might as well use the offdiagonal entries $T(1,2), \dots, T(n-1,n)$, together with $d_1, \dots, d_n$, to describe the tridiagonal matrix and the factorization, because the $l_i$ can be recovered from the relation $T(i,i+1) = l_i d_i$. The question of which data one should actually use to define a matrix leads to the concept of representations.

DEFINITION 2.1. A representation $M$ of a symmetric tridiagonal matrix $T \in \mathbb{R}^{n \times n}$ is a set of $m \leq 2n-1$ scalars, called the primary data, together with a mapping $f : \mathbb{R}^m \rightarrow \mathbb{R}^{2n-1}$ that generates the entries of $T$.

A general symmetric tridiagonal matrix $T$ has $m = 2n-1$ degrees of freedom; however, $m < 2n-1$ is possible if the entries of $T$ obey additional constraints (e.g., a zero main diagonal).

2.3. Perturbations and floating-point arithmetic. In the following we will often have to consider the effect of perturbations on the eigenvalues (or singular values) and vectors.

Suppose a representation $M$ of the matrix $T$ is given by data $\delta_i$. Then an elementwise relative perturbation (erp) of $M$ to $\widetilde{M}$ is defined by perturbing each $\delta_i$ to $\widetilde{\delta}_i = \delta_i(1 + \xi_i)$ with "small" $|\xi_i| \leq \bar{\xi}$. To express this more compactly we will just write $\widetilde{M} = \mathrm{erp}(M, \bar{\xi})$, $\delta_i \leadsto \widetilde{\delta}_i$, and although it must always be kept in mind that the perturbation applies to the data of the representation and not to the entries of $T$, we will sometimes write $\mathrm{erp}(T)$ for brevity.

A (partial) relatively robust representation (RRR) of a matrix $T$ is one where small erps, bounded by some constant $\bar{\xi}$, in the data of the representation will cause only relative changes proportional to $\bar{\xi}$ in (some of) the eigenvalues and eigenvectors.

The need to consider perturbations comes from the rounding induced by computing in floating-point arithmetic. Throughout the paper we assume the standard model for floating-point arithmetic, namely that, barring underflow or overflow, the exact and computed results $x$ and $z$ of an arithmetic operation ($+, -, *, /$, and $\sqrt{\ }$) applied to floating-point numbers can be related as

$x = z(1 + \gamma) = z/(1 + \delta), \qquad |\gamma|, |\delta| \leq \epsilon_\diamond,$

with machine epsilon $\epsilon_\diamond$. For IEEE double precision with 53-bit significands and eleven-bit exponents we have $\epsilon_\diamond = 2^{-53} \approx 1.1 \times 10^{-16}$. For more information on binary floating-point arithmetic and the IEEE standard we refer the reader to [17, 23, 24, 26].
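For reference (our own remark): NumPy reports "machine epsilon" as the spacing between 1.0 and the next larger double, which is $2\epsilon_\diamond$ in the notation used here.

import numpy as np
print(2.0 ** -53)                 # 1.1102230246251565e-16  (eps_diamond)
print(np.finfo(np.float64).eps)   # 2.220446049250313e-16   (= 2 * eps_diamond)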

2.4. Eigenvalues and invariant subspaces. The eigenvalues of a symmetric matrix $A$ are real, and therefore they can be ordered ascendingly, $\lambda_1[A] \leq \dots \leq \lambda_n[A]$, where the matrix will only be indicated if it is not clear from the context. The associated (orthonormal) eigenvectors are denoted by $q_i[A]$, and the invariant subspace spanned by a subset of the eigenvectors is $Q_I[A] := \mathrm{span}\{q_i[A] : i \in I\}$.

The sensitivity of the eigenvectors depends on the eigenvalue distribution--on the overall spread, measured by $\|A\|$ or the spectral diameter $\mathrm{spdiam}[A] = \lambda_n - \lambda_1$, as well as on the distance of an eigenvalue $\lambda_i$ from the remainder of the spectrum. In a slightly more general form, the latter aspect is quantified by the notion of gaps, either in an absolute or a relative sense,

$\mathrm{gap}_A(I; \mu) := \min\big\{|\lambda_j[A] - \mu| : j \notin I\big\}, \qquad \mathrm{relgap}_A(I; \mu) := \mathrm{gap}_A(I; \mu)/|\mu|;$

see [37, Sect. 1]. Note that $\mu$ may, but need not, be an eigenvalue.

The following Gap Theorem [37, Thm. 2.1] is applied mostly in situations where $I$ corresponds to a singleton ($|I| = 1$) or to a cluster of very close eigenvalues. The theorem states that if we have a "suspected eigenpair" $(\mu, x)$ with small residual, then $x$ is indeed close to an eigenvector (or to the invariant subspace associated with the cluster) provided that $\mu$ is sufficiently far away from the remaining eigenvalues. For a formal definition of the (acute) angle see [37, Sect. 1].

THEOREM 2.2 (Gap Theorem for an invariant subspace). For every symmetric matrix $A \in \mathbb{R}^{n \times n}$, unit vector $x$, scalar $\mu$, and index set $I$, such that $\mathrm{gap}_A(I; \mu) \neq 0$,

$\sin \angle\big(x, Q_I[A]\big) \leq \dfrac{\|Ax - x\mu\|}{\mathrm{gap}_A(I; \mu)}.$

For singletons, the Rayleigh quotient provides an even sharper bound on the angle to an eigenvector.

THEOREM 2.3 (Gap Theorem with Rayleigh's quotient, [30, Thm. 11.7.1]). For symmetric $A \in \mathbb{R}^{n \times n}$ and unit vector $x$ with $\theta = \rho_A(x) := x^*Ax$, let $\lambda = \lambda_i[A]$ be an eigenvalue of $A$ such that no other eigenvalue lies between (or equals) $\lambda$ and $\theta$, and $q = q_i[A]$ the corresponding normalized eigenvector. Then we will have $\mathrm{gap}_A(\{i\}; \theta) > 0$ and

$\sin \angle(x, q) \leq \dfrac{\|Ax - x\theta\|}{\mathrm{gap}_A(\{i\}; \theta)}.$

2.5. Correctness of the $MR^3$ algorithm and requirements for proving it. In the analysis of the $MR^3$ algorithm in [37] the following five requirements have been identified, which together guarantee the correctness of Algorithm 2.1.

REQUIREMENT RRR (relatively robust representations). There is a constant $C_{\mathrm{vecs}}$ such that for any perturbation $\widetilde{M} = \mathrm{erp}(M, \alpha)$ at a node $(M, I)$, the effect on the eigenvectors can be controlled as

$\sin \angle\big(Q_J[M], Q_J[\widetilde{M}]\big) \leq C_{\mathrm{vecs}}\, n\alpha \big/ \mathrm{relgap}_M(J)$

for all $J \in \{I, I_1, \dots, I_m\}$ with $|J| < n$.

This requirement also implies that singleton eigenvalues and the boundary eigenvalues of clusters cannot change by more than $O(C_{\mathrm{vecs}}\, n\alpha\, |\lambda|)$ and therefore are relatively robust.

REQUIREMENT ELG (conditional element growth). There is a constant $C_{\mathrm{elg}}$ such that for any perturbation $\widetilde{M} = \mathrm{erp}(M, \alpha)$ at a node $(M, I)$, the incurred element growth is bounded by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This requirement concerns the absolute changes to matrix entries that result from relative changes to the representation data. For decomposition-based representations this change is called element growth (elg). The requirement is fulfilled automatically if the matrix is represented by its entries directly. The two conditions convey that even large element growth is permissible (first condition), but only in those entries where the local eigenvectors of interest have tiny entries (second condition).

REQUIREMENT RELGAPS (relative gaps). For each node $(M, I)$, the classification of $I$ into child index sets in step 5 of Algorithm 2.1 is done such that for $r = 1, \dots, m$, $\mathrm{relgap}_M(I_r) \geq \mathrm{gaptol}$ (if $|I_r| < n$).

The parameter gaptol is used to decide which eigenvalues are to be considered singletons and which ones are clustered. Typical values are gaptol $\sim 0.001 \dots 0.01$. Besides step 5, where fulfillment of the requirement should not be an issue if the eigenvalues are approximated accurately enough and the classification is done sensibly, this requirement also touches on the outer relative gaps of the whole local subset at the node: the requirement cannot be fulfilled if $\mathrm{relgap}_M(I) < \mathrm{gaptol}$. This fact has to be kept in mind when the node is created, in particular during the evaluation of shifts for a new child in step 12.

REQUIREMENT SHIFTREL (shift relation). There exist constants $\alpha_\downarrow, \alpha_\uparrow$ such that for every node with matrix $H$ that was computed using shift $\tau$ as child of $M$, there are perturbations

$\widetilde{M} = \mathrm{erp}(M, \alpha_\downarrow) \quad \text{and} \quad \widetilde{H} = \mathrm{erp}(H, \alpha_\uparrow)$

with which the exact shift relation $\widetilde{M} - \tau = \widetilde{H}$ is attained.

This requirement connects the nodes in the tree. It states that the computations of the shifted representations have to be done in a mixed relatively stable way. This is for example fulfilled when using twisted factorizations combined with qd transformations as described in [8]. Improved variants of these techniques and a completely new approach based on block decompositions are presented in [35, 36, 38]. Note that the perturbation $\widetilde{M} = \mathrm{erp}(M, \alpha_\downarrow)$ at the parent will in general be different for each of its child nodes, but each child node has just one perturbation governed by $\alpha_\uparrow$ to establish the link to its parent node.

REQUIREMENT GETVEC (computation of eigenvectors). There exist constants $\alpha_\ddagger$, $\beta_\ddagger$, and $R_{\mathrm{gv}}$ with the following property: Let $(\bar{\lambda}^{\mathrm{leaf}}, \bar{q})$ with $\bar{q} = \bar{q}_i$ be computed at node $(M, I)$, where $\bar{\lambda}^{\mathrm{leaf}}$ is the final local eigenvalue approximation. Then we can find elementwise perturbations to the matrix and the vector,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for which the residual norm is bounded as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This final requirement captures that the vectors computed in step 8 must have residual norms that are small, even when compared to the eigenvalue. The keys to fulfilling this requirement are qd-type transformations to compute twisted factorizations $M - \bar{\lambda} =: N_kG_kN_k^*$ with mixed relative stability and then solving one of the systems $N_kG_kN_k^*\bar{q} = \gamma_k e_k$ for the eigenvector [8, 12, 31].

In practice, we expect the constants $C_{\mathrm{vecs}}$ and $C_{\mathrm{elg}}$ to be of moderate size ($\sim 10$); $\alpha_\downarrow$, $\alpha_\uparrow$, and $\alpha_\ddagger$ should be $O(\epsilon_\diamond)$, whereas $\beta_\ddagger = O(n\epsilon_\diamond)$, and $R_{\mathrm{gv}}$ may become as large as $O(1/\mathrm{gaptol})$. Thus the following theorems provide bounds $O(n\epsilon_\diamond\|M_0\|/\mathrm{gaptol})$ for the residuals and $O(n\epsilon_\diamond/\mathrm{gaptol})$ for the orthogonality.

THEOREM 2.4 (Residual norms for $MR^3$ [37, Thm. 3.1]). Let the representation tree traversed by Algorithm 2.1 satisfy the requirements ELG, SHIFTREL, and GETVEC. For given index $j \in I_0$, let $d = \mathrm{depth}(j)$ be the depth of the node where $\bar{q} = \bar{q}_j$ was computed (cf. Figure 2.1) and $M_0, M_1, \dots, M_d$ be the representations along the path from the root $(M_0, I_0)$ to that node, with shifts $\tau_i$ linking $M_i$ and $M_{i+1}$, respectively. Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The following theorem confirms the orthogonality of the computed eigenvectors and bounds their angles to the local invariant subspaces. It combines Lemma 3.4 and Theorem 3.5 from [37].

THEOREM 2.5. Let the representation tree traversed by Algorithm 2.1 fulfill the requirements RRR, RELGAPS, SHIFTREL, and GETVEC. Then for each node $(M, I)$ in the tree with child index set $J \subseteq I$, the computed vectors $\bar{q}_j$, $j \in J$, will obey

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where $\kappa := R_{\mathrm{gv}}\, n\epsilon_\diamond + \beta_\ddagger$. Moreover, any two computed vectors $\bar{q}_i$ and $\bar{q}_j$, $i \neq j$, will obey

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where $d_{\max} := \max\{\mathrm{depth}(i) : i \in I_0\}$ denotes the maximum depth of a node in the tree.

3. The singular value decomposition of bidiagonal matrices. In this section we briefly review the problem BSVD and its close connection to the eigenvalue problem for tridiagonal symmetric matrices.

3.1. The problem. Throughout this paper we consider $B \in \mathbb{R}^{n \times n}$, an upper bidiagonal matrix with diagonal entries $a_i$ and offdiagonal elements $b_i$, that is,

$B = \mathrm{diag}(a_1, \dots, a_n) + \mathrm{diag}_{+1}(b_1, \dots, b_{n-1}).$

The goal is to compute the full singular value decomposition

(3.1) $B = U\Sigma V^*$, with orthogonal $U, V \in \mathbb{R}^{n \times n}$ and $\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_n)$, $\sigma_1 \leq \dots \leq \sigma_n$.

The columns $u_i = U(:,i)$ and $v_i = V(:,i)$ are called left and right singular vectors, respectively, and the $\sigma_i$ are the singular values. Taken together, $(\sigma_i, u_i, v_i)$ form a singular triplet of $B$. Note that we order the singular values ascendingly in order to simplify the transition between BSVD and TSEP.

For any algorithm solving BSVD, the computed singular triplets $(\bar{\sigma}_i, \bar{u}_i, \bar{v}_i)$ should be numerically orthogonal in the sense

(3.2) $|\bar{U}^*\bar{U} - I| = O(n\epsilon_\diamond), \qquad |\bar{V}^*\bar{V} - I| = O(n\epsilon_\diamond),$

where $|\cdot|$ is to be understood componentwise. We also desire small residual norms,

(3.3) $\|B\bar{v}_i - \bar{u}_i\bar{\sigma}_i\| = O(\|B\|\, n\epsilon_\diamond), \qquad \|B^*\bar{u}_i - \bar{v}_i\bar{\sigma}_i\| = O(\|B\|\, n\epsilon_\diamond).$

In the literature the latter is sometimes stated as the singular vector pairs being "(well) coupled."

3.2. Singular values to high relative accuracy. In [4] Demmel and Kahan established that every bidiagonal matrix (represented by its entries) determines its singular values to high relative accuracy.

The current state of the art for computing singular values is the dqds algorithm by Fernando and Parlett [14, 32], which builds upon [4] as well as Rutishauser's original qd algorithm [34]. An excellent implementation of dqds is included in LAPACK in the form of routine xLASQ1. Alternatively, bisection could be used, but this is normally much slower--in our experience it becomes worthwhile to use bisection instead of dqds only if less than ten percent of the singular values are desired (dqds can only be used to compute all singular values).

The condition (3.3) alone merely conveys that each computed $\bar{\sigma}_i$ must lie within distance $O(\|B\|\, n\epsilon_\diamond)$ of some exact singular value of $B$. A careful but elementary argument based on the Gap Theorem 2.2 (applied to the Golub-Kahan matrix, see below) shows that (3.2) and (3.3) combined actually provide for absolute accuracy in the singular values, meaning each computed $\bar{\sigma}_i$ lies within distance $O(\|B\|\, n\epsilon_\diamond)$ of the exact $\sigma_i$. To achieve relative accuracy, a straightforward modification is just to recompute the singular values afterwards using, for example, dqds. It is clear that doing so cannot spoil (3.3), at least as long as $\bar{\sigma}_i$ was computed with absolute accuracy. The recomputation need not even be overhead; for $MR^3$-type algorithms like those we study in this paper one needs initial approximations to the singular values anyway, the more accurate the better. So there is actually a gain from computing them up front to full precision.

3.3. Associated tridiagonal problems. There are two standard approaches to reduce the problem BSVD to TSEP, involving three different symmetric tridiagonal matrices.

3.3.1. The normal equations. From (3.1) we can see the eigendecompositions of the symmetric tridiagonal matrices $BB^*$ and $B^*B$ to be

$BB^* = U\Sigma^2U^*, \qquad B^*B = V\Sigma^2V^*.$

These two are called normal equations, analogously to the linear least squares problem. The individual entries of $BB^*$ and $B^*B$ can be expressed using those of $B$:

$(BB^*)(i,i) = a_i^2 + b_i^2, \quad (BB^*)(i,i+1) = a_{i+1}b_i, \qquad (B^*B)(i,i) = a_i^2 + b_{i-1}^2, \quad (B^*B)(i,i+1) = a_ib_i,$

with the conventions $b_0 = b_n = 0$.

Arguably the most straightforward approach to tackle the BSVD would be to just employ the $MR^3$ algorithm for TSEP (Algorithm 2.1) to compute eigendecompositions of $BB^*$ and $B^*B$ separately. This gives both left and right singular vectors as well as the singular values (twice). A slight variation on this theme would compute just the vectors on one side, for example from $BB^* = U\Sigma^2U^*$, and then get the rest through solving $Bv = u\sigma$. As $BB^*$ and $B^*B$ are already given in factored form, with the bidiagonal factor $B$, we would naturally take these factorizations directly as positive definite root representations, avoiding the mistake of forming either matrix product explicitly.
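The entries given above translate directly into code; the following sketch (our own, for illustration) assembles the tridiagonal data of $BB^*$ and $B^*B$ straight from $a$ and $b$ and checks them against the explicit products:

import numpy as np

def normal_eq_tridiagonals(a, b):
    # Diagonals and offdiagonals of BB^* and B^*B directly from the
    # bidiagonal entries, without forming any matrix product.
    a, b = np.asarray(a, float), np.asarray(b, float)
    bb_diag = a**2 + np.r_[b, 0.0]**2    # (BB^*)(i,i)   = a_i^2 + b_i^2
    bb_off  = a[1:] * b                  # (BB^*)(i,i+1) = a_{i+1} b_i
    btb_diag = a**2 + np.r_[0.0, b]**2   # (B^*B)(i,i)   = a_i^2 + b_{i-1}^2
    btb_off  = a[:-1] * b                # (B^*B)(i,i+1) = a_i b_i
    return (bb_diag, bb_off), (btb_diag, btb_off)

a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.25])
B = np.diag(a) + np.diag(b, 1)
(bbd, bbo), (btd, bto) = normal_eq_tridiagonals(a, b)
assert np.allclose(np.diag(B @ B.T), bbd) and np.allclose(np.diag(B @ B.T, 1), bbo)
assert np.allclose(np.diag(B.T @ B), btd) and np.allclose(np.diag(B.T @ B, 1), bto)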

In short, this black box approach is a bad idea. While the matrices $\bar{U}$ and $\bar{V}$ computed via the two TSEPs are orthogonal almost to working precision, the residuals $\|B\bar{v}_i - \bar{u}_i\bar{\sigma}_i\|$ and $\|B^*\bar{u}_i - \bar{v}_i\bar{\sigma}_i\|$ may be $O(\sigma_i)$ for clustered singular values, which is unacceptable for large $\sigma_i$. Roughly speaking, this comes from computing $\bar{U}$ and $\bar{V}$ independently--so there is no guarantee that the corresponding $\bar{u}_i$ and $\bar{v}_i$ "fit together." Note that this problem is not tied to taking $MR^3$ as eigensolver but also occurs if QR or divide and conquer are used to solve the two TSEPs independently.

With $MR^3$ it is, however, possible to "couple" the solution of the two TSEPs in a way that allows controlling the residuals. This is done by running $MR^3$ on only one of the matrices $BB^*$ or $B^*B$, say $BB^*$, and "simulating" the action of $MR^3$ on $B^*B$ with the same sequence of shifts, that is, with an identical representation tree; cf. Figure 2.1. The key to this strategy is the observation that the quantities that would be computed in $MR^3$ on $B^*B$ can also be obtained from the respective quantities in the $BB^*$-run via so-called coupling relations. For several reasons the Golub-Kahan matrix (see the following discussion) is also involved in the couplings. See [19, 20, 21, 39] for the development of the coupling approach and [35] for a substantially revised version.

In our experiments, however, an approach based entirely on the Golub-Kahan matrix turned out to be superior, and therefore we will not pursue the normal equations and the coupling approach further in the current paper.

3.3.2. The Golub-Kahan matrix. Given an upper bidiagonal matrix $B$ we obtain a symmetric eigenproblem of twice the size by forming the Golub-Kahan (GK) matrix or Golub-Kahan form of $B$ [13],

$T_{GK}(B) := P_{ps} \begin{bmatrix} 0 & B^* \\ B & 0 \end{bmatrix} P_{ps}^*,$

where $P_{ps}$ is the perfect shuffle permutation on $\mathbb{R}^{2n}$ that maps any $x \in \mathbb{R}^{2n}$ to

$P_{ps}\, x = (x_1, x_{n+1}, x_2, x_{n+2}, \dots, x_n, x_{2n})^*.$

It is easy to verify that $T_{GK}(B)$ is a symmetric tridiagonal matrix with a zero diagonal and the entries of $B$ interleaved on the offdiagonals,

$T_{GK}(B) = \mathrm{diag}_{\pm 1}(a_1, b_1, a_2, b_2, \dots, a_{n-1}, b_{n-1}, a_n),$

and that its eigenpairs are related to the singular triplets of $B$ as follows: $(\sigma, u, v)$ is a singular triplet of $B$ with $\|u\| = \|v\| = 1$ if and only if $(\pm\sigma, q_\pm)$ are eigenpairs of $T_{GK}(B)$, where $\|q_\pm\| = 1$ and $q_\pm = \frac{1}{\sqrt{2}}\, P_{ps} \begin{bmatrix} v \\ \pm u \end{bmatrix}$.

Thus $v$ makes up the odd-numbered entries in $q$ and $u$ the even-numbered ones:

(3.4) $v(k) = \sqrt{2}\, q(2k-1), \qquad u(k) = \sqrt{2}\, q(2k), \qquad k = 1, \dots, n.$
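To illustrate the interleaving and the extraction (3.4), here is a small self-contained sketch (our own; it uses SciPy's tridiagonal eigensolver as a stand-in TSEP black box, which is exactly the black box usage examined critically in Section 4):

import numpy as np
from scipy.linalg import eigh_tridiagonal

def tgk_offdiag(a, b):
    # Offdiagonal of T_GK(B): entries of B interleaved as
    # (a_1, b_1, a_2, b_2, ..., a_n); the diagonal is zero.
    e = np.empty(2 * len(a) - 1)
    e[0::2], e[1::2] = a, b
    return e

a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, 0.25])
n = len(a)
# The upper half of the spectrum of T_GK holds the singular values.
w, Q = eigh_tridiagonal(np.zeros(2 * n), tgk_offdiag(a, b),
                        select='i', select_range=(n, 2 * n - 1))
V = np.sqrt(2.0) * Q[0::2, :]   # odd-numbered entries  -> right vectors
U = np.sqrt(2.0) * Q[1::2, :]   # even-numbered entries -> left vectors
B = np.diag(a) + np.diag(b, 1)
assert np.allclose(np.sort(w), np.sort(np.linalg.svd(B, compute_uv=False)))
assert np.allclose(B @ V, U * w)   # B v_i = sigma_i u_i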

It will frequently be necessary to relate rotations of GK eigenvectors $q$ to rotations of their $u$ and $v$ components. This is captured in the following lemma. The formulation has been kept fairly general; in particular the permutation $P_{ps}$ is left out, but the claim does extend naturally if it is reintroduced.

LEMMA 3.1. Let $q, q'$ be non-orthogonal unit vectors that admit a conforming partition

$q = \begin{bmatrix} u \\ v \end{bmatrix}, \qquad q' = \begin{bmatrix} u' \\ v' \end{bmatrix}.$

Let $\psi_u := \angle(u, u')$, $\psi_v := \angle(v, v')$, and $\psi := \angle(q, q')$. Then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Proof. Define $r$ such that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

The resulting situation is depicted in Figure 3.1. Consequently,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Now $u'\cos\psi = u - r_u$ implies $(u' - u)\cos\psi = (1 - \cos\psi)u - r_u$. Use the reverse triangle inequality and $\|u\| < 1$ for

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and divide by $\cos\psi \neq 0$ to obtain the desired bound for $\big|\,\|u'\| - \|u\|\,\big|$. The claims pertaining to the $v$ components are shown analogously.

Application to a given approximation $q'$ for an exact GK eigenvector $q$ merely requires exploiting $\|u\| = \|v\| = 1/\sqrt{2}$. In particular, the second claim of Lemma 3.1 will then enable us to control how much the norms of $u'$ and $v'$ can deviate from $1/\sqrt{2}$, namely by essentially no more than $\sin\psi + O(\sin^2\psi)$, provided $\psi$ is small, which will be the case in later applications. (For large $\psi$, the bound in the lemma may be larger than the obvious bound $\max\{\|u\|, \|u'\|\}$, given that all these vectors have length at most 1.)

[FIGURE 3.1 OMITTED]

3.4. Preprocessing. Before actually solving the BSVD problem, the given input matrix $B$ should be preprocessed in several respects. In contrast to TSEP, where it suffices to deal with the offdiagonal elements, now all entries of $B$ appear on the offdiagonals of $T_{GK}(B)$, which makes preprocessing a bit more difficult.

If the input matrix is lower bidiagonal, work with $B^*$ instead and swap the roles of $U$ and $V$. Multiplication on both sides by suitable diagonal signature matrices makes all entries nonnegative, and we can scale to get the largest elements into proper range. Then, in order to avoid several numerical problems later on, it is highly advisable to get rid of tiny entries by setting them to zero and splitting the problem. To summarize, we should arrive at

(3.5) $n\epsilon_\diamond \|B\| < \min_i\{a_i, b_i\}.$

However, splitting a bidiagonal matrix to attain (3.5) by setting all violating entries to zero is not straightforward. Two issues must be addressed.

If an offdiagonal element $b_i$ is zero, $B$ is reducible and can be partitioned into two smaller bidiagonal problems. If a diagonal element $a_i$ is zero, then $B$ is singular. An elegant way to "deflate" one zero singular value is to apply one sweep of the implicit zero-shift QR method, which will yield a matrix $B'$ with $b'_{i-1} = b'_{n-1} = a'_n = 0$; cf. [4, p. 21]. Thus the zero singular value has been revealed and can now be removed by splitting into three upper bidiagonal parts $B_{1:i-1}$, $B_{i:n-1}$, and $B_{n,n}$, the last of which is trivial. An additional benefit of the QR sweep is a possible preconditioning effect for the problem [19], but of course we will also have to rotate the computed vectors afterwards.

The second obstacle is that using (3.5) as the criterion for setting entries to zero will impede computing the singular values to high relative accuracy with respect to the input matrix. There are splitting criteria which retain relative accuracy, for instance those employed within the zero-shift QR algorithm [4, p. 18] and the slightly stronger ones by Li [28, 32]. However, all these criteria allow for less splitting than (3.5).

To get the best of both, that is, extensive splitting with all its benefits as well as relatively accurate singular values, we propose a 2-phase splitting as follows (a code sketch of the aggressive second phase is given below):

1) Split the matrix as much as possible without spoiling relative accuracy. This results in a partition of $B$ into blocks $B^{(1)}_{\mathrm{rs}}, \dots, B^{(N)}_{\mathrm{rs}}$, which we call the relative split of $B$.

2) Split each block $B^{(i)}_{\mathrm{rs}}$ further, aggressively, into blocks $B^{(i,j)}_{\mathrm{as}}$ to achieve (3.5). We call the collection of subblocks $B^{(i,j)}_{\mathrm{as}}$ the absolute split of $B$.

3) Solve BSVD for each block in the absolute split independently.

4) Use bisection to refine the computed singular values of each block $B^{(i,j)}_{\mathrm{as}}$ to high relative accuracy with respect to the parent block $B^{(i)}_{\mathrm{rs}}$ in the relative split.

Since the singular values of the blocks in the absolute split retain absolute accuracy with respect to $B$, the requirements (3.2) and (3.3) will still be upheld. In fact, if dqds is used to precompute the singular values (cf. Section 3.2) one can even skip steps 1) and 4), since the singular values that are computed for the blocks of the absolute split are discarded anyway. The sole purpose of the separate relative split is to speed up the refinement in step 4).
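The following sketch (our own simplification) performs only the aggressive phase 2 on a single block: it determines where (3.5) is violated by an offdiagonal entry and returns the index ranges of the resulting subblocks. Zero or tiny diagonal entries would additionally require the deflation sweep described above, which is omitted here.

import numpy as np

def absolute_split(a, b, eps=2.0 ** -53):
    # Aggressive splitting (sketch): cut B after row i whenever the
    # offdiagonal b[i] falls below n*eps*||B||; returns half-open
    # index ranges of the diagonal blocks. ||B|| is estimated cheaply
    # by the largest entry (a lower bound of the true norm).
    n = len(a)
    normB = max(np.abs(a).max(), np.abs(b).max()) if n > 1 else abs(a[0])
    tol = n * eps * normB
    blocks, start = [], 0
    for i in range(n - 1):
        if abs(b[i]) <= tol:
            blocks.append((start, i + 1))
            start = i + 1
    blocks.append((start, n))
    return blocks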

We want to stress that we propose the 2-phase splitting also when only a subset of singular triplets is desired. Then an additional obstacle is to get a consistent mapping of triplet indices between the blocks. This can be done efficiently, but it is not entirely trivial.

4. $MR^3$ and the Golub-Kahan matrix. In this section we investigate the approach of using $MR^3$ on the Golub-Kahan matrix to solve the problem BSVD.

A black box approach would employ $MR^3$ "as is," without modifications to its internals, to compute eigenpairs of $T_{GK}(B)$ and then extract the singular vectors via (3.4). Here the ability of $MR^3$ to compute partial spectra is helpful, as we need only concern ourselves with one half of the spectrum of $T_{GK}(B)$. Note that using $MR^3$ this way would also allow computing only a subset of singular triplets at reduced cost; current solution methods for BSVD like divide and conquer or QR do not provide this feature.

The standing opinion for several years has been that there are fundamental problems involved which cannot be overcome, in particular concerning the orthogonality of the extracted left and right singular vectors. The main objective of this section is to refute that notion.

We start our exposition with a numerical experiment indicating that using $MR^3$ as a pure black box method on the Golub-Kahan matrix is indeed not a sound idea.

EXAMPLE 4.1. We used LAPACK's test matrix generator DLATMS to construct a bidiagonal matrix $B \in \mathbb{R}^{20 \times 20}$ with the following singular values, ranging between $0.9 \cdot 10^{-8}$ and $110$:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Then we formed the symmetric tridiagonal matrix $T_{GK}(B) \in \mathbb{R}^{40 \times 40}$ explicitly. The $MR^3$ implementation DSTEMR from LAPACK 3.2.1 was called to give us the upper 20 eigenpairs $(\bar{\sigma}_i, \bar{q}_i)$ of $T_{GK}(B)$. The matrix is well within numerical range, so DSTEMR neither splits nor scales the tridiagonal problem. The singular vectors were then extracted according to (3.4), that is,

$\bar{v}_i(k) = \sqrt{2}\, \bar{q}_i(2k-1), \qquad \bar{u}_i(k) = \sqrt{2}\, \bar{q}_i(2k).$

The results are shown in Figure 4.1. The left plot clearly shows that DSTEMR does its job of solving the eigenproblem posed by $T_{GK}(B)$. But the right plot conveys just as clearly that the extracted singular vectors are far from orthogonal. In particular, the small singular values are causing trouble. Furthermore, the $u$ and $v$ components have somehow lost their property of having equal norm. However, their norms are still close enough to one that normalizing them explicitly would not improve the orthogonality levels significantly.

This experiment is not special--similar behavior can be observed consistently for other test cases with small singular values. The explanation is simple: $MR^3$ neither knows nor cares what a Golub-Kahan matrix is. It will start just as always, by first choosing a shift outside the spectrum, say $\tau \lesssim -\sigma_n$, and computing $T_{GK}(B) - \tau = L_0D_0L_0^*$ as positive definite root representation. From there it will then deploy further shifts into the spectrum of $L_0D_0L_0^*$ to isolate the requested eigenpairs.

What happens is that the first shift to the outside smears all small singular values into one cluster, as shown in Figure 4.2. Suppose, for instance, that $\|B\| \geq 1$ and that we are working with the standard gaptol $= 0.001$. We can even assume the initial shift was done exactly; so let $\lambda^{(0)}_{\pm i} = \sigma_n \pm \sigma_i$ be the eigenvalues of $L_0D_0L_0^*$ (taking $\tau = -\sigma_n$). Then for all indices $i$ with $\sigma_i \lesssim 0.0005$ the

[FIGURE 4.1 OMITTED]

[FIGURE 4.2 OMITTED]

corresponding $\lambda^{(0)}_{\pm i}$ will belong to the same cluster of $L_0D_0L_0^*$, since their relative distance is

$\dfrac{\lambda^{(0)}_{+i} - \lambda^{(0)}_{-i}}{\lambda^{(0)}_{+i}} = \dfrac{2\sigma_i}{\sigma_n + \sigma_i} \leq \dfrac{2 \cdot 0.0005}{1} = \mathrm{gaptol}.$

Therefore, for such a singular triplet $(\sigma_i, u_i, v_i)$ of $B$, both of $q_{\pm i} = \frac{1}{\sqrt{2}}P_{ps}\begin{bmatrix} v_i \\ \pm u_i \end{bmatrix}$ will be eigenvectors associated with that cluster of $T_{GK}(B)$. Hence, further (inexact) shifts based on this configuration cannot guarantee to separate them again cleanly. Consequently, using $MR^3$ as a black box on the Golub-Kahan matrix in this fashion could in principle even produce eigenvectors $q$ with identical $u$ or $v$ components.

This problem is easy to overcome. After all, we know that the entries of $T_{GK}(B)$ form an RRR, so the initial outside shift to find a positive definite root representation is completely unnecessary--we can just take $M_0 := T_{GK}(B)$ directly as the root. For shifting, that is, for computing a child representation $M^+ = T_{GK}(B) - \mu$ on the first level, a special routine exploiting the zero diagonal should be employed; a sketch is given after Algorithm 4.1. If $M^+$ is to be a twisted factorization this is much easier to do than standard dtwqds; see [13, 25] and our remarks in [38, Sect. 8.3]. With this setting, small singular values can be handled by a (positive) shift in one step, without danger of spoiling them by unwanted contributions from the negative counterparts. This solution method is sketched in Algorithm 4.1. Note that we now have heterogeneous representation types in the tree, as the root $T_{GK}(B)$ is represented by its entries. In any case, our general setup of $MR^3$ and its proof in [35, 37] can handle this situation.
Algorithm 4.1: $MR^3$ on the Golub-Kahan matrix. Compute specified
singular triplets of bidiagonal $B$ using the $MR^3$ algorithm on
$T_{GK}(B)$.

Input: Upper bidiagonal $B \in \mathbb{R}^{n \times n}$, index set
  $I_0 \subseteq \{1, \dots, n\}$

Output: Singular triplets $(\bar{\sigma}_i, \bar{u}_i, \bar{v}_i)$, $i \in I_0$

1. Execute the $MR^3$ algorithm for TSEP (Algorithm 2.1), but
   take $M_0 := T_{GK}(B)$ as root representation in step 1,
   using the entries of $B$ directly. This gives eigenpairs
   $(\bar{\sigma}_i, \bar{q}_i)$, $i \in I_0$.

2. Extract the singular vectors via
   $\bar{v}_i(k) = \sqrt{2}\, \bar{q}_i(2k-1)$, $\bar{u}_i(k) = \sqrt{2}\, \bar{q}_i(2k)$, $k = 1, \dots, n$.
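For illustration, here is a plain recurrence for the first-level shift (our own sketch; a production version would use a qd-type transform with mixed relative stability, cf. [13, 25]). Because the diagonal of $T_{GK}(B)$ is zero, the $LDL^*$ data of $T_{GK}(B) - \mu I$ follow directly from the interleaved entries of $B$:

import numpy as np

def ldl_of_shifted_gk(a, b, mu):
    # Sketch: T_GK(B) - mu*I = L D L^*, exploiting the zero diagonal.
    # Offdiagonals of T_GK(B) are (a_1, b_1, a_2, ..., a_n) interleaved.
    e = np.empty(2 * len(a) - 1)
    e[0::2], e[1::2] = a, b
    m = len(e) + 1                    # dimension 2n
    d = np.empty(m); l = np.empty(m - 1)
    d[0] = -mu
    for i in range(m - 1):
        l[i] = e[i] / d[i]
        d[i + 1] = -mu - l[i] * e[i]  # diagonal of T_GK is zero
    return d, l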


One can argue that the approach is still flawed on a fundamental level. Grosser gives an example in [19] which we want to repeat at this point. In fact his argument can be leveled against using any TSEP solver on the Golub-Kahan matrix for BSVD.

EXAMPLE 4.2 (cf. Beispiel 1.33 in [19]). Assume the exact GK eigenvectors

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

form (part of) the basis for a cluster. The computed vectors will generally not be exact, but might for instance be $G_{\mathrm{rot}}P_{ps}^*[q_i \,|\, q_j]$, where $G_{\mathrm{rot}}$ is a rotation, with cosine $c$ and sine $s$, in the 2-3 plane. We end up with computed singular vectors

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

that have orthogonality levels [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

However, this rotation does take the computed vectors out of the invariant subspace spanned by $q_i$ and $q_j$ (cf. Lemma 4.4 below), so if $s^2$ is large, the residual norms of $\bar{q}_i$ and $\bar{q}_j$ would suffer, too.

That the extracted singular vectors can be far from orthogonal even if the GK vectors are fine led Grosser to the conclusion that there must be a fundamental problem. Until recently we believed that as well [39, p. 914]. However, we will now set out to prove that with just a small additional requirement, Algorithm 4.1 will actually work. This is a new result, and it shows that there is no fundamental problem in using $MR^3$ on the Golub-Kahan matrix. Of particular interest is that the situation in Example 4.2--which, as we mentioned, would apply to all TSEP solvers on $T_{GK}$--can be avoided if $MR^3$ is deployed as in Algorithm 4.1.

The following definition will let us control the danger that the shifts within $MR^3$ lose information about the singular vectors.

DEFINITION 4.3. A subspace $\mathcal{S}$ of $\mathbb{R}^{2n}$ with orthonormal basis $(q_i)_{i \in I}$ is said to have GK structure if the systems $(u_i)_{i \in I}$ and $(v_i)_{i \in I}$ of vectors extracted according to

$u_i(k) = \sqrt{2}\, q_i(2k), \qquad v_i(k) = \sqrt{2}\, q_i(2k-1), \qquad k = 1, \dots, n,$

are each orthonormal.

The special property of a GK matrix is that all invariant subspaces belonging to (at most) the first or the second half of the spectrum have GK structure. As eigenvectors are shift-invariant, this property carries over to any matrix that can be written as $T_{GK}(B) - \mu$ for suitable $B$, which is just any symmetric tridiagonal matrix of even dimension with a constant diagonal.

The next lemma reveals that the $u$ and $v$ components of every vector within a subspace with GK structure have equal norm. Thus the actual choice of the orthonormal system $(q_i)$ in Definition 4.3 is irrelevant.

LEMMA 4.4. Let the subspace $\mathcal{S} \subseteq \mathbb{R}^{2n}$ have GK structure. Then for each $s \in \mathcal{S}$, the components $u_s$ and $v_s$ extracted from $s$ as in Definition 4.3 satisfy $\|u_s\| = \|v_s\|$.

Proof. As $\mathcal{S}$ has GK structure, we have an orthonormal basis $(q_1, \dots, q_m)$ for $\mathcal{S}$ with extracted components

$u_i(k) = \sqrt{2}\, q_i(2k), \qquad v_i(k) = \sqrt{2}\, q_i(2k-1),$

such that the systems $(u_i)$ and $(v_i)$ are each orthonormal. Each $s \in \mathcal{S}$ can be written as $s = \alpha_1 q_1 + \dots + \alpha_m q_m$, and therefore

$u_s = \alpha_1 u_1 + \dots + \alpha_m u_m, \qquad v_s = \alpha_1 v_1 + \dots + \alpha_m v_m.$

Since the $u_i$ and $v_i$ are orthonormal we have $\|u_s\|^2 = \alpha_1^2 + \dots + \alpha_m^2 = \|v_s\|^2$.

Now comes the proof of concrete error bounds for Algorithm 4.1. The additional requirement we need is that the local subspaces are kept "near" to GK structure. We will discuss how to handle this requirement in practice afterwards.

For simplicity we assume that the call to $MR^3$ in step 1 of Algorithm 4.1 produces perfectly normalized vectors, $\|\bar{q}_i\| = 1$, and that the multiplication by $\sqrt{2}$ in step 2 is done exactly.

THEOREM 4.5 (Proof of correctness for Algorithm 4.1). Let Algorithm 4.1 be executed such that the representation tree built by $MR^3$ satisfies all five requirements listed in Section 2.5. Furthermore, let each node $(M, I)$ have the property that a suitable perturbation $\widetilde{M}_{GK} = \mathrm{erp}(M, \xi_{GK})$ can be found such that the subspace $Q_I[\widetilde{M}_{GK}]$ has GK structure. Finally, let $\mathrm{resid}_{GK}$ and $\mathrm{orth}_{GK}$ denote the right-hand side bounds from Theorem 2.4 and from the second inequality in Theorem 2.5, respectively. Then the computed singular triplets will satisfy

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where $A := \mathrm{orth}_{GK} + C_{\mathrm{vecs}}\, n\,\xi_{GK}/\mathrm{gaptol}$.

Proof. As all requirements for $MR^3$ are fulfilled, Theorems 2.4 and 2.5 apply for the computed GK eigenpairs $(\bar{\sigma}_i, \bar{q}_i)$.

We will first deal with the third bound, concerning the residual norms. Invoke the definition of the Golub-Kahan matrix to see

$T_{GK}(B)\,\bar{q}_i - \bar{q}_i\bar{\sigma}_i = \dfrac{1}{\sqrt{2}}\, P_{ps} \begin{bmatrix} B^*\bar{u}_i - \bar{v}_i\bar{\sigma}_i \\ B\bar{v}_i - \bar{u}_i\bar{\sigma}_i \end{bmatrix},$

and then use Theorem 2.4 to obtain

$\max\big\{\|B\bar{v}_i - \bar{u}_i\bar{\sigma}_i\|,\ \|B^*\bar{u}_i - \bar{v}_i\bar{\sigma}_i\|\big\} \leq \sqrt{2}\,\|T_{GK}(B)\bar{q}_i - \bar{q}_i\bar{\sigma}_i\| \leq \sqrt{2}\,\mathrm{resid}_{GK}.$

For orthogonality, consider indices $i$ and $j$ and let $(M, N)$ be the last common ancestor of $i$ and $j$, i.e., the deepest node in the tree such that $i \in I$ and $j \in J$ for different child index sets $I, J \subseteq N$. The bound $\mathrm{orth}_{GK}$ on the right-hand side of the second inequality in Theorem 2.5 is just the worst case of the first inequality in that theorem, taken over all nodes in the tree. Hence we have

$\sin \angle\big(\bar{q}_i, Q_I[M]\big) \leq \mathrm{orth}_{GK}.$

As we assume that the representation $M$ fulfills Requirements RRR and RELGAPS, we can link $\bar{q}_i$ to the nearby matrix $\widetilde{M}_{GK}$ by

$\sin \angle\big(\bar{q}_i, Q_I[\widetilde{M}_{GK}]\big) \leq \mathrm{orth}_{GK} + C_{\mathrm{vecs}}\, n\,\xi_{GK}/\mathrm{gaptol} = A.$

This means we can find a unit vector $q \in Q_I[\widetilde{M}_{GK}]$ with $\sin \angle(\bar{q}_i, q) \leq A$.

Now $Q_I[\widetilde{M}_{GK}] \subseteq Q_N[\widetilde{M}_{GK}]$ has GK structure. By Lemma 4.4 we can therefore partition

$q = \dfrac{1}{\sqrt{2}}\, P_{ps} \begin{bmatrix} v \\ u \end{bmatrix}, \qquad \|u\| = \|v\| = 1.$

Let $U_I[\widetilde{M}_{GK}]$ denote the subspace spanned by the $u$ components of vectors in $Q_I[\widetilde{M}_{GK}]$. Thus $u \in U_I[\widetilde{M}_{GK}]$, and Lemma 3.1 gives

$\sin \angle\big(\bar{u}_i, U_I[\widetilde{M}_{GK}]\big) \leq \sqrt{2}\, A,$

as well as the desired property $\|\bar{u}_i\| = 1 + O(A)$ for the norms. Repeat the steps above for $j$ to arrive at $\sin \angle(\bar{u}_j, U_J[\widetilde{M}_{GK}]) \leq \sqrt{2}\, A$. We can write

$\bar{u}_i = x + r_x, \quad x \in U_I[\widetilde{M}_{GK}],\ \|r_x\| \leq \sqrt{2}A(1 + O(A)), \qquad \bar{u}_j = y + r_y, \quad y \in U_J[\widetilde{M}_{GK}],\ \|r_y\| \leq \sqrt{2}A(1 + O(A)).$

Since $Q_N[\widetilde{M}_{GK}]$ has GK structure and $I \cap J = \emptyset$, the spaces $U_I[\widetilde{M}_{GK}]$ and $U_J[\widetilde{M}_{GK}]$ are orthogonal, and in particular $x \perp y$. Therefore

$|\bar{u}_i^*\bar{u}_j| = |x^*r_y + r_x^*y + r_x^*r_y| \leq \|x\|\|r_y\| + \|r_x\|\|y\| + \|r_x\|\|r_y\| \leq 2\sqrt{2}A + O(A^2),$

where we made use of $x^*y = 0$ for the first inequality and invoked the Cauchy-Schwarz inequality for the second one. Together with $\|\bar{u}_i\| = \|\bar{u}_j\| = 1 + O(A)$, this yields

$\cos \angle(\bar{u}_i, \bar{u}_j) \leq 2\sqrt{2}A + O(A^2).$

The bounds for the right singular vectors $v_i$ are obtained analogously.

One conclusion from Theorem 4.5 is that it really does not matter whether we extract the singular vectors as done in step 2 of Algorithm 4.1, by multiplying the subvectors of $\bar{q}$ by $\sqrt{2}$, or whether we normalize them explicitly.

The new requirement that was introduced in Theorem 4.5 is stated minimally, namely that the representations $M$ can be perturbed to yield local invariant subspaces with GK structure. In this situation we say that the subspace of $M$ "nearly" has GK structure. At the moment we do not see a way to test specifically for this property. However, we do know that any even-dimensioned symmetric tridiagonal matrix with a constant diagonal is just a shifted Golub-Kahan matrix, so trivially each subspace (within one half) has GK structure. Let us capture this.

DEFINITION 4.6. If for a given representation of a symmetric tridiagonal matrix $M$ there exists an elementwise relative perturbation

$\widetilde{M} = \mathrm{erp}(M, \xi) \quad \text{such that} \quad \widetilde{M}(i,i) \equiv c,$

then we say that $M$ has a nearly constant diagonal, in short $M$ is ncd, or, if more detail is to be conveyed, $M \in \mathrm{ncd}(c)$ or $M \in \mathrm{ncd}(c, \xi)$.

Clearly, the additional requirement for Theorem 4.5 is fulfilled if all representations in the tree are ncd. Note that a representation being ncd does not necessarily imply that all diagonal entries are about equal, because there might be large local element growth. For example, $LDL^*$ can be ncd even if $|d_i| \gg |(LDL^*)(i,i)|$ for some index $i$; cf. Example 4.8 below.

The ncd property can easily and cheaply be verified in practice; e.g., for an $LDL^*$ factorization one checks that each diagonal entry $d_i + l_{i-1}^2 d_{i-1}$ of $LDL^*$, $i > 1$, equals the constant $c$ up to relative changes of order $\xi$ in the data. Note that the successively shifted descendants of a Golub-Kahan matrix can only violate the ncd property if there was large local element growth at some diagonal entries on the way.
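As one possible realization (our own formulation; the text leaves the precise test implicit), the following sketch checks whether the diagonal of $LDL^*$ can be brought to a constant $c$ by relative changes of order $\xi$ in the data:

import numpy as np

def is_ncd(d, l, c, xi):
    # ncd test for an LDL^* representation (sketch): the i-th diagonal
    # entry of LDL^* is d_i + l_{i-1}^2 d_{i-1}; we ask whether it can
    # equal c under relative perturbations of size ~xi in d_i, l_i.
    d, l = np.asarray(d, float), np.asarray(l, float)
    diag = d + np.concatenate(([0.0], l ** 2 * d[:-1]))
    slack = np.abs(d) + np.concatenate(([0.0], l ** 2 * np.abs(d[:-1])))
    return bool(np.all(np.abs(diag - c) <= 4.0 * xi * slack))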

REMARK 4.7. Since Theorem 4.5 needs the requirement SHIFTREL anyway, the shifts $T_{GK}(B) - \mu = M^+$ to get to level one must be executed with mixed relative stability. Therefore, all representations on level one will automatically be ncd and as such always fulfill the extra requirement of having subspaces near to GK structure, independent of element growth or relative condition numbers.

The preceding theoretical results will be demonstrated in action by numerical experiments in Section 5. Those will confirm that Algorithm 4.1 is indeed a valid solution strategy for BSVD. However, it will also become apparent that working with a Golub-Kahan matrix as root can sometimes be problematic in practice. The reason is that Golub-Kahan matrices are highly vulnerable to element growth when confronted with a tiny shift.

EXAMPLE 4.8 (cf. [36, Example 1.2]). Let $\alpha \ll 1$ (e.g., $\alpha \sim \epsilon_\diamond$) and consider the bidiagonal matrix [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] with singular values $\sigma_1 \approx \alpha$ and $\sigma_2 \approx 2$. Shifting $T_{GK}(B)$ by $-\alpha$ gives

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

with [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. Clearly there is huge local element growth in $D(2)$. This $LDL^*$ still is ncd, but if we had to shift it again the property would probably be lost completely.

The point is that we really have no way to avoid a tiny shift if clusters of tiny singular values are present. In [35, 36] a generalization of twisted factorizations called block factorizations is investigated. The latter are especially suited for shifting Golub-Kahan matrices and essentially render the above concerns obsolete.

5. Numerical results. In this section we present the results that were obtained with our prototype implementation of Algorithm 4.1, XMR-TGK, on two test sets, Pract and Synth. We also compare to XMR-CPL, which implements the coupling approach for running $MR^3$ on the normal equations; cf. Section 3.3.1.

Most of the bidiagonal matrices in the test sets were obtained from tridiagonal problems T in two steps: (1) T was scaled and split to enforce [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (2) For each unreduced subproblem we chose a shift to allow a Cholesky decomposition, yielding an upper bidiagonal matrix.

The Pract test set contains 75 bidiagonal matrices with dimensions up to 6245. They were obtained in the above manner from tridiagonal matrices from various applications. For more information about the specific matrices see [5], where the same set was used to evaluate the symmetric eigensolvers in LAPACK.

The Synth set contains 19240 bidiagonal matrices that stem from artificially generated tridiagonal problems, including standard types like Wilkinson matrices as well as matrices with eigenvalue distributions built into LAPACK's test matrix generator DLATMS. In fact, all artificial types listed in [29] are present.

For each of these basic types, all tridiagonal matrices up to dimension 100 were generated. Then these were split according to step (1) above. For the resulting tridiagonal subproblems we made two further versions by gluing [9, 33] them to themselves: either two copies with a small glue $\approx \|T\| n\epsilon_\diamond$, or three copies with two medium $O(\|T\| n\sqrt{\epsilon_\diamond})$ glues. Finally, step (2) above was used to obtain bidiagonal factors of all unreduced tridiagonal matrices.

Further additions to Synth include some special bidiagonal matrices $B$ that were originally devised by Benedikt Grosser. These were glued as well. However, special care was taken that step (1) above would not affect the matrix $B^*B$ for any of these extra additions.

The code XMR-TGK is based on a prototype $MR^3$ TSEP solver, XMR, which essentially implements Algorithm 2.1. XMR differs from the LAPACK implementation DSTEMR mainly in the following points.

* DSTEMR relies on twisted factorizations $T = N_kG_kN_k^*$, represented by the nontrivial entries $d_1, \dots, d_{k-1}, \gamma_k, r_{k+1}, \dots, r_n$ from the matrix $G_k$ in (2.1) and the offdiagonal entries $l_1, \dots, l_{k-1}, u_{k+1}, \dots, u_n$ from $N_k$, whereas XMR uses the same entries from $G_k$, together with the $n-1$ offdiagonal entries of the tridiagonal matrix $T$. This "e-representation" provides somewhat smaller error bounds at comparable cost; see [35, 38] for more details.

* Even if relative robustness (Requirement RRR) and moderate element growth (Requirement ELG) cannot always be guaranteed before actually performing a shift, sufficient a priori criteria are available. These have been improved in XMR.

* Several other modifications have been incorporated to enhance robustness and efficiency, e.g., in the interplay of Rayleigh quotient iteration and bisection, and in the bisection strategy.

An optimized production implementation of XMR is described in [37].

XMR-TGK adapts the tridiagonal XMR to the BSVD by using $T_{GK}(B)$ as root representation. To cushion the effect of moderate element growth on the diagonal we also switched to using "Z-representations" for the children nodes. These representations again use the entries of $G_k$, together with $n-1$ further quantities [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], and they provide even sharper error bounds, albeit at higher cost; cf. [35, 38]. In addition to the checks in XMR, a shift candidate has to be $\mathrm{ncd}(-\bar{\mu}, 32n\epsilon_\diamond)$ in order to be considered acceptable a priori; see the discussion following Theorem 4.5.

As the coupled approach is not discussed in detail in the present paper, we can only briefly hint at the main features of the implementation XMR-CPL; see [35] for more details. XMR-CPL essentially performs XMR on the Golub-Kahan matrix ("central layer") and uses coupling relations to implicitly run the $MR^3$ algorithm simultaneously on the matrices $BB^*$ and $B^*B$ as well ("outer layers"). Just like XMR-TGK it uses Z-representations in the central layer, and the representations there have to fulfill the same ncd condition, but the other a priori acceptance conditions from XMR are only checked for the outer representations. Eigenvalue refinements are done on the side that gives the better a priori bound for the relative condition. To counter the fact that for the coupled approach we cannot prove that SHIFTREL always holds, appropriate consistency checks with Sturm counts are done for both outer representations.

Table 5.1 summarizes the orthogonality levels and residual norms of XMR-[T.sub.GK] and XMR-CPL on the test sets. XMR-[T.sub.GK] works amazingly well. Indeed, the extracted vectors have better orthogonality than what LAPACK's implementation DSTEMR provides for B*B alone, and they are not much worse than those delivered by XMR.
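For reference, the scaled measures reported in Table 5.1 can be evaluated along the following lines (a sketch; we take the entrywise maximum for the orthogonality level and column-wise 2-norms for the residuals, which is one of several equivalent-in-spirit conventions):

    import numpy as np

    def bsvd_accuracy(B, U, sigma, V):
        """Scaled measures as in Table 5.1: orthogonality level
        max{|U^T U - I|, |V^T V - I|} / (n eps) and residual norm
        max_i{||B v_i - sigma_i u_i||, ||B^T u_i - sigma_i v_i||}
        / (||B|| n eps)."""
        n = B.shape[0]
        eps = np.finfo(float).eps
        I = np.eye(n)
        orth = max(np.abs(U.T @ U - I).max(),
                   np.abs(V.T @ V - I).max()) / (n * eps)
        res = max(np.linalg.norm(B @ V - U * sigma, axis=0).max(),
                  np.linalg.norm(B.T @ U - V * sigma, axis=0).max())
        return orth, res / (np.linalg.norm(B, 2) * n * eps)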

The coupled approach also works well on Pract, but has undeniable problems with Synth. Indeed, not shown in the tables is that for 24 of the cases in Synth, XMR-CPL failed to produce up to 2.04% of the singular triplets. The reason is that for those cases there were clusters where none of the tried shift candidates passed the aforementioned consistency checks on the child eigenvalue bounds, which replace the missing SHIFTREL guarantee. Note that these failures are not errors, since the code did flag the affected triplets as not computed.

Finally, let us consider the matrix from Example 4.1, which yields unsatisfactory orthogonality with a "black box" [MR.sup.3] on the Golub-Kahan matrix (see Example 4.1) and large residuals with a black box [MR.sup.3] on the normal equations (not shown in this paper). By contrast, both XMR-[T.sub.GK] and XMR-CPL solve this problem with worst orthogonality levels of 1.15n[[epsilon].sub.[??]] and BSVD-residual norms of 0.68[parallel]B[parallel]n[[epsilon].sub.[??]]. Interestingly, these two numbers are identical for both methods, whereas the computed vectors differ.

The accuracy results alone would suggest that the coupled approach is clearly outclassed by using [MR.sup.3] on the Golub-Kahan matrix in the fashion of Algorithm 4.1; efficiency, however, is a different matter. Counting the subroutine calls reveals that XMR-CPL does more bisections (for checking the couplings) and more RQI steps (to compute the second vector), but these operations are on matrices of size n, whereas the matrices in XMR-[T.sub.GK] all have size 2n. Thus we expect XMR-CPL to perform about 20-30% faster than XMR-[T.sub.GK].

These results in fact give rise to a third method for BSVD, namely a combination of the first two: use [MR.sup.3] on the Golub-Kahan matrix [T.sub.GK](B) as in Algorithm 4.1, but employ the coupling relations to outsource the expensive eigenvalue refinements to smaller matrices of half the size. This approach would retain the increased accuracy of XMR-[T.sub.GK] at reduced cost, without the need for coupling checks. The catch is that we still need the "central layer" (translates of [T.sub.GK]) to be robust for XMR-[T.sub.GK], but to do the eigenvalue computations with one "outer layer" (translates of BB* or B*B) the representation there has to be robust as well. This would be a consequence of Theorem 5.2 in [21], but its proof contains a subtle error. The combined method is new and sounds promising, in particular if block factorizations (introduced in [35, 36]) are used to increase the accuracy. At the moment we favor XMR-[T.sub.GK], because it leads to a much leaner implementation and can profit directly from any improvement in the underlying tridiagonal [MR.sup.3] algorithm.

Acknowledgments. The authors want to thank Osni Marques and Christof Vomel for providing them with the Pract test matrices and the referees for their helpful suggestions.

REFERENCES

[1] E. ANDERSON, Z. BAI, C. H. BISCHOF, L. S. BLACKFORD, J. W. DEMMEL, J. J. DONGARRA, J. DU CROZ, A. GREENBAUM, S. J. HAMMARLING, A. MCKENNEY, AND D. SORENSEN, LAPACK Users' Guide, 3rd ed., SIAM, Philadelphia, 1999.

[2] C. H. BISCHOF, B. LANG, AND X. SUN, A framework for symmetric band reduction, ACM Trans. Math. Software, 26 (2000), pp. 581-601.

[3] J. J. M. CUPPEN, A divide and conquer method for the symmetric tridiagonal eigenproblem, Numer. Math., 36 (1981), pp. 177-195.

[4] J. W. DEMMEL AND W. KAHAN, Accurate singular values of bidiagonal matrices, SIAM J. Sci. Statist. Comput., 11 (1990), pp. 873-912.

[5] J. W. DEMMEL, O. A. MARQUES, B. N. PARLETT, AND C. VOMEL, Performance and accuracy of LAPACK's symmetric tridiagonal eigensolvers, SIAM J. Sci. Comput., 30 (2008), pp. 1508-1526.

[6] I. S. DHILLON, A new O([n.sup.2]) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem, Ph.D. Thesis, Computer Science Division, University of California, Berkeley, 1997.

[7] I. S. DHILLON AND B. N. PARLETT, Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices, Linear Algebra Appl., 387 (2004), pp. 1-28.

[8] --, Orthogonal eigenvectors and relative gaps, SIAM J. Matrix Anal. Appl., 25 (2004), pp. 858-899.

[9] I. S. DHILLON, B. N. PARLETT, AND C. VOMEL, Glued matrices and the MRRR algorithm, SIAM J. Sci. Comput., 27 (2005), pp. 496-510.

[10] Z. DRMAC AND K. VESELIC, New fast and accurate Jacobi SVD algorithm I., SIAM J. Matrix Anal. Appl., 29 (2007), pp. 1322-1342.

[11] --, New fast and accurate Jacobi SVD algorithm II., SIAM J. Matrix Anal. Appl., 29 (2007), pp. 1343-1362.

[12] K. V. FERNANDO, On computing an eigenvector of a tridiagonal matrix. I. Basic results, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 1013-1034.

[13] --, Accurately counting singular values of bidiagonal matrices and eigenvalues of skew-symmetric tridiagonal matrices, SIAM J. Matrix Anal. Appl., 20 (1998), pp. 373-399.

[14] K. V. FERNANDO AND B. N. PARLETT, Accurate singular values and differential qd algorithms, Numer. Math., 67 (1994), pp. 191-229.

[15] J. G. F. FRANCIS, The QR transformation: a unitary analogue to the LR transformation I., Comput. J., 4 (1961/62), pp. 265-272.

[16] --, The QR transformation II., Comput. J., 4 (1961/62), pp. 332-345.

[17] D. GOLDBERG, What every computer scientist should know about floating-point arithmetic, ACM Computing Surveys, 23 (1991), pp. 5-48.

[18] G. H. GOLUB AND W. KAHAN, Calculating the singular values and pseudo-inverse of a matrix, J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2 (1965), pp. 205-224.

[19] B. GROSSER, Ein paralleler und hochgenauer O([n.sup.2]) Algorithmus fur die bidiagonale Singularwertzerlegung, Ph.D. Thesis, Fachbereich Mathematik, Bergische Universitat Gesamthochschule Wuppertal, Wuppertal, Germany, 2001.

[20] B. GROSSER AND B. LANG, An O([n.sup.2]) algorithm for the bidiagonal SVD, Linear Algebra Appl., 358 (2003), pp. 45-70.

[21] --, On symmetric eigenproblems induced by the bidiagonal SVD, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 599-620.

[22] M. GU AND S. C. EISENSTAT, A divide-and-conquer algorithm for the symmetric tridiagonal eigenproblem, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 172-191.

[23] IEEE, IEEE Standard 754-1985 for Binary Floating-Point Arithmetic, Aug. 1985.

[24] --, IEEE Standard 754-2008 for Floating-Point Arithmetic, Aug. 2008.

[25] W. KAHAN, Accurate eigenvalues of a symmetric tri-diagonal matrix, Tech. Report CS41, Computer Science Department, Stanford University, July 1966.

[26] --, Lecture notes on the status of IEEE standard 754 for binary floating point arithmetic, 1995. http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF.

[27] B. LANG, Reduction of banded matrices to bidiagonal form, Z. Angew. Math. Mech., 76 (1996), pp. 155-158.

[28] R.-C. LI, Relative perturbation theory. I. Eigenvalue and singular value variations, SIAM J. Matrix Anal. Appl., 19 (1998), pp. 956-982.

[29] O. A. MARQUES, C. VOMEL, J. W. DEMMEL, AND B. N. PARLETT, Algorithm 880: a testing infrastructure for symmetric tridiagonal eigensolvers, ACM Trans. Math. Software, 35 (2008), pp. 8:1-8:13.

[30] B. N. PARLETT, The Symmetric Eigenvalue Problem, Prentice Hall, Englewood Cliffs, NJ, 1980.

[31] B. N. PARLETT AND I. S. DHILLON, Fernando's solution to Wilkinson's problem: an application of double factorization, Linear Algebra Appl., 267 (1997), pp. 247-279.

[32] B. N. PARLETT AND O. A. MARQUES, An implementation of the dqds algorithm (positive case), Linear Algebra Appl., 309 (2000), pp. 217-259.

[33] B. N. PARLETT AND C. VOMEL, The spectrum of a glued matrix, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 114-132.

[34] H. RUTISHAUSER, Der Quotienten-Differenzen-Algorithmus, Z. Angew. Math. Phys., 5 (1954), pp. 233-251.

[35] P. R. WILLEMS, On [MR.sup.3]-type Algorithms for the Tridiagonal Symmetric Eigenproblem and the Bidiagonal SVD, Ph.D. Thesis, Fachbereich Mathematik und Naturwissenschaften, Bergische Universitat Wuppertal, Wuppertal, Germany, 2010.

[36] P. R. WILLEMS AND B. LANG, Block factorizations and qd-type transformations for the [MR.sup.3] algorithm, Electron. Trans. Numer. Anal., 38 (2011), pp. 363-400. http://etna.math.kent.edu/vol.38.2011/pp363-400.dir.

[37] --, A framework for the [MR.sup.3] algorithm: theory and implementation, Preprint BUW-SC 2011/2, Fachbereich Mathematik und Naturwissenschaften, Bergische Universitat Wuppertal, Wuppertal, Germany, 2011.

[38] --, Twisted factorizations and qd-type transformations for the [MR.sup.3] algorithm--new representations and analysis, Preprint BUW-SC 2011/3, Fachbereich Mathematik und Naturwissenschaften, Bergische Universitat Wuppertal, Wuppertal, Germany, 2011.

[39] P. R. WILLEMS, B. LANG, AND C. VOMEL, Computing the bidiagonal SVD using multiple relatively robust representations, SIAM J. Matrix Anal. Appl., 28 (2007), pp. 907-926.


PAUL R. WILLEMS ([dagger]) AND BRUNO LANG ([double dagger])

([dagger]) (willems@math.uni-wuppertal.de).

([double dagger]) University of Wuppertal, Faculty of Mathematics and Natural Sciences, Gaußstr. 20, D-42097 Wuppertal (lang@math.uni-wuppertal.de).

* Received November 25, 2011. Accepted January 3, 2012. Published online March 5, 2012. Recommended by M. Hochstenbach. This work was carried out while P. Willems was with the Faculty of Mathematics and Natural Sciences at the University of Wuppertal. The research was partially funded by the Bundesministerium fur Bildung und Forschung, contract number 01IH 08 007 B, within the project ELPA (Eigenwert-Loser fur Petaflop-Anwendungen).
TABLE 5.1
Comparison of the orthogonality levels max{|U*U - I|, |V*V - I|} / n[[epsilon].sub.[??]] and the residual norms [max.sub.i]{[parallel]B[[bar.v].sub.i] - [[bar.u].sub.i][[bar.[sigma]].sub.i][parallel], [parallel]B*[[bar.u].sub.i] - [[bar.v].sub.i][[bar.[sigma]].sub.i][parallel]} / [parallel]B[parallel]n[[epsilon].sub.[??]] of XMR-TGK and XMR-CPL. The lines below MAX give the percentages of test cases with maximum residual and loss of orthogonality, respectively, in the indicated ranges.

                                      Pract (75 cases)      Synth (19240 cases)
                                     XMR-TGK   XMR-CPL      XMR-TGK   XMR-CPL

  Orthogonality level max{|U*U - I|, |V*V - I|} / n[[epsilon].sub.[??]]
  AVG                                  5.35     10.71         5.34      6.33
  MED                                  2.71      2.44         1.38      1.01
  MAX                                 48.40       154         3095     27729
  0 ... 10                           81.33%    82.67%       92.59%    91.04%
  10 ... 100                         18.67%    14.67%        7.04%     8.61%
  100 ... 200                            --     2.67%        0.12%     0.21%
  200 ... 500                            --        --        0.11%     0.10%
  500 ... [10.sup.3]                     --        --        0.07%     0.02%
  [10.sup.3] ... [10.sup.6]              --        --        0.06%     0.03%

  Residual norms [max.sub.i]{[parallel]B[[bar.v].sub.i] - [[bar.u].sub.i][[bar.[sigma]].sub.i][parallel], [parallel]B*[[bar.u].sub.i] - [[bar.v].sub.i][[bar.[sigma]].sub.i][parallel]} / [parallel]B[parallel]n[[epsilon].sub.[??]]
  AVG                                  0.35     15.78         0.45      3.14
  MED                                  0.07      1.37         0.13      0.72
  MAX                                  4.19       453          118      6873
  0 ... 1                            92.00%    34.67%       84.96%    57.45%
  1 ... 10                            8.00%    50.67%       15.03%    35.50%
  10 ... 100                             --     8.00%           --     7.00%
  > 100                                  --     6.67%        0.01%     0.06%