Revisiting the inverse field of values problem.

Dedicated to Lothar Reichel on the occasion of his 60th birthday

1. Introduction. Let (H, ⟨·,·⟩) be a Hilbert space and B(H) denote the set of bounded linear operators H → H. The field of values (also known as the numerical range) of a linear operator T : H → H is the set of complex numbers

F(T) = {⟨Tw, w⟩/⟨w, w⟩ : w ∈ H, ⟨w, w⟩ ≠ 0}.

Thus, the field of values is the set of all Rayleigh quotients of T. The numerical range is a useful tool in the study of matrices and operators (see, e.g., [5, 7] and references therein), which has applications in the stability analysis of dynamical systems and the convergence theory of matrix iterations, among others.
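To make the definition concrete, here is a small pure-Python illustration (an ad-hoc example, not code from the paper): sampling Rayleigh quotients of a Hermitian 2 x 2 matrix, whose field of values is the real segment between its extreme eigenvalues.

```python
import math
import random

def rayleigh(T, w):
    # Rayleigh quotient <Tw, w>/<w, w> of a 2 x 2 complex matrix T
    Tw = [T[0][0] * w[0] + T[0][1] * w[1],
          T[1][0] * w[0] + T[1][1] * w[1]]
    num = Tw[0] * w[0].conjugate() + Tw[1] * w[1].conjugate()
    den = abs(w[0]) ** 2 + abs(w[1]) ** 2
    return num / den

# Hermitian example: F(T) should be the segment [lam_min, lam_max]
T = [[2, 1 - 1j], [1 + 1j, 3]]
mean = (T[0][0].real + T[1][1].real) / 2
rad = math.hypot((T[0][0].real - T[1][1].real) / 2, abs(T[0][1]))
lam_min, lam_max = mean - rad, mean + rad      # here 1 and 4

rng = random.Random(1)
samples = [rayleigh(T, [complex(rng.gauss(0, 1), rng.gauss(0, 1)),
                        complex(rng.gauss(0, 1), rng.gauss(0, 1))])
           for _ in range(1000)]
```

Every sampled Rayleigh quotient is real up to rounding and lies between the two eigenvalues, as expected for a Hermitian matrix.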

The field of values is a convex and compact set. The compactness follows readily from the fact that F(T) is the image of the compact unit sphere in H under the continuous mapping x → ⟨Tx, x⟩. The convexity of F(T) was pointed out by Toeplitz [11] and Hausdorff [6] in the first decades of the last century.

Given a point z ∈ F(T), the inverse field of values problem is to determine a unit vector w_z ∈ H such that z = ⟨Tw_z, w_z⟩, in which case z is referred to as a Ritz value. Such a vector w_z is called a generating vector for z. The inverse field of values problem was first posed by Uhlig in [13], motivated by the importance of the condition 0 ∈ F(T) for the stability of continuous or discrete systems ẋ = Tx or x_{k+1} = Tx_k. The problem is to find a solution w_z ∈ H of the two coupled equations

w_z^* T w_z = z,    w_z^* w_z = 1.

Hence, this is an algebraic problem consisting of a system of two complex quadratic equations in the complex components of w_z. An algebraic solution can be computed by computer algebra systems such as Mathematica, but this works only for moderate dimensions. An analytic approach using the Lagrange multiplier formalism also makes sense, but it is feasible only for low dimensions. We are interested in finding solution vectors for dimensions larger than those that algebraic or analytic methods can handle, such as n of the order of hundreds or thousands.

Following Uhlig, we shall use the acronym "FOV" for "field of values". The inverse FOV (iFOV) problem attracted the attention of several authors, e.g., Carden [2] and Meurant [10], and different methods of solution have been proposed. A method that is simpler and faster than the existing ones was presented in [3]; it provides accurate numerical results where the previous ones often fail, namely for points very close to the boundary of the field of values. In these algorithms, most of the computing time is spent computing eigenvalues and eigenvectors of the Hermitian part of the matrix. In [2, 3] the minimum number of eigenvectors and eigenvalues analysed is two.

This paper has two main goals. The first one is to provide algorithms to plot quickly and accurately the field of values. The second goal is to revisit the iFOV problem and to propose an alternative and simpler approach. Our method is conceptually straightforward since it is essentially based on the reduction to the 2 x 2 case. It requires a small number of eigenvector computations, sometimes only one, and compares well in execution time and in error with the existing ones in the literature.

This paper is organized as follows. In Section 2, the main ideas used in our method are presented. In Section 3, two alternative algorithms for plotting the field of values of a general complex matrix are proposed. We emphasize that these algorithms play a crucial role in the solution of the iFOV problem and remarkably improve the approximation of the FOV's boundary. Section 4 provides an overview of the approaches to the iFOV problem. In Section 5, this problem is solved for a matrix of arbitrary size by a reduction to the two-dimensional case. In Section 6, some numerical examples that illustrate the theory are given, and the different approaches are compared. The figures included are numerically computed by using MATLAB 7.8.0.347 (R2009a).

2. General properties of F. From now on we shall consider H = C^n. Let M_n be the algebra of n x n complex matrices. Writing the Cartesian decomposition of T ∈ M_n, that is, T = H(T) + iK(T), where

H(T) = (T + T^*)/2   and   K(T) = (T − T^*)/(2i)

are Hermitian matrices, we can easily conclude that F(H(T)) is the orthogonal projection of F(T) onto the real axis and F(K(T)) is the orthogonal projection of F(T) onto the imaginary axis.
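As a quick pure-Python check of the decomposition (an illustrative sketch with an arbitrary example matrix, not code from the paper):

```python
def adjoint(T):
    # conjugate transpose of a square complex matrix given as nested lists
    n = len(T)
    return [[T[j][i].conjugate() for j in range(n)] for i in range(n)]

def cartesian_parts(T):
    # H(T) = (T + T^*)/2 and K(T) = (T - T^*)/(2i)
    n, Ts = len(T), adjoint(T)
    H = [[(T[i][j] + Ts[i][j]) / 2 for j in range(n)] for i in range(n)]
    K = [[(T[i][j] - Ts[i][j]) / 2j for j in range(n)] for i in range(n)]
    return H, K

def is_hermitian(A, tol=1e-12):
    As, n = adjoint(A), len(A)
    return all(abs(A[i][j] - As[i][j]) < tol
               for i in range(n) for j in range(n))

T = [[1 + 2j, 3, 0], [1j, 4 - 1j, 2], [5, 0, -1j]]   # arbitrary example
H, K = cartesian_parts(T)
```

Both parts come out Hermitian, and T = H + iK is recovered entrywise.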

We recall that a supporting line of a convex set S ⊂ C is a line containing a boundary point of S and defining two half-planes such that one of them does not contain S. For θ ∈ [0, 2π), if we consider a supporting line of F(T) perpendicular to the direction of slope θ, the orthogonal projection of the numerical range onto this direction is given by F(H(e^{-iθ}T)).

The following well-known properties of F are directly related to the subject of this note. For their proofs we refer the interested reader to [7].

1. σ(T) ⊂ F(T), where σ(T) denotes the spectrum (set of eigenvalues) of T.

2. For any unitary matrix U, F(U^*TU) = F(T).

3. For any complex number z, F(T + zI) = F(T) + z.

4. F(T) is a real line segment if and only if T is Hermitian.

5. If T is normal, then F(T) = Co{σ(T)}, where Co{·} denotes the convex hull of {·}.

6. Let x_θ be a unit eigenvector associated with the largest or the smallest eigenvalue of H(e^{-iθ}T). Then z_θ = x_θ^* T x_θ ∈ ∂F(T), the boundary of F(T). Furthermore, it holds that z_θ ∈ L_θ ∩ F(T), where L_θ is the tangent line at z_θ.

7. The boundary of F(T) is a piecewise algebraic curve, and each of its non-differentiable boundary points is an eigenvalue of T.

2.1. F as the union of elliptical discs. For a two-dimensional orthogonal projection P, the restriction of PTP to the range of P is called a two-dimensional compression of T. For completeness, we recall the following important result, usually known as the Elliptical Range Theorem (for a proof see, e.g., [7] or [14]).

THEOREM 2.1. Let T ∈ M_2. Then F(T) is a (possibly degenerate) closed elliptical disc whose foci are the eigenvalues λ_1 and λ_2 of T. The Cartesian equation of the boundary of this elliptical disc is

X^2/M^2 + Y^2/N^2 = 1/4,

where

X = (x − Re(c)) cos γ + (y − Im(c)) sin γ,

Y = −(x − Re(c)) sin γ + (y − Im(c)) cos γ,

c = (λ_1 + λ_2)/2 is the center of the ellipse, and γ is the inclination of the line segment with endpoints λ_1 and λ_2. The lengths of the major and minor axes of the ellipse are

M = √(tr(T^*T) − |λ_1|^2 − |λ_2|^2 + |λ_1 − λ_2|^2)   and   N = √(tr(T^*T) − |λ_1|^2 − |λ_2|^2),

respectively.
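Numerically, the theorem can be exercised on a matrix already in the Schur form used later in Section 4.1 (a hedged pure-Python sketch: the explicit axis-length formulas M^2 = tr(T^*T) − |λ_1|^2 − |λ_2|^2 + |λ_1 − λ_2|^2 and N^2 = tr(T^*T) − |λ_1|^2 − |λ_2|^2 are the standard ones and are assumed here):

```python
import math

a, c = 1.0, 2.0                      # example Schur-form parameters
T = [[a, c], [0.0, -a]]              # eigenvalues +a and -a are the foci
l1, l2 = a, -a

# standard axis-length formulas for the elliptical range theorem
frob2 = sum(abs(T[i][j]) ** 2 for i in range(2) for j in range(2))  # tr(T^* T)
N = math.sqrt(frob2 - abs(l1) ** 2 - abs(l2) ** 2)   # minor axis length
M = math.sqrt(N ** 2 + abs(l1 - l2) ** 2)            # major axis length

# cross-check: the rightmost point of F(T) is the largest eigenvalue of
# H(T) = [[a, c/2], [c/2, -a]], which must equal the semi-major axis M/2
lam_max_H = math.sqrt(a ** 2 + (c / 2) ** 2)
```

For this T the minor axis length equals c and M/2 coincides with the largest eigenvalue of H(T), confirming the formulas on a concrete case.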

The following result is fundamental in our approach to the iFOV problem.

THEOREM 2.2. (Marcus and Pesce [9]) For any T ∈ M_n,

F(T) = ∪_{u,v} F(T_{u,v}),

where

(2.1)    T_{u,v} = [ u^*Tu   u^*Tv ]
                   [ v^*Tu   v^*Tv ],

with u and v varying over all pairs of real orthonormal vectors.

The Marcus-Pesce Theorem is applicable to the problem of devising an effective procedure for constructing the field of values of an arbitrary n x n complex matrix. The idea is to generate a reasonably complete set of orthonormal pairs u, v, not necessarily real, and to actually compute the union of the elliptical discs F(T_{u,v}). This is the so-called inside-out approach, an alternative to the outside-in approach (see, e.g., [8]), which consists of determining the smallest and the largest eigenvalues of H(e^{-iθ}T) for various values of θ ∈ [0, 2π). These eigenvalues provide sharp approximations to the projection of the field of values onto the direction θ. The collection of these sharp bounds gives an outer polygonal approximation to F(T), whose sides are tangent to ∂F(T) (see property 6 in Section 2). This procedure may be extended to the infinite-dimensional setting as well as to unbounded linear operators.
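The containment F(T_{u,v}) ⊆ F(T) that drives the inside-out approach rests on the identity ⟨T_{u,v}η, η⟩ = ⟨Tw, w⟩ for w = η_1 u + η_2 v, which the following pure-Python sketch verifies (an ad-hoc 3 x 3 example with an orthonormal pair chosen by hand, not the paper's code):

```python
import math

def matvec(A, x):
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]

def inner(x, y):
    # <x, y> = y^* x, linear in the first argument
    return sum(xi * yi.conjugate() for xi, yi in zip(x, y))

T = [[0, 1, 2j], [0, 1j, 1], [1, 0, -1]]        # arbitrary 3 x 3 example
u = [1, 0, 0]
v = [0, 1 / math.sqrt(2), 1j / math.sqrt(2)]    # unit vector orthogonal to u

# two-dimensional compression (2.1): entries u^*Tu, u^*Tv, v^*Tu, v^*Tv
Tuv = [[inner(matvec(T, u), u), inner(matvec(T, v), u)],
       [inner(matvec(T, u), v), inner(matvec(T, v), v)]]

eta = [0.6, 0.8j]                               # a unit vector in C^2
w = [eta[0] * u[k] + eta[1] * v[k] for k in range(3)]
lhs = inner(matvec(Tuv, eta), eta)              # Rayleigh value in the compression
rhs = inner(matvec(T, w), w)                    # same value for the full matrix
```

Since every Rayleigh quotient of the compression is a Rayleigh quotient of T itself, each elliptical disc F(T_{u,v}) sits inside F(T).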

3. Algorithms for plotting F for a general complex matrix. The main purpose of this section is to provide two alternative algorithms for numerically determining F(T) within some prescribed tolerance tol. These algorithms are used in our solution method for the iFOV problem. In [9], an MS-BASIC program for plotting F(T) was presented based on formula (2.1) in the case of T being a real matrix. Our algorithms improve the Marcus-Pesce algorithm as they work efficiently for complex matrices and large dimensions. Instead of using randomly generated vectors u and v, we make use of suitably chosen vectors u and v which generate boundary points of the numerical range. Throughout, we denote by span{u, v} the subspace generated by the linearly independent vectors u, v, and by û, v̂ two orthonormal vectors in span{u, v}. (1)

Algorithm A

I. Check if T is Hermitian. If so, compute its smallest and largest eigenvalues; the segment joining them is F(T). If not, proceed to the next step.

II. Check T for normality (TT^* = T^*T). If T is normal, compute its eigenvalues; their convex hull is F(T). If T is not normal, continue.

III. Set θ_k = (k − 1)π/m, k = 1, ..., m, for some positive integer m ≥ 3.

IV. Starting with k = 1, up to k = m, take the following steps:

i. Compute eigenvectors u_k and v_k associated, respectively, with the largest and smallest eigenvalue of the Hermitian matrix H(e^{-iθ_k}T).

ii. Compute the compression T_{u_k,v_k} of T to span{u_k, v_k}.

iii. Compute the boundary Γ_k of F(T_{u_k,v_k}).

iv. If k < m, take the next k value and return to i.

V. Plot the convex hull of the collection of curves Γ_1, ..., Γ_m as an approximation to F(T).
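The eigenvector step IV.i can be sketched in pure Python (an illustration, not the paper's MATLAB code: shifted power iteration stands in for a proper Hermitian eigensolver, and the 3 x 3 Jordan block, whose field of values is known to be the disc of radius cos(π/4) about 0, serves as the test matrix):

```python
import math
import random

def matvec(A, x):
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

def herm_part(T, theta):
    # H(e^{-i theta} T) = (e^{-i theta} T + e^{i theta} T^*) / 2
    n = len(T)
    w = complex(math.cos(theta), -math.sin(theta))
    return [[(w * T[i][j] + w.conjugate() * T[j][i].conjugate()) / 2
             for j in range(n)] for i in range(n)]

def largest_eigvec(H, iters=3000, seed=0):
    # power iteration on H + sI with a Gershgorin shift s, which makes the
    # largest eigenvalue of the Hermitian matrix H the dominant one
    n = len(H)
    s = max(sum(abs(H[i][j]) for j in range(n)) for i in range(n)) + 1.0
    rng = random.Random(seed)
    x = [complex(rng.random(), rng.random()) for _ in range(n)]
    for _ in range(iters):
        y = [yi + s * xi for yi, xi in zip(matvec(H, x), x)]
        nrm = math.sqrt(sum(abs(t) ** 2 for t in y))
        x = [t / nrm for t in y]
    lam = sum(xi.conjugate() * yi for xi, yi in zip(x, matvec(H, x))).real
    return lam, x

J = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]     # 3 x 3 Jordan block, F(J) a disc
boundary = []
for k in range(8):
    theta = k * math.pi / 4
    lam, x = largest_eigvec(herm_part(J, theta))
    Jx = matvec(J, x)
    z = sum(xi.conjugate() * yi for xi, yi in zip(x, Jx))   # z_theta = x^* J x
    boundary.append(z)
```

Each computed boundary point has modulus cos(π/4), i.e., the points trace the circular boundary of F(J) as θ sweeps around (property 6 of Section 2).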

Algorithm A may not be efficient when T is unitarily similar to a direct sum of matrices, because in this case the fields of values of the compressed matrices may degenerate into line segments. In this event, the next modified algorithm, which essentially differs in the choice of generating vectors for boundary points, is more convenient.

Algorithm B

Steps I, II, and III are as in Algorithm A.

IV. Compute the eigenvectors u_1 and v_1 associated with the largest and smallest eigenvalue of H(e^{-iθ_1}T), respectively.

V. Starting with k = 2, up to k = m, take the following steps:

i. Compute the eigenvectors u_k and v_k associated with the largest and smallest eigenvalue of H(e^{-iθ_k}T), respectively.

ii. Compute T_{u_k,u_{k-1}} and T_{v_k,v_{k-1}}, the compressions of T to span{u_k, u_{k-1}} and to span{v_k, v_{k-1}}, respectively.

iii. Compute and draw Γ_k and Λ_k, the boundaries of F(T_{u_k,u_{k-1}}) and F(T_{v_k,v_{k-1}}), respectively.

iv. If k < m, take the next k value and return to i. Otherwise, continue.

VI. Take the following steps.

i. Compute T_{u_1,v_m} and T_{v_1,u_m}, the compressions of T to span{u_1, v_m} and to span{v_1, u_m}, respectively.

ii. Compute Γ_1 and Λ_1, the boundaries of F(T_{u_1,v_m}) and F(T_{v_1,u_m}), respectively.

VII. Take the convex hull of the collection of curves Γ_1, ..., Γ_m, Λ_1, ..., Λ_m as an approximation to F(T).

3.1. Accuracy and examples. Algorithm B provides a much better accuracy in the approximation of F(T) than Algorithm A. Both algorithms compare especially well with the "outside-in" approach, which merely provides a polygonal approximation of F(T) and hence requires a much finer mesh to reach a convenient accuracy. The effort necessary to plot F(T) to a desired accuracy with these algorithms depends essentially on the shape of F(T). For a fixed shape, the effort is of the same order as that of solving the Hermitian eigenvalue-eigenvector problems necessary to generate the boundary points, thus of the order of O(n^3).

The same arguments as in [8] suggest that the accuracy of Algorithm A increases with m as m^2. Since in Algorithm B the boundary of F is approximated by arcs of ellipses which are tangent to the boundary at two consecutively determined points, one expects the respective accuracy to increase with m as m^4. In Johnson's method the boundary is approximated piecewise by line segments, which depend linearly on some convenient coordinate, so the error involved is of the order of 1/m^2. In Algorithm B, the boundary is approximated piecewise by arcs which have well-defined slopes at their end points. These arcs must be at least cubic curves, so the error involved is of the order of 1/m^4. Since the boundary points produced by a given mesh distribute themselves on the boundary with a density which is inversely proportional to the local radius of curvature of the boundary, it follows that the local accuracy is also inversely proportional to the radius of curvature. This is consistent with the observation that, with respect to computational effort, the most convenient interpolation between two successively computed points on the boundary of F(T) is by the unique cubic which has the correct slopes at the interpolated points.
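The 1/m^2 rate claimed for the polygonal approximation can be illustrated on the unit disc (a toy computation, not from the paper): the circumscribed regular m-gon, the outside-in analogue, has area m tan(π/m), and its area error drops by a factor of about 4 when m doubles.

```python
import math

def outer_polygon_error(m):
    # area excess of the circumscribed regular m-gon over the unit disc
    return m * math.tan(math.pi / m) - math.pi

# successive error ratios approach 4, i.e., the error behaves like 1/m^2
ratios = [outer_polygon_error(m) / outer_polygon_error(2 * m)
          for m in (8, 16, 32, 64)]
```

The ratios cluster around 4, consistent with a second-order approximation; the tangent arcs of Algorithm B would show ratios near 16 instead.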

In a recent paper, Uhlig [12] addresses the accurate computation of the boundary of F (T) by a method which is similar to Algorithm B. His careful numerical analysis supports the expected fast convergence of this algorithm.

EXAMPLE 3.1. For the plots in Figure 3.1, the 500 x 500 complex matrix A in [3, p. 202] is considered. This matrix is constructed from the complex matrix B = F + iM, where F is the Fiedler matrix F = (|i − j|) and M is the Moler matrix M = U^T U with the upper triangular matrix U with u_{i,j} = −1 for j > i. By adding −3 + 5i times the 500 x 500 matrix of all ones, we obtain the matrix A (in MATLAB notation, A = B + (-3+5i)*ones(500)). The execution times of the different algorithms for this matrix are compared in Table 3.1. The subroutine fv.m of the MATLAB "Matrix Computation Toolbox" has been used to implement Johnson's algorithm.

We note that the accuracy of Algorithms A and B may be improved by choosing a finer mesh.

4. The iFOV problem: the state of the art. Uhlig [13] provided a complicated solution to the inverse field of values problem that initially generates points which surround the given point z by using the fact that points on the boundary of F(T) and their generating vectors can be computed by Johnson's method [8]. Uhlig's method proceeds with a randomized approach to surround z tighter and tighter by successive attempts and iteratively refines the generating vector approximation.

Subsequently, Carden [2] pointed out the connection between the iFOV problem and iterative eigensolvers and presented a simpler method of solution based on the following procedure:

1. The solution of the 2 x 2 case by reduction to a form that is useful for proving the FOV convexity.

2. Using property 6, the given point z is surrounded by points on the FOV boundary and associated generating vectors are determined.

3. From the triangle defined by three points on the FOV boundary that surround z and by convexity, generating vectors for the endpoints of a line segment containing z are determined. By compression to the subspace spanned by these vectors, iFOV is solved by reduction to the 2 x 2 case.

A more efficient method for the iFOV problem than either of those in [13, 2] was presented in [3] along the following lines:

1. A direct solution of the 2 x 2 case.

2. The generation of points on the FOV boundary according to property 6. The value z is surrounded by ellipses determined by compressions defined by the generating vectors of all possible pairs of vertices of the inner approximation. For each ellipse, its intersection with the real axis is determined. If in any ellipse there are points on both sides of z, then the corresponding generating vectors are used to reduce to the 2 x 2 case and to solve the iFOV problem.

3. Bisection on the angles is used to refine the inner approximation in the direction of z.

A relevant aspect of [2, 3] is the surrounding of the desired point by four points on the FOV boundary determined by considering the largest and the smallest eigenvalues of the Hermitian matrices H(e^{-iθ'}T) and H(e^{-iθ''}T) for θ'' − θ' = π/2. In our approach, boundary points determined in a similar way are considered; however, they need not surround the desired point. It is only required that these points belong to one of the elliptical discs. Moreover, we find it advantageous to relax the restriction θ'' − θ' = π/2.

4.1. Analytic solution of the iFOV problem in the 2 x 2 case. Following Carden [2], given T ∈ M_2 and z ∈ C, we exactly determine a unit vector w_z ∈ C^2 such that z = w_z^* T w_z. Without loss of generality, we may assume that the trace of T is zero. (If the trace were nonzero, we would subtract tr(T)/2 from T and from z.) Under this assumption, the eigenvalues sum to zero and may be denoted by λ and −λ. Let us assume that the eigenvalues of T are nonzero. Furthermore, we may assume that the eigenvalues of T are real. (To accomplish this, we multiply both T and z by e^{-iψ}, where λ = ae^{iψ}, a > 0.) A unitary matrix U can be found such that T is transformed into Schur form

T̂ = [ a    c ]
    [ 0   −a ],

where c is real. Since the field of values is invariant under unitary similarity transformations, the value z does not have to be altered. Having in mind that any 2 x 2 matrix can be shifted, scaled, and unitarily transformed into this form, we solve the inverse problem for T̂.

Let z = x + iy ∈ F(T̂). Without loss of generality, we may consider the unit vector w_z ∈ C^2 in the form w_z = (cos u, e^{iφ} sin u)^T. We find

w_z^* T̂ w_z = a cos 2u + (c/2) sin 2u cos φ + i (c/2) sin 2u sin φ,

which easily gives

(4.1) cos 2u = (4ax ± √(c^4 + 4a^2c^2 − 4c^2x^2 − (4c^2 + 16a^2)y^2)) / (4a^2 + c^2).

This relation determines u, and the relation

(4.2) sin φ = 2y / (c sin 2u)

determines φ. Hence, given z ∈ F(T̂), the values u and φ that specify the generating vector w_z must satisfy the above relations, and in terms of the matrix T, the generating vector is Uw_z.

If c^4 + 4a^2c^2 − 4c^2x^2 − (4c^2 + 16a^2)y^2 ≥ 0 (this inequality corresponds to x + iy lying inside the ellipse), it is always possible to determine w_z such that x + iy = w_z^* T w_z / (w_z^* w_z), and conversely. The sought-after solution is not necessarily unique [2] because for x + iy in the interior of F(T), there exist two linearly independent vectors determined by u and u − π/4, for u ≠ π/4. If u = π/4, then w_z generates a point on the boundary, and the generating vector is unique provided that the ellipse is nondegenerate. If c = 0, then F(T) is the line segment defined by the two eigenvalues, and the solution is unique only for the end points. If T = 0, then F(T) is a singleton, and any unit vector in C^2 is a generating vector.
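Formulas (4.1) and (4.2) translate directly into a small solver for the Schur form T̂ = [a, c; 0, −a] (a pure-Python sketch; choosing the '+' root in (4.1) and recovering cos φ from the real part of z are implementation choices made here, not prescribed by the text):

```python
import cmath
import math

def ifov_2x2(a, c, z, tol=1e-12):
    # generating vector w = (cos u, e^{i phi} sin u)^T with w^* That w = z,
    # for That = [[a, c], [0, -a]] with a > 0, c > 0 and z inside F(That)
    x, y = z.real, z.imag
    disc = c**4 + 4*a**2*c**2 - 4*c**2*x**2 - (4*c**2 + 16*a**2)*y**2
    if disc < -tol:
        raise ValueError("z lies outside the elliptical disc F(That)")
    cos2u = (4*a*x + math.sqrt(max(disc, 0.0))) / (4*a**2 + c**2)  # (4.1)
    cos2u = max(-1.0, min(1.0, cos2u))
    u = math.acos(cos2u) / 2          # chosen so that sin 2u >= 0
    s = math.sin(2 * u)
    if s < tol:                       # degenerate: z is a real-axis endpoint
        phi = 0.0
    else:
        # (4.2) gives sin phi; cos phi follows from the real part of z
        phi = math.atan2(2*y / (c*s), 2*(x - a*cos2u) / (c*s))
    return [math.cos(u), cmath.exp(1j * phi) * math.sin(u)]

a, c = 1.0, 2.0
z = 0.5 + 0.3j
w = ifov_2x2(a, c, z)
That = [[a, c], [0.0, -a]]
wTw = sum(w[i].conjugate() * sum(That[i][j] * w[j] for j in range(2))
          for i in range(2))          # should reproduce z
```

The returned vector is unit by construction, and the Rayleigh value w^* T̂ w reproduces z to rounding error.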

The above ideas are now used to develop an algorithm to solve the iFOV problem in the 2 x 2 case, which will be crucial for solving the general case. Let û, v̂ be two orthonormal vectors in span{u, v}. Given the two-dimensional compression T_{û,v̂} of T to span{u, v} (cf. (2.1)) and z ∈ F(T_{û,v̂}), take the following steps.

1. Determine the eigenvalues λ_1, λ_2 of T_{û,v̂}. Construct a unitary matrix U which takes T_{û,v̂} into Schur form

U^* T_{û,v̂} U = [ λ_1   c  ]
                [  0   λ_2 ].

2. Let z_0 = (λ_1 + λ_2)/2 and a = |λ_1 − λ_2|/2. Thus, with ψ such that λ_1 − z_0 = ae^{iψ},

T̂ = e^{-iψ} (U^* T_{û,v̂} U − z_0 I)

has real eigenvalues ±a, and ω_0 = e^{-iψ}(z − z_0) ∈ F(T̂).

3. A generating vector ζ^(0) = (ζ_1^(0), ζ_2^(0))^T of ω_0 is easily found using (4.1) and (4.2).

Hence, ζ = (ζ_1, ζ_2)^T = Uζ^(0) is a generating vector of z.

5. Algorithms for the inverse FOV problem. Let T be an n x n complex matrix, let z = x + iy ∈ C, and let ε > 0 denote some prescribed tolerance (e.g., ε = 10^{-16} ‖T‖ for a double precision computation). Our approach to the iFOV problem is based on the Marcus-Pesce Theorem, having in mind that we can exactly generate any point in the interior of the ellipses as well as on their boundaries as described in Section 4.1.

Algorithm A'

I. Discretization of the interval [0, π]. For some positive integer m ≥ 2, choose an m-mesh θ_k = π(k − 1)/m, for k = 1, ..., m.

II. For k [member of] {1, ..., m} starting with k = 1, take the following sub-steps.

i. Construct the matrix T_k = H(e^{-iθ_k}T) and compute its largest and smallest eigenvalues. Determine a pair of associated eigenvectors u_k and v_k. In each step this is a well-defined Hermitian eigenvalue-eigenvector problem.

ii. If |λ_min(T_k) − λ_max(T_k)| < ε, then F(T) is approximately a line segment, so that T is approximately either Hermitian or skew-Hermitian (or a complex shift of one of these). If the line segment degenerates to a point, then T is a scalar matrix. In either of these cases, it can be easily decided whether z belongs to F(T) and, if so, a generating vector may be determined. Otherwise, continue.

iii. Check whether the given point z = x + iy belongs to the intersection of the half-planes

x cos θ_k + y sin θ_k ≤ λ_max(T_k),

x cos θ_k + y sin θ_k ≥ λ_min(T_k).

If z does not belong to this intersection, then z ∉ F(T). Otherwise, continue.

iv. Take the compression T_{u_k,v_k} of the matrix T to span{u_k, v_k}. If z ∈ F(T_{u_k,v_k}), take this compression and continue to III. Otherwise, take the next value of k and go to i.

III. The generating vector of z ∈ F(T) is given by w_z = ζ_1 û + ζ_2 v̂, where û, v̂ are orthonormal vectors such that (2.1) holds for the selected compression, and ζ_1, ζ_2 are defined as in item 3 of Section 4.1.

Next we introduce a slight modification of Algorithm A', changing how the compressions are generated. This modification overcomes some deficiencies of Algorithm A' related to the degeneracy of the elliptical discs into line segments.

Algorithm B'

I. The same as step I of Algorithm A'.

II. The same as sub-steps i, ii, and iii of step II of Algorithm A' with k = 1.

III. For k ∈ {2, ..., m}, starting with k = 2, take the sub-steps i, ii, and iii of step II of Algorithm A' and, in addition:

iv. Take the compression T_{u_{k-1},u_k} of T to span{u_{k-1}, u_k}. If z ∈ F(T_{u_{k-1},u_k}), take this compression and continue to IV. Otherwise, take the compression T_{v_{k-1},v_k} of T to span{v_{k-1}, v_k}. If z ∈ F(T_{v_{k-1},v_k}), take this compression and continue to IV. Otherwise, if k < m, take the next value of k and go to i of this step.

IV. Compute T_{u_1,v_m} and T_{v_1,u_m}, the compressions of T to span{u_1, v_m} and to span{v_1, u_m}, respectively.

V. The same as step III of Algorithm A'.

Algorithms A' and B' may be refined by replacing the interval [0, π] by a smaller one around the direction normal to the boundary at the boundary point closest to the point under investigation, and by choosing a finer mesh in that interval.

6. Discussion and examples. We observe that Algorithms A' and B' are particular forms of a super-algorithm in which the compressions of T to the spaces spanned by eigenvectors associated with the largest and smallest eigenvalues of the Hermitian matrices H(e^{-iθ_k}T) and H(e^{-i(θ_k+α)}T) are considered for some convenient α. In Algorithm A' we have α = π and in Algorithm B' we have α = θ_2, as the largest eigenvalue of H(e^{-i(θ+π)}T) is the negative of the smallest eigenvalue of H(e^{-iθ}T) and θ_{k+1} = θ_k + θ_2. However, other values of α may be more appropriate according to the particular case under consideration.

Our algorithms are deterministic in the sense that they determine a solution provided the considered discretization of the interval [0, π] is fine enough. A lack of determinism occurs if the boundary of F(T) contains line segments (also called flat portions), which happens when the extreme eigenvalues of H(e^{-iθ_k}T) for a certain k have multiplicities greater than one. For a given point in F(T), the algorithms will always determine a generating vector, which may depend only on the chosen mesh.

EXAMPLE 6.1. Our first example features a 10 x 10 matrix generated by the MATLAB command T = randn(10) + (3+3i)*ones(10). The obtained results are summarized in Table 6.1 and in Figure 6.1, illustrating the advantage of Algorithm B', which requires a smaller value of m to solve the problem than Algorithm A' does, without an implementation of the refinement procedure.

EXAMPLE 6.2. The 45 x 45 complex matrix whose field of values is depicted in Figure 6.2 is defined as A = B + (-3+5i)*ones(45), where B = F + iM (cf. Example 3.1). Our results for the test point μ = −200 + 500i are summarized in Table 6.2 and compared with those for the algorithms of [2, 3]. Figure 6.2 illustrates the advantage of Algorithms A' and B', which solve the problem with a smaller number of eigenvectors and eigenvalues analysed than the approaches in [2, 3].

We have also replaced the interval [0, π] of step I in Algorithm B' by [α, α + π] for α = ±π/4, ±π/2, ±3π/4. We found that the performance of the algorithm is only slightly affected by this replacement, confirming that the good quality of the performance is not accidental.

EXAMPLE 6.3. In our last example, we consider the 188 x 188 Jordan block J for the eigenvalue 1 + 3i (with all superdiagonal entries equal to 1) and the test point μ = 1.707 + 3.707i, which lies inside F(J) within a distance of 10^{-5} of the boundary. As is well known, in this case F(J) is a circular disc centered at 1 + 3i. Since the fields of values of the two-dimensional compressions of this high-dimensional Jordan block are elliptical discs with a huge eccentricity, the solution of the iFOV problem requires a very careful consideration of the relevant range of θ_k in order to find an elliptical disc which contains the point μ.

We have considered the refinement of Algorithm A' with m = 2 and of Algorithm B' with m = 3 on the interval [arctan((3.707 − 3)/(1.707 − 1)), arctan((3.707 − 3)/(1.707 − 1)) + 0.05π]. The results are summarized in Table 6.3. The performance of Algorithms A' and B' compares very well with that of the approaches in [2, 3].

Our codes have been tested on a PC with an Intel Core 2 Duo T5550 (1.83 GHz) processor and 3 GB random-access memory (RAM).

Acknowledgments. Valuable suggestions of the editor Prof. Froilan M. Dopico and of the anonymous referees are gratefully acknowledged. In particular, we wish to thank Prof. Dopico for bringing [12] to our attention and one of the referees for providing a useful code.

REFERENCES

[1] N. BEBIANO AND I. SPITKOVSKY, Numerical ranges of Toeplitz operators with matrix symbols, Linear Algebra Appl., 436 (2012), pp. 1721-1726.

[2] R. CARDEN, A simple algorithm for the inverse field of values problem, Inverse Problems, 25 (2009), 115019, (9 pages).

[3] C. CHORIANOPOULOS, P. PSARRAKOS, AND F. UHLIG, A method for the inverse numerical range problem, Electron. J. Linear Algebra, 20 (2010), pp. 198-206.

[4] C. DAVIS, The Toeplitz-Hausdorff theorem explained, Canad. Math. Bull., 14 (1971), pp. 245-246.

[5] K. E. GUSTAFSON AND D. K. M. RAO, Numerical Range: The Field of Values of Linear Operators and Matrices, Springer, New York, 1997.

[6] F. HAUSDORFF, Der Wertvorrat einer Bilinearform, Math. Z., 3 (1919), pp. 314-316.

[7] R. A. HORN AND C. R. JOHNSON, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.

[8] C. R. JOHNSON, Numerical determination of the field of values of a general complex matrix, SIAM J. Numer. Anal., 15 (1978), pp. 595-602.

[9] M. MARCUS AND C. PESCE, Computer generated numerical ranges and some resulting theorems, Linear and Multilinear Algebra, 20 (1987), pp. 121-157.

[10] G. MEURANT, The computation of isotropic vectors, Numer. Algorithms, 60 (2012), pp. 193-204.

[11] O. TOEPLITZ, Das algebraische Analogon zu einem Satze von Fejér, Math. Z., 2 (1918), pp. 187-197.

[12] F. UHLIG, Faster and more accurate computation of the field of values boundary for n by n matrices, Linear and Multilinear Algebra, in press, doi:10.1080/03081087.2013.779269.

[13] --, An inverse field of values problem, Inverse Problems, 24 (2008), 055019, (19 pages).

[14] --, The field of values of a complex matrix, an explicit description of the 2 x 2 case, SIAM J. Algebraic Discrete Methods, 6 (1985), pp. 541-545.

NATALIA BEBIANO ([dagger]), JOAO DA PROVIDENCIA ([double dagger]), ANA NATA ([section]), AND JOAO P. DA PROVIDENCIA ([paragraph])

* Received May 17, 2013. Accepted November 28, 2013. Published online on February 28, 2014. Recommended by F. Dopico.

([dagger]) CMUC, University of Coimbra, Department of Mathematics, P 3001-454 Coimbra, Portugal (bebiano@mat.uc.pt).

([double dagger]) University of Coimbra, Department of Physics, P 3004-516 Coimbra, Portugal (providencia@teor.fis.uc.pt).

([section]) CMUC, Polytechnic Institute of Tomar, Department of Mathematics, P 2300-313 Tomar, Portugal (anata@ipt.pt).

([paragraph]) Departamento de Fisica, University of Beira Interior, P 6201-001 Covilha, Portugal (joaodaprovidencia@daad-alumni.de).

(1) The respective MATLAB programs are available at the website http://www.mat.uc.pt/~bebiano.

Table 3.1
Performance of Johnson's algorithm, Algorithm A, and Algorithm B for the 500 x 500 matrix of Example 3.1.

                m     area             acc. digits   seconds

Johnson's       7     8.6260 x 10^9    0             5.082322
algorithm       14    9.0890 x 10^9    1             7.229055
                28    9.2140 x 10^9    2             14.252341
                56    9.2439 x 10^9    2             24.169221
                112   9.2513 x 10^9    3             47.456900

Algorithm A     7     9.1064 x 10^9    1             3.363152
                14    9.2060 x 10^9    2             6.152282
                28    9.2383 x 10^9    2             11.847686
                56    9.2487 x 10^9    2             23.233351
                112   9.2514 x 10^9    3             46.187566

Algorithm B     7     9.2406 x 10^9    2             3.424524
                14    9.2509 x 10^9    3             6.330416
                28    9.2528 x 10^9    3             12.108766
                56    9.2533 x 10^9    4             23.853617
                112   9.2534 x 10^9    4             47.086406

Table 6.1
Performance of Algorithms A', B', and invnrp from [3] for Example 6.1.

algorithm         m    seconds    eigenanalyses   error

μ = 23.5 + i20
invnrp from [3]   --   0.027332   11              3.5527 x 10^-15
Algorithm A'      41   0.043476   33              7.9441 x 10^-15
Algorithm B'      2    0.036121   2               1.4648 x 10^-14

μ = 23.7 + i20
invnrp from [3]   --   0.026650   12              3.5527 x 10^-15
Algorithm A'      41   0.042243   33              1.0658 x 10^-14
Algorithm B'      5    0.050956   5               5.0243 x 10^-15

μ = 23.7289639701427 + i20
invnrp from [3]   --   0.026945   12              7.1054 x 10^-15
Algorithm A'      41   0.043584   33              7.1054 x 10^-15
Algorithm B'      5    0.050975   5               3.5527 x 10^-15

Table 6.2
Performance of Algorithms A', B', inversefov [2], and invnrp [3] for Example 6.2.

algorithm             m    seconds    eigenanalyses   error

inversefov from [2]   --   0.204922   3               2.2737 x 10^-13
invnrp from [3]       --   0.028962   3               1.6078 x 10^-13
Algorithm A'          2    0.028287   1               1.1369 x 10^-13
Algorithm B'          2    0.037529   2               3.4224 x 10^-13

Table 6.3
Performance of Algorithms A', B', inversefov [2], and invnrp [3] for Example 6.3.

algorithm             m    seconds    eigenanalyses   error

inversefov from [2]   --   0.478268   3               2.0948 x 10^-15
invnrp from [3]       --   0.261283   3               4.9651 x 10^-16
Algorithm A'          2    0.276974   1               1.6012 x 10^-15
Algorithm B'          3    0.395581   2               2.2204 x 10^-16
Publication: Electronic Transactions on Numerical Analysis, August 1, 2014.