
Quantum dynamical entropy and an algorithm by Gene Golub.

1. Introduction. Fundamental in the theory of classical non-linear dynamics is the so-called Kolmogorov-Sinai (K-S) entropy, a quantity that, when positive, implies chaos [4]. One can safely say that it is the single most important piece of information that one can obtain about a dynamical system, and surely one of the most employed words in all of dynamics. Various techniques for its computation exist, and its link to Lyapunov exponents via Pesin's formula [13] makes it an effective tool. In quantum mechanics, on the other hand, a plurality of quantities can rightly claim to correspond to K-S entropy, for they all tend to it in the classical limit. None of these, though, is simply computable, nor does a quantum analogue of Pesin's relation exist at the present moment. These difficulties stem ultimately from the basic fact that trajectories are not defined for quantum systems. In this paper, I study a version of "quantum entropy" due to Alicki and Fannes [1] that is based on the notion of coherent histories. It requires the computation of a large, full, complex Hermitian, non-negative matrix Ω and of the trace of the matrix function −Ω log(Ω). I develop computational schemes that render viable the numerical analysis of this quantity for systems of physical interest. In this endeavor, I rely on an algorithm developed by Bai, Fahey and Golub [5] to deal with large-scale matrix computation problems.

The plan of this paper is the following: in the next section I introduce the matrix Ω under investigation, with a few words on its quantum mechanical origin that also help in understanding the breadth and scope of its algorithmic requirements. Readers with deeper interests in dynamics will find references to the original literature, while numerical analysts desiring to grasp the essence of the computational problem may just focus on the linear algebra nature of the equations.

In section 3 I study Ω and its symmetries. Then, in section 4, I derive a recursion relation for computing Ω at increasing values of an integer "time" M. This relation was originally developed in [3]. A deeper analysis of its properties, performed in section 5, permits us to set up a parallel algorithm for the computation of Ω at different values of M. In section 6, this idea is implemented in two algorithms for the computation of the matrix-vector product ΩW. The first algorithm runs conveniently on parallel machines with distributed memory; the second minimizes the memory storage requirements, to achieve the largest possible matrix size given the finite memory space available on any single computer. These algorithms are instrumental in setting up a Lanczos-Monte Carlo technique, due to Golub, for the computation of the trace of −Ω log Ω, as discussed in section 7. Numerical results are presented in section 8, and an improvement of Golub's confidence interval estimation is presented in section 9. Finally, a Lanczos-Monte Carlo technique for the direct calculation of the Jacobi matrix associated with the spectral measure of Ω is introduced and tested in section 10. The need for a numerical treatment of A-F entropy arises particularly in the field of quantum chaos and decoherence: the Conclusions briefly mention this problem and summarize the work.

2. Quantum dynamical entropy. Quantum evolution takes place in the Hilbert space H associated with a physical system [14]. While in most circumstances this space is infinite dimensional, we shall assume a finite dimensional reduction, of dimension N, without dealing here with the reduction problem. Therefore, we shall let {e_n}, n = 0, …, N−1, be the canonical basis of H = C^N and (·, ·)_H be the usual scalar product in this space. Quantum evolution in H is effected stroboscopically by a unitary operator U: the "wave-vector" ψ ∈ H specifying the state of the system at time t is mapped into Uψ, the state at time t + 1.

Although nothing needs to be specified here about the nature of U, other than that it can be computed numerically, and although we shall present quite general algorithms, for the sake of illustration we shall use in this paper two paradigmatic examples. The first is the operator U with matrix elements

(2.1) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

that corresponds to free classical motion on the one-dimensional torus S¹ = R/Z, a stable, completely integrable system.

The second example is the so-called quantum cat unitary evolution operator U^cat = KU, where U has been defined in eq. (2.1) and K is the operator with matrix elements

(2.2) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

where δ_{k,l} is the Kronecker delta. The operator U^cat is the quantum mechanical version of the renowned Arnol'd cat map of classical mechanics [4], a chaotic system with positive K-S entropy. For a derivation of this operator, its physical relevance and, most importantly, its relations with the FFT, see [7].

Clearly, given an initial state ψ, quantum evolution yields the components ψ_n(j) := (e_n, U^j ψ)_H at any future (or past) time j ∈ Z. According to standard usage, the probability that the quantum system is found in state n at time j is given by the square modulus of ψ_n(j). As in classical dynamics, a "coarse-grained" measurement can be effected, when the state vector ψ is not analyzed in all its components, but only in groups of them. Formally, if {P_k}, k = 0, …, L, is a family of projection operators, so that I = Σ_k P_k, we can also "measure" the projection of ψ on the range of P_k, that is, compute the scalar product (ψ, P_k ψ)_H.

To make things easier, without renouncing anything essential, in this paper we shall consider two orthogonal projections P_0 and P_1, each on one half of the Hilbert space, as in a heads/tails experiment: take N = 2p and let

(2.3) P_0 := Σ_{n=0}^{p−1} e_n e_n^T,  P_1 := Σ_{n=p}^{N−1} e_n e_n^T.

Given these premises, a "quantum history" of a vector ψ is the result of effecting the unitary quantum evolution U preceded at each time by a projection on either the heads or the tails half of the Hilbert space: readers familiar with the double-slit experiment might think of the motion of a particle hitting at integer times a sequence of screens with two slits. According to common usage in symbolic dynamics, Greek letters denote sequences of symbols, like σ = (σ_0, σ_1, …, σ_{M−1}). These are vectors of length M, also called "words" in symbolic dynamics. Greek letters with subscripts, like σ_i, are therefore the components of these vectors, i.e. symbols in the alphabet (in our choice, either zero or one). With this notation, the quantum history of the vector ψ is

(2.4) ψ_σ := U P_{σ_{M−1}} U P_{σ_{M−2}} ⋯ U P_{σ_1} U P_{σ_0} ψ.

For convenience of notation we shall put

(2.5) V^σ := U P_{σ_{M−1}} U P_{σ_{M−2}} ⋯ U P_{σ_0},  so that ψ_σ = V^σ ψ.

The "amplitude" ([[psi].sub.[sigma]], [[psi].sub.[sigma]]) should be compared with the measure of the classical phase space with symbolic dynamics given by the "word" [sigma]. In both classical and quantum dynamics these probabilities add up to one: [[SIGMA].sub.[sigma]]([[psi].sub.[sigma]], [[psi].sub.[sigma]]) = 1. In quantum mechanics, though, interference reigns and the products ([[psi].sub.[sigma]], [[psi].sub.[sigma]]) are non-null also when [sigma] [not equal to] [sigma]'.

Complexity of the motion is quantified in classical dynamics by certain summations over the non-null amplitudes of sequences σ, of course averaged with respect to the initial conditions. In the Alicki-Fannes (A-F) quantum formulation [3], entropy is derived from the spectrum of the decoherence matrix D with entries D_{σ,σ'}, defined by

(2.6) D_{σ,σ'} := (1/N) Tr((V^σ)† V^{σ'}),

where the dagger indicates the adjoint and clearly U† = U^{−1}, P_k† = P_k. Observe that D is a 2^M × 2^M square matrix, Hermitian, of unit trace and non-negative, so that the quantum A-F entropy associated with the unitary evolution U and the family of projections {P_k} can be defined as

(2.7) S(U, {P_k}) = Tr(−D log D).

Entropy is therefore the trace of a function of a matrix whose entries are themselves obtained by traces of products of operators over H. In addition, notice that in dynamics one is interested in the large-time behavior (here, large M): it is then clear that computation of S via eqs. (2.6), (2.7) is a formidable task, of exponentially increasing computational complexity. Yet, the structure revealed by eq. (2.6) permits a reduction of size independent of M.
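For concreteness, the objects introduced so far can be assembled in a minimal brute-force sketch (in Python/NumPy), viable only for very small N and M, since the number of words grows like 2^M; the Haar-random unitary used below is purely illustrative, and is not one of the two examples of this paper.

```python
import itertools
import numpy as np

def random_unitary(N, rng):
    # QR of a complex Gaussian matrix yields a Haar-distributed unitary.
    Q, R = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
    d = np.diagonal(R)
    return Q * (d / np.abs(d))          # fix the column phases

def V_sigma(U, P, sigma):
    # V^sigma = U P_{sigma_{M-1}} ... U P_{sigma_1} U P_{sigma_0}, eq. (2.5)
    V = np.eye(U.shape[0], dtype=complex)
    for s in sigma:                     # sigma_0 acts first (rightmost factor)
        V = U @ P[s] @ V
    return V

N, M = 8, 4                             # tiny sizes; cost grows like 4^M
p = N // 2
rng = np.random.default_rng(0)
U = random_unitary(N, rng)
P = [np.diag([1.0] * p + [0.0] * p),    # P_0 and P_1 of eq. (2.3)
     np.diag([0.0] * p + [1.0] * p)]

words = list(itertools.product((0, 1), repeat=M))
V = {w: V_sigma(U, P, w) for w in words}

# Decoherence matrix (2.6): D_{sigma,sigma'} = Tr((V^sigma)† V^sigma') / N
D = np.array([[np.trace(V[w].conj().T @ V[v]) for v in words] for w in words]) / N
assert abs(np.trace(D) - 1.0) < 1e-12   # D has unit trace

# Entropy (2.7): S = Tr(-D log D), summing -x log x over positive eigenvalues
lam = np.linalg.eigvalsh(D)
S = -sum(x * np.log(x) for x in lam if x > 1e-14)
print("A-F entropy at M =", M, ":", S)
```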

In fact, observe that the right hand side of (2.6) can be seen as a scalar product, in the space of square matrices of size N, between the vectors N^{−1/2} V^σ and N^{−1/2} V^{σ'}. Then, the non-null eigenvalues in the spectrum of D coincide, with their multiplicities, with those of the operator Ω acting in K as:

(2.8) Ω W := (1/N) Σ_σ (V^σ, W)_K V^σ,

where the scalar product is taken in the new space of matrices, K. Full details will be provided in the next section. Observe here that K has dimensionality N²: therefore, Ω has maximal rank smaller than that of D whenever M ≥ 2 log₂ N, a condition that is easily realized. On the other hand, notice that a major physical problem requires the analysis of the "classical limit" of quantum mechanics, which in turn requires N to be large as well [7]. We are really facing a challenging problem.
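The size reduction can be verified directly on the sketch above: in vectorized form, Ω is the N² × N² matrix (1/N) Σ_σ vec(V^σ) vec(V^σ)†, and its non-null spectrum coincides with that of D. The following lines continue the previous sketch:

```python
# Omega acts on K via (2.8); vectorizing K (row-major), it becomes the
# N^2 x N^2 Gram-dual matrix sum_sigma vec(V^sigma) vec(V^sigma)† / N.
vecs = np.array([V[w].reshape(-1) for w in words])   # each row is vec(V^sigma)
Omega = vecs.T @ vecs.conj() / N                     # Hermitian, size N^2

# Non-null eigenvalues of D and Omega coincide, multiplicities included
# (crude tolerances: this is only a numerical illustration):
eD = np.sort(np.linalg.eigvalsh(D))
eO = np.sort(np.linalg.eigvalsh(Omega))
print(np.allclose(eD[eD > 1e-10], eO[eO > 1e-10]))   # True
```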

In this paper we study computational techniques to evaluate the A-F entropy

(2.9) S(U, {P_k}) = Tr(−Ω log Ω).

In the next section we define precisely the metric in K, and we study Ω and its symmetries.

3. The matrix Ω. In this section, we explore the symmetries of the matrix Ω. Recall first that Ω acts in K, the space of complex square matrices of size N. K is endowed with the scalar product

(3.1) (V, W)_K := Tr(V† W).

Ω acts in K as

(3.2) Ω W = (1/N) Σ_σ (V^σ, W)_K V^σ,

where the summation ranges over all binary sequences σ of length M, with σ_j ∈ {0, 1} for j = 0, …, M − 1. In this equation we have set

(3.3) V^σ := U P_{σ_{M−1}} U P_{σ_{M−2}} ⋯ U P_{σ_0};

U is any unitary operator over C^N (the dynamics), and the projections P_i, i = 0, 1, have been defined in eq. (2.3).

An orthonormal basis for K can be constructed as

(3.4) E_{i,j} := e_i e_j^T, i, j = 0, …, N − 1.

Let us compute the matrix elements of Ω in this basis:

(3.5) Ω_{(j_1,j_2),(k_1,k_2)} := (E_{j_1,j_2}, Ω E_{k_1,k_2})_K = (1/N) Σ_σ (E_{j_1,j_2}, V^σ)_K (V^σ, E_{k_1,k_2})_K.

Traces are then explicitly written as:

(3.6) (E_{j_1,j_2}, V^σ)_K = Tr(e_{j_2} e_{j_1}^T V^σ) = (e_{j_1}, V^σ e_{j_2})_H,  (V^σ, E_{k_1,k_2})_K = (e_{k_1}, V^σ e_{k_2})*_H,

where scalar products in H appear, and where the asterisk denotes complex conjugation. Therefore, the matrix elements of V^σ are featured in the final formula for Ω, which reads:

(3.7) Ω_{(j_1,j_2),(k_1,k_2)} = (1/N) Σ_σ (e_{j_1}, V^σ e_{j_2})_H (e_{k_1}, V^σ e_{k_2})*_H.

Take now into account the product form of the operator V^σ, eq. (3.3), and notice that V^σ e_{j_2} is null unless e_{j_2} belongs to the range of P_{σ_0}, that is, unless j_2 ∈ I_{σ_0}, the set of indices corresponding to the σ_0-th half of the Hilbert space:

(3.8) I_{σ_0} := {σ_0 p, σ_0 p + 1, …, σ_0 p + p − 1}.

For the same reason, k_2 must belong to the same set, and therefore the matrix Ω is the direct sum of two square blocks, each of size N²/2, half of the original. We can therefore consider only one of these at a time when computing the A-F entropy, eq. (2.9), which is additive over the two subspaces. To fix the notation, in the following we shall implicitly let σ_0 = 0 and j_2, k_2 ∈ I_0, the other case being trivially obtained. Also, with an abuse of notation, but without danger of confusion, from now on K will denote the space spanned by E_{i,j}, i = 0, …, N − 1, j = 0, …, p − 1 (recall that N = 2p).

Finally, inspection of eq. (3.7) also reveals the symmetry

(3.9) Ω_{(j_1,j_2),(k_1,k_2)} = Ω*_{(k_1,k_2),(j_1,j_2)},

so that Ω is a Hermitian operator in K.
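Both properties can be checked numerically on the vectorized Ω of the sketch in section 2, whose row-major vec index of E_{j_1,j_2} is j_1 N + j_2 (a continuation of that sketch):

```python
# Hermiticity (3.9), and the direct-sum structure following from (3.8):
# entries mixing j2 < p with k2 >= p vanish identically.
print(np.allclose(Omega, Omega.conj().T))           # True
j2 = np.arange(N * N) % N                           # vec index j1*N + j2 -> j2
cross = Omega[np.ix_(j2 < p, j2 >= p)]
print(np.allclose(cross, 0.0))                      # True
```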

4. Time dependence of Ω. We have noticed at the end of section 2 that the dimension of K does not depend on M, the "time", that is, the length of the symbolic word σ. Yet Ω obviously does, so that from now on we will indicate this dependence explicitly as a superscript: Ω^M will be the Ω matrix at "time" M. Now, the summation over all words of length M in eq. (3.7) might lead us to believe that we are still facing an exponentially growing computational cost. For these reasons, it is important to examine in detail the "time" dependence of the problem.

To keep the notation light, since the scalar products of this section will all be taken in the space H, we are allowed to drop the relative subscript. Let us start from the computation of V^σ_{j_1 j_2} := (e_{j_1}, V^σ e_{j_2}). The vector/word σ can be written as (σ', σ_{M−1}), where σ' is now a vector of length M − 1. Accordingly, V^σ = U P_{σ_{M−1}} V^{σ'}. Inserting an identity into the definition of V^σ_{j_1 j_2} we get

(4.1) V^σ_{j_1 j_2} = Σ_{j_3} (e_{j_1}, U P_{σ_{M−1}} e_{j_3}) (e_{j_3}, V^{σ'} e_{j_2}).

A quite similar equation holds for (e_{k_1}, V^σ e_{k_2}). Using these facts in eq. (3.7) we get

(4.2) Ω^M_{(j_1,j_2),(k_1,k_2)} = Σ_{σ_{M−1}} Σ_{j_3,k_3} (e_{j_1}, U P_{σ_{M−1}} e_{j_3}) (e_{k_1}, U P_{σ_{M−1}} e_{k_3})* Ω^{M−1}_{(j_3,j_2),(k_3,k_2)}.

Summations in this equation range over 0, …, N − 1 for the indices j_3 and k_3, and over 0 and 1 for the symbol σ_{M−1}. In intuitive terms, one can explain this formula in the words of a knowledgeable reviewer: the left-hand side is the overlap between two "world-lines" (or trajectories) of time-extent M, whose end-points are (j_1, j_2) and (k_1, k_2). It can be expressed as a sum over all possible ancestors at time (M − 1), each with their respective overlap and time-evolution.

Equation (4.2) is the basis of a recursive technique initialized by letting M = 1 in eq. (3.7),

(4.3) Ω¹_{(j_1,j_2),(k_1,k_2)} = (1/N) (e_{j_1}, U P_{σ_0} e_{j_2}) (e_{k_1}, U P_{σ_0} e_{k_2})* = (1/N) U_{j_1,j_2} U*_{k_1,k_2},  j_2, k_2 ∈ I_{σ_0}.

When implemented directly [3], this technique requires a storage of the order of N⁴/4 complex quantities, while the computation of the A-F entropy

(4.4) S(U, {P_k}, M) = Tr(−Ω^M log Ω^M)

calls for the diagonalization of a square matrix of size N²/2. Needless to say, it soon becomes impossible to meet these requirements as N grows. It is the purpose of the present paper to show how they can be significantly eased. In the next section we analyze the nature of the recurrence, in order to tackle the memory requirement first. We then face the entropy computation problem.
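As an illustration, the recursion (4.3), (4.2) lends itself to a compact dense implementation for the σ_0 = 0 block, again viable only for tiny N; the four-index array stored here is exactly what the sector organization of the next section avoids keeping in memory all at once. The code continues the sketches of the previous sections:

```python
import numpy as np

def omega_block(U, M):
    """sigma_0 = 0 block of Omega^M as a 4-index array Om[j1, j2, k1, k2],
    with j2, k2 in I_0 = {0, ..., p-1}: a dense rendering of (4.3), (4.2)."""
    N = U.shape[0]; p = N // 2
    A, B = U[:, :p], U[:, p:]       # (e_j1, U P_0 e_j3) and (e_j1, U P_1 e_j3)
    # (4.3): Omega^1 = U_{j1,j2} U*_{k1,k2} / N, with j2, k2 in I_0
    Om = np.einsum('ab,cd->abcd', A, A.conj()) / N
    for _ in range(M - 1):
        # (4.2): sum over the last symbol sigma_{M-1} and the ancestors j3, k3
        Om = (np.einsum('aj,jbkd,ck->abcd', A, Om[:p, :, :p, :], A.conj())
            + np.einsum('aj,jbkd,ck->abcd', B, Om[p:, :, p:, :], B.conj()))
    return Om

# Check against the brute-force Omega of section 2 (sigma_0 = 0 block only;
# the vec index of E_{j1,j2} is j1*N + j2, so the block has j2, k2 < p):
idx = np.array([j1 * N + j2 for j1 in range(N) for j2 in range(p)])
print(np.allclose(omega_block(U, M).reshape(N * p, N * p),
                  Omega[np.ix_(idx, idx)]))          # True
```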

5. Partitioning the Ω^M matrix. It is now fundamental to examine in closer detail the nature of the recurrence in eq. (4.2). First of all, although the notation is somewhat evocative, it does not mean that Ω^M is the M-th power of Ω¹, seen as an operator over K. Nonetheless, eq. (4.2) implies a remarkable property. In fact, observe that the indices j_2 and k_2 appear unchanged on the l.h.s. and the r.h.s. Therefore, we define Ω^M(j_2, k_2) as the square matrix of size N, with indices j_1, k_1 and entries Ω^M_{(j_1,j_2),(k_1,k_2)}. What we have just observed can be formalized by saying that:

LEMMA 5.1 (Sectorization). The sector matrix Ω^M(j_2, k_2) can be computed recursively according to eqs. (4.2), (4.3).

This lemma is of paramount importance in the numerical implementation:

LEMMA 5.2 (Storage and Computational Complexity).

i) The sector matrix Ω^M(j_2, k_2) can be computed with 6M N⁴ floating point operations (f.p.o.'s). Its computation requires a storage of 3N² complex entries (c.e.'s).

ii) The full matrix Ω^M can be computed with 3M(N⁶/4 + N⁵/2) f.p.o.'s and stored in an array of N⁴/8 + N³/4 c.e.'s. It can be computed sector by sector with a storage of 3N² c.e.'s.

Proof. The matrix iteration (4.2) requires 6N² f.p.o.'s for each entry of Ω^M(j_2, k_2), so that the full (j_2, k_2) sector can be computed with 6N⁴ f.p.o.'s. Although there are p² = N²/4 different sectors in Ω^M, the symmetry property, eq. (3.9), implies that only p(p + 1)/2 = N²/8 + N/4 of them are fully independent; they can be obtained by choosing j_2 ≥ k_2. (Additional redundancy exists in the diagonal sectors, that is, those with j_2 = k_2, but it will not be exploited, because it only provides sub-leading improvements in both computation time and storage size.)

The two matrices V^σ, for σ_0 = 0 and σ_0 = 1, can be jointly stored in a single square N by N matrix with complex entries: in fact, we have seen that V^σ e_{j_2} is null unless j_2 ∈ I_{σ_0}, so that their non-null columns are complementary. In addition, the matrix iteration (4.2) requires only two memory arrays of size N².

REMARK 5.3. Notice that the previous lemmas unveil the possibility of distributing the computation of Ω^M over parallel machines with distributed memory.

6. Computing the matrix-vector product Ω^M W. Of particular relevance for the Lanczos technique that we shall outline in section 7 is the computation of the matrix-vector product Ω^M W, where W is a vector in K. When not necessary, in this section we shall drop the index M. The heart of our technique is an application of Lemma 5.1.

Algorithm 1: Computation of the (j_2, k_2) sector of the product W̄ = ΩW.

* for j_1 = 0, …, N − 1

- compute the sector product s_{j_1} := Σ_{k_1} Ω^M(j_2, k_2)_{j_1,k_1} W_{k_1,k_2}

- accumulate into the result vector: W̄_{j_1,j_2} ← W̄_{j_1,j_2} + s_{j_1}

* end

Algorithm 1 requires a storage of N² c.e.'s for W and W̄, and N² c.e.'s for the sector of the matrix Ω^M. Its computational complexity is 2N² f.p.o.'s.

Algorithm 2: Computation of the matrix-vector product W̄ = ΩW.

* for j_2 = 0, …, p − 1 and k_2 = 0, …, p − 1, j_2 ≥ k_2

- execute Algorithm 1 and the conjugate Algorithm 1' stemming from the symmetry property (3.9)

- accumulate the results into the vector W̄_{j_1,j_2}

* end

It is readily seen that if we want to use Algorithm 2 (a serial application of Algorithm 1) for the computation of the full matrix-vector product ΩW, the computational complexity is of N⁴/2 + N³ f.p.o.'s. In addition, we need to store the N²/8 + N/4 independent sectors of Ω, which requires a full storage of N⁴/8 + N³/4 c.e.'s: this is a serious memory limitation that may significantly restrict the Hilbert space dimension N that can be numerically simulated. We have devised two procedures to overcome this limitation.
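In compact form, Algorithms 1, 1' and 2 amount to the following sketch (Python/NumPy as before), where vectors of K are stored as N × p arrays, with columns indexed by j_2, and the independent sectors are kept in a dictionary; the loop of Algorithm 1 is written as a single dense matrix-vector product:

```python
import numpy as np

def omega_matvec(sectors, W):
    """Algorithm 2: Wbar = Omega @ W from the stored independent sectors.
    sectors[(j2, k2)], j2 >= k2, holds the N x N matrix Omega^M(j2, k2);
    W is an N x p array, the K-vector with entries W_{j1, j2}."""
    Wbar = np.zeros_like(W)
    for (j2, k2), S in sectors.items():
        Wbar[:, j2] += S @ W[:, k2]               # Algorithm 1
        if j2 != k2:                              # Algorithm 1': by (3.9) the
            Wbar[:, k2] += S.conj().T @ W[:, j2]  # (k2, j2) sector equals S†
    return Wbar

# e.g., from the dense block of the previous sketch:
#   Om = omega_block(U, M)
#   sectors = {(j2, k2): Om[:, j2, :, k2]
#              for j2 in range(p) for k2 in range(j2 + 1)}
```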

Algorithm 3: Computation of the matrix-vector product W̄ = ΩW on parallel computers with distributed memory and P processors π_1, …, π_P.

1. order the set of labels (j_2, k_2) with j_2 ≥ k_2, and distribute them among the P processors as equally as possible

2. transmit the input vector W to all processors

3. in each processor π_l

* for each pair (j_2, k_2) assigned to processor π_l

- execute Algorithm 1 and the conjugate Algorithm 1' stemming from the symmetry property (3.9)

- accumulate the results into the vector W̄^l_{j_1,j_2}

* end

4. accumulate the vectors W̄^l_{j_1,j_2} produced in all processors π_l, l = 1, …, P, into the result vector W̄_{j_1,j_2}.

The memory requirement of Algorithm 3 is N⁴/(8P) + N³/(4P) + bN² c.e.'s on each processor (b being a small integer counting work arrays), and the computational complexity is N⁴/2 + N³ f.p.o.'s, which can be executed in a real time proportional to (N⁴/2 + N³)/P seconds. We assume that the sectors of Ω^M have been previously computed and stored in each processor. Notice that communication among processors, usually a time-demanding operation, is limited to steps 2 and 4 and consists of the total transmission of N² c.e.'s. This is also the only significant transmission when computing the matrix Ω^M itself: Lemma 5.1 is actually the statement of a parallelization property.
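A schematic rendition of Algorithm 3 with mpi4py (an illustrative choice: only the MPI protocol itself is assumed by the experiments below), in which each rank already owns the sectors assigned to it in step 1:

```python
from mpi4py import MPI
import numpy as np

def omega_matvec_mpi(my_sectors, W):
    """Algorithm 3: distributed Wbar = Omega @ W. my_sectors holds only the
    (j2, k2) pairs assigned to this rank (step 1, performed once at start-up)."""
    comm = MPI.COMM_WORLD
    W = comm.bcast(W, root=0)                  # step 2: send W to all processors
    part = np.zeros_like(W)
    for (j2, k2), S in my_sectors.items():     # step 3: local sectors only
        part[:, j2] += S @ W[:, k2]
        if j2 != k2:
            part[:, k2] += S.conj().T @ W[:, j2]
    Wbar = np.empty_like(part)
    comm.Allreduce(part, Wbar, op=MPI.SUM)     # step 4: sum the partial results
    return Wbar
```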

On the other hand, with or without parallel computers, one can drastically diminish the memory requirement, at the expense of increasing computation time:

Algorithm 4: Computation of the matrix-vector product W̄ = Ω^M W with minimal storage requirements.

* for each label (j_2, k_2) with j_2 ≥ k_2

- compute the sector matrix Ω^M(j_2, k_2)

- execute Algorithm 1 and the conjugate Algorithm 1' stemming from the symmetry property (3.9)

- accumulate the results into the vector W̄_{j_1,j_2}

* end

Algorithm 4 attains the minimal memory requirement of 6N² c.e.'s. Computation time is augmented by the need of recomputing the matrix Ω^M, which makes the total computational complexity grow like (3/4)MN⁶ + (3/2)MN⁵. This may become significant in the Lanczos algorithm that we shall describe in the next section.
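A sketch of Algorithm 4 follows, with the sector recursion of Lemma 5.1 written at a fixed label (j_2, k_2), so that only a few arrays of size N² are ever held in memory:

```python
import numpy as np

def sector(U, M, j2, k2):
    """Omega^M(j2, k2) alone, via the recursion of Lemma 5.1 (sigma_0 = 0)."""
    N = U.shape[0]; p = N // 2
    S = np.outer(U[:, j2], U[:, k2].conj()) / N          # (4.3) at fixed (j2, k2)
    for _ in range(M - 1):                               # (4.2): j2, k2 are inert
        S = (U[:, :p] @ S[:p, :p] @ U[:, :p].conj().T    # sigma_{M-1} = 0 term
           + U[:, p:] @ S[p:, p:] @ U[:, p:].conj().T)   # sigma_{M-1} = 1 term
    return S

def omega_matvec_minmem(U, M, W):
    """Algorithm 4: Wbar = Omega^M @ W, recomputing each sector on the fly."""
    N, p = W.shape
    Wbar = np.zeros_like(W)
    for j2 in range(p):
        for k2 in range(j2 + 1):
            S = sector(U, M, j2, k2)
            Wbar[:, j2] += S @ W[:, k2]                  # Algorithm 1
            if j2 != k2:
                Wbar[:, k2] += S.conj().T @ W[:, j2]     # Algorithm 1'
    return Wbar
```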

7. Computation of the entropy: The algorithm of Golub et al. Computation of the spectrum of Ω by full-matrix techniques is not viable as N grows. Yet, we are interested not so much in the spectrum per se as in the entropy function (4.4). Therefore, in view of the results of sections 5 and 6, the Lanczos technique developed by Golub et al. [5] becomes an interesting possibility. In this section, we sketch the details that permit the application of Golub's algorithms 1 and 2 of reference [5] without major modifications. We shall refer to them as G1 and G2.

In essence, G1 is based on the construction of the tridiagonal representation of Ω in the Krylov basis [9] associated with a randomly chosen initial vector W_i ∈ K. Since Ω is complex, differing from Golub's case, we choose the entries of W_i in the set of complex numbers {±1, ±i} with equal probability [6]. W_i is then conveniently normalized. Ω is Hermitian, and its tridiagonal representation is real.

This tridiagonal matrix is then easily diagonalized, and the entropy

S_i := (W_i, −Ω log(Ω) W_i)

is estimated via Gaussian summation. In addition, since the spectrum of Ω is contained in the interval [0, 1], Gauss-Radau formulae can also be employed. Since f(x) = −x log(x) is the integrand for the computation of the entropy, it is readily found that the derivatives of f satisfy the relations f^(2n)(x) < 0 and f^(2n+1)(x) > 0 for all x > 0 and n ≥ 1, so that Gaussian summation and Gauss-Radau with a prescribed eigenvalue at x = 1 both provide upper bounds to the quantity (W_i, f(Ω) W_i)_K, while Gauss-Radau with a prescribed eigenvalue at zero yields a lower bound. In the following, we shall indicate with S^l_i the lower bound obtained with the Gauss-Radau formula with prescribed eigenvalue at zero, and with S^u_i the upper bound obtained by the usual Gauss formula.
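The kernel of G1 is thus a plain Lanczos recursion followed by Gaussian summation on the small tridiagonal matrix; a minimal sketch, without the Gauss-Radau variants and without reorthogonalization, could read:

```python
import numpy as np

def lanczos_entropy(matvec, dim, k, rng):
    """Essence of G1: k Lanczos steps from a random {+-1, +-i} vector [6], then
    Gaussian summation of f(x) = -x log x on the Ritz values (spectrum in [0,1])."""
    W = rng.choice(np.array([1, -1, 1j, -1j]), size=dim)
    q = W / np.linalg.norm(W)                     # normalized starting vector
    alpha, beta, q_prev = [], [], np.zeros(dim, dtype=complex)
    for _ in range(k):
        r = matvec(q)
        a = np.vdot(q, r).real                    # real, since Omega is Hermitian
        alpha.append(a)
        r = r - a * q - (beta[-1] if beta else 0.0) * q_prev
        b = np.linalg.norm(r)
        if b < 1e-12:                             # invariant subspace found
            break
        beta.append(b)
        q_prev, q = q, r / b
    T = (np.diag(alpha) + np.diag(beta[:len(alpha) - 1], 1)
                        + np.diag(beta[:len(alpha) - 1], -1))
    theta, Y = np.linalg.eigh(T)                  # Gauss nodes and weights
    x = np.clip(theta, 1e-16, 1.0)                # guard against round-off
    return float(np.sum(Y[0, :] ** 2 * (-x * np.log(x))))

# e.g., with the minimal-storage product of section 6 (hypothetical sizes):
#   S_i = lanczos_entropy(
#       lambda w: omega_matvec_minmem(U, M, w.reshape(N, -1)).ravel(),
#       N * (N // 2), 6, np.random.default_rng(1))
```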

The Monte Carlo algorithm G2 consists in taking a statistical average, over a large number of realizations of the random vector W_i, i = 1, …, I, of the values provided by the algorithm G1. The predicted value for S is then the mean of the averaged upper and lower bounds, and a confidence interval is derived on the basis of Hoeffding's inequality. We shall come back to this statistical estimate later.

We ran algorithm G2, endowed with Algorithm 3 of the previous section for matrix-vector multiplication, on a cluster of parallel computers with the MPI communication protocol. The dimension of the Hilbert space was N = 2⁷ (corresponding to Ω^M of size 2¹³), M ranged from 1 to 30, the dimension of the Jacobi matrix was six (seven for Gauss-Radau), and the number of trial vectors, I, was 1000. In this paper, we show data obtained with 14 processors, also to underline the fact that our algorithm is not bound to work with a number of processors equal to a power of two. Figure 7.1 displays the time T_Ω (in real seconds) spent by each processor in the matrix computation part, eq. (4.2), in each iteration from M to M + 1, and the time T_L required by the Lanczos algorithm, again at each iteration.
[FIGURE 7.1 OMITTED]

8. The algorithm of Golub et al.: Results. We can now start by showing results obtained for the quantum cat evolution U^cat. Figure 8.1 displays a sample of I = 1000 realizations of the algorithm G1 with M = 11, N = 2⁷, and six Gaussian points. Upper values S^u_i and lower values S^l_i are almost coincident for the same sample item (the largest difference δ := max{S^u_i − S^l_i, i = 1, …, I} is about 3.7 · 10⁻³), while different samples feature a much larger range, of amplitude 0.59917. In keeping with Golub's notation, this value is U_max − L_min, where U_max := max{S^u_i, i = 1, …, I} and L_min := min{S^l_i, i = 1, …, I}.
[FIGURE 8.1 OMITTED]

In Table 8.1 we report these data for M ranging from 1 to 14. According to algorithm G2, it is then possible to extract from this table statistical estimates of S(U, {P_k}, M). Before doing that, though, we observe that δ is always several orders of magnitude smaller than U_max − L_min. Moreover, we want to further analyse the sample data S^u_i or, for that matter, S^l_i.
TABLE 8.1
Statistical data extracted from I = 1000 samples of the quantum cat evolution with N = 2⁷ and six Gaussian points. Symbols are defined in the text.

 M      δ             L_min         U_max       U_max − L_min
 1  0.365320e-03  0.128128e-01  0.364302e+01  0.363021e+01
 2  0.925000e-04  0.674307e-01  0.347163e+01  0.340420e+01
 3  0.522670e-03  0.297215e+00  0.315047e+01  0.285326e+01
 4  0.107000e-04  0.806697e+00  0.360842e+01  0.280172e+01
 5  0.960000e-05  0.111395e+01  0.377458e+01  0.266063e+01
 6  0.720000e-05  0.168792e+01  0.335986e+01  0.167194e+01
 7  0.620000e-05  0.202631e+01  0.364741e+01  0.162110e+01
 8  0.690000e-05  0.243792e+01  0.373945e+01  0.130153e+01
 9  0.730000e-05  0.305548e+01  0.397821e+01  0.922735e+00
10  0.997000e-04  0.347071e+01  0.415136e+01  0.680642e+00
11  0.137570e-02  0.379457e+01  0.439375e+01  0.599170e+00
12  0.593880e-02  0.416238e+01  0.458430e+01  0.421922e+00
13  0.375400e-02  0.446022e+01  0.474224e+01  0.282027e+00
14  0.323500e-03  0.457285e+01  0.483109e+01  0.258243e+00

9. Experimental statistical analysis and improved probabilistic bounds. We have observed at the end of the previous section that the sample variations U_max − L_min are orders of magnitude larger than the upper-lower bound differences, and indeed we have reasons to believe this to be the general case. It is then not worth spending computer time to compute the Gauss-Radau formulae: we shall from now on consider uniquely the Gaussian summation.


Accordingly, our final formula for the entropy will be

(9.1) S(U, {P_k}, M) ≈ S̄ := (1/I) Σ_{i=1}^{I} S^u_i,


where S^u_i indicates the Gaussian summation result for S_i := (W_i, −Ω^M log(Ω^M) W_i). We now turn to the problem of deriving a confidence interval for S(U, {P_k}, M).

Clearly, S^u_i is a realization of a random variable of mean μ and finite variance η²: therefore, the sample average S̄ is itself a random variable, with the same mean and standard deviation η/√I. In addition, because of the central limit theorem, the distribution of S̄ tends to the normal when I tends to infinity, and we might think of using this fact to improve Hoeffding's bounds.

In the case at hand, moreover, the individual sample values S_i appear to be approximately normally distributed, the more so the larger the value of M: this is apparent in Figure 9.1, where we compare the Gaussian distribution function F(z) := (1 + erf(z/√2))/2 with the experimental distribution functions of the standardized random variables z := (S_i − μ)/η. All quantities (including μ and η) are estimated from the I = 1000 samples of the previous section, and various values of M are reported.

Therefore, we can safely assume that the sample means S̄ are normally distributed to high precision, and we can derive a confidence interval of probability p according to the standard values z_{(1+p)/2} of common usage in statistics:

(9.2) S(U, {P_k}, M) ∈ [S̄ − Z^G_p/√I, S̄ + Z^G_p/√I],  Z^G_p := η z_{(1+p)/2}.

Confidence intervals derived via Hoeffding's inequality have the same form (9.2), where Z^G_p = η z_{(1+p)/2} is replaced by Z^H_p:

(9.3) Z^H_p := (U_max − L_min) √(ln(2/(1 − p))/2);

the usual Chebyshev inequality yields in turn eq. (9.2) with Z^T_p given by

(9.4) Z^T_p := η/√(1 − p).
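In summary, a small helper can compare the three half-widths Z_p/√I of the interval (9.2); here η is estimated from the sample, p = 0.99 is assumed, and z_{(1+p)/2} ≈ 2.576 is the textbook two-sided normal value. The form of (9.3) above is the one that reproduces the Z^H column of Table 9.1 (the Z^G column there is consistent with a slightly smaller constant, z ≈ 2.33):

```python
import numpy as np

def half_widths(samples, U_max, L_min, p=0.99, z=2.5758):
    """Half-widths Z_p / sqrt(I) of the interval (9.2), for the normal,
    Hoeffding (9.3) and Chebyshev (9.4) estimates."""
    I = len(samples)
    eta = np.std(samples, ddof=1)               # sample estimate of eta
    ZG = eta * z                                # normal: Z^G_p = eta z_{(1+p)/2}
    ZH = (U_max - L_min) * np.sqrt(0.5 * np.log(2.0 / (1.0 - p)))
    ZT = eta / np.sqrt(1.0 - p)
    return ZG / np.sqrt(I), ZH / np.sqrt(I), ZT / np.sqrt(I)
```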

[FIGURE 9.1 OMITTED]

In Table 9.1 we report the Z_p values with p = 0.99 for the same case as in Table 8.1. We observe that while Chebyshev's and Hoeffding's inequalities give comparable results (the former being indeed better than the latter in most cases), the normal estimate is superior (that is, narrower) by a factor of about four at this value of p. In terms of computer time, this means a most significant reduction, by a factor of 16, in I, the number of Lanczos evaluations needed to attain the same accuracy.
TABLE 9.1
Confidence values Z_p with p = 0.99 for the same case as Table 8.1.

 M    Z^H_p          Z^T_p          Z^G_p
 1  0.590861e+01  0.458289e+01  0.106781e+01
 2  0.554075e+01  0.535940e+01  0.124874e+01
 3  0.464403e+01  0.470522e+01  0.109632e+01
 4  0.456015e+01  0.451669e+01  0.105239e+01
 5  0.433051e+01  0.366327e+01  0.853543e+00
 6  0.272129e+01  0.296183e+01  0.690107e+00
 7  0.263854e+01  0.250148e+01  0.582845e+00
 8  0.211840e+01  0.195064e+01  0.454500e+00
 9  0.150187e+01  0.153929e+01  0.358653e+00
10  0.110783e+01  0.117137e+01  0.272929e+00
11  0.975224e+00  0.893929e+00  0.208285e+00
12  0.686731e+00  0.660134e+00  0.153811e+00
13  0.459033e+00  0.457901e+00  0.106691e+00
14  0.420323e+00  0.336077e+00  0.783060e-01

10. The Jacobi matrix of the spectral measure of Ω. In this section we propose an alternative to the algorithm G2 just presented and utilized. According to the latter, a Jacobi matrix is computed for each random vector W_i, i = 1, …, I. We now want to compute a single Jacobi matrix for the estimation of the entropy function S(U, {P_k}, M). In this section, Ω will be a shorthand notation for Ω^M.

Technically, the Jacobi matrices computed in the algorithm G2 correspond to the spectral measures ν^i defined as follows: let Ψ_j be the eigenvectors of Ω and λ_j the associated eigenvalues. For any x ∈ R let δ_x be the atomic measure at x. Then, ν^i is the measure

(10.1) ν^i := Σ_j |(Ψ_j, W_i)_K|² δ_{λ_j}.

In physics, this measure is called the "local density of states". Entropy, in turn, is the integral of the function f(x) = −x log(x) with respect to ν, the spectral measure of Ω, called the "density of states" in the physics literature:

(10.2) S(U, {P_k}, M) = ∫ f(x) dν(x),  ν := Σ_j δ_{λ_j}.

It is therefore the Jacobi matrix of ν that we need to compute.

To do this, recall that Ω is an operator from K to itself. Introduce the linear space L of such operators, endowed with a scalar product just as done in K: for any Θ, Ξ ∈ L

(10.3) (Θ, Ξ)_L := Tr(Θ† Ξ),

where obviously the trace is taken in the new space: for any Θ ∈ L

(10.4) Tr(Θ) = Σ_{k,l} (E_{k,l}, Θ E_{k,l})_K,

E_{k,l}, k = 0, …, N − 1, l = 0, …, N/2 − 1, being the basis vectors of K, given by eq. (3.4) and by the remark at the end of section 3.

Define the sequence of polynomials p_n(Ω) of degree n in Ω, initialized by p_{−1}(Ω) = 0, p_0(Ω) = I (0 and I being the null operator and the identity in L), that satisfy the three-term relation

(10.5) Ω p_n(Ω) = b_{n+1} p_{n+1}(Ω) + a_n p_n(Ω) + b_n p_{n−1}(Ω),

with real coefficients a_n, b_n ≥ 0, n = 0, 1, … . The definition is unique if we enforce that these polynomials (each of which is an operator in L) be orthogonal with respect to the scalar product (10.3) and normalized for n > 0:

(10.6) (p_n(Ω), p_m(Ω))_L = δ_{n,m}.

Of course, these are not the orthogonal polynomials of a measure over L: as a matter of fact, the coefficients a_n and b_n do depend on Ω. Yet, they serve our purpose:

LEMMA 10.1. The coefficients a_n and b_n, n = 0, …, are the entries of the Jacobi matrix of the measure ν associated with the Hermitian matrix Ω.

Proof. We start by observing that

∫ dν = N²/2 = (p_0(Ω), p_0(Ω))_L = Tr(I) = ν_0,

and that

∫ x dν = Σ_j λ_j = Tr(Ω) = (p_0(Ω), Ω p_0(Ω))_L = ν_1;

ν_0 and ν_1 are the first two moments of ν. Furthermore, it is easily seen that the coefficients a_n and b_n are constructed as

(10.7) a_n = (p_n(Ω), Ω p_n(Ω))_L = Tr(p_n(Ω)† Ω p_n(Ω)),

and

(10.8) b²_{n+1} = (X_n, X_n)_L,  X_n := (Ω − a_n) p_n(Ω) − b_n p_{n−1}(Ω),

and that Tr(g(Ω)) = Σ_j g(λ_j) = ∫ g(x) dν(x) for any continuous function g. Therefore, p_n(x), computed via eq. (10.5) with x in place of Ω and with the coefficients a_n and b_n derived as above, is the n-th orthogonal polynomial of ν.

A Lanczos algorithm for the computation of this Jacobi matrix follows directly from the previous lemma and can easily be set up, at least in principle. Yet, this algorithm faces two main computational difficulties. Firstly, it requires the computation of the traces (10.7) and (10.8). Secondly, it requires the storage of three Lanczos vectors, p_{n+1}(Ω), p_n(Ω), p_{n−1}(Ω), each of size N⁴/4.

The first difficulty can be overcome by the same statistical idea applied in G2: rather than computing traces as summations over all the N²/2 basis vectors E_{k,l}, choose a fixed set of random vectors {W_i, i = 1, …, I}, and estimate the trace in eq. (10.3) as

(10.9) Tr(Θ) ≈ (1/I) Σ_{i=1}^{I} (W_i, Θ W_i)_K.

The second difficulty can be avoided by noticing that, in so doing, the computation of the traces (10.7) and (10.8) only requires the vectors p_n(Ω) W_i, which can be obtained via repeated action of the matrix Ω in the recursion relation (10.5). Therefore, the resulting algorithm ends up being a variation of G2:

Algorithm 5: Lanczos-Monte Carlo computation of the Jacobi matrix of ν.

* set p_0 = I, b_0 = 0

* for j = 0, …, J

1. set a = 0

- for i = 1, …, I

* generate the random vector W_i and the three-term sequence p_n W_i for n = 0, …, j, as well as the vector Ω p_j W_i

* compute the scalar product (p_j W_i, Ω p_j W_i)_K

* accumulate the result into the variable a

- end

2. set a_j = a/I

3. set b = 0

- for i = 1, …, I

* generate the random vector W_i and the three-term sequence p_n W_i for n = 0, …, j, as well as the vector X_i = (Ω − a_j) p_j W_i − b_j p_{j−1} W_i

* compute the scalar product (X_i, X_i)_K

* accumulate the results into the variable b

- end

4. set b_{j+1} = √(b/I)

* end

Of course, all the usual precautions to be observed in Lanczos algorithms apply here, like early termination for small b at step 4. Once the Jacobi matrix of ν has been obtained, it can be easily diagonalized, and the Gaussian integration performed. The advantage of this technique is that diagonalization is performed only once, and it can be effected at every value of J. In Figure 10.1 we show the estimated entropy versus M and J. We notice that good results can be obtained already at J = 2. Computational complexity can improve upon that of G2 if the vectors p_j(Ω) W_i can be stored in memory for three consecutive values of j.
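A sketch of Algorithm 5 closes this section; for clarity, each three-term sequence p_n W_i is regenerated from scratch at every step (the storage-time trade-off just mentioned), and the fixed set of random vectors is realized by re-seeding the generator:

```python
import numpy as np

def three_term(matvec, W, a, b, j):
    """p_j(Omega) W and p_{j-1}(Omega) W via the recursion (10.5).
    Assumes no breakdown (see the precaution on small b above)."""
    p_prev, p = np.zeros_like(W), W.astype(complex)   # p_{-1} W = 0, p_0 W = W
    for n in range(j):
        p_prev, p = p, (matvec(p) - a[n] * p - b[n] * p_prev) / b[n + 1]
    return p, p_prev

def jacobi_of_nu(matvec, dim, J, I, seed=0):
    """Algorithm 5: coefficients a_0..a_J, b_1..b_{J+1} of the Jacobi matrix
    of nu, with traces estimated by Z4 averages as in (10.9)."""
    a, b = [], [0.0]                                  # b[0] = b_0 = 0
    for j in range(J + 1):
        for step in (1, 3):                           # steps 1-2, then steps 3-4
            rng = np.random.default_rng(seed)         # fixed set of vectors W_i
            acc = 0.0
            for _ in range(I):
                W = rng.choice(np.array([1, -1, 1j, -1j]), size=dim)
                pj, pjm1 = three_term(matvec, W, a, b, j)
                if step == 1:
                    acc += np.vdot(pj, matvec(pj)).real
                else:
                    X = matvec(pj) - a[j] * pj - b[j] * pjm1
                    acc += np.vdot(X, X).real
            if step == 1:
                a.append(acc / I)                     # step 2: a_j = a / I
            else:
                b.append(np.sqrt(acc / I))            # step 4: b_{j+1} = sqrt(b/I)
    return a, b
```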

[FIGURE 10.1 OMITTED]

11. Conclusions. Quantum chaos, two examples. We can now finally enjoy the display of the A-F entropies S(U, {P_k}, M) versus M, as confidence intervals, for the two paradigmatic examples introduced in section 2. Notice however that the plots displayed refer to the contribution of the σ_0 = 0 sector of the matrix (see eq. (3.8) and the related discussion), which turns out to correspond exactly to one half of the total value. Figure 11.1 displays, versus M, the p = 0.99 confidence intervals that, thanks to the normal estimates of section 9, are smaller than the symbol size. In the former case, that of the cat map, we observe an initial linear increase of S(U, {P_k}, M), followed by saturation at the maximum attainable value S^max = log(N). This termination is induced mathematically by the finiteness of the Hilbert space of the system, and physically by the effect of quantum interference. In the second case, a sublinear increase, also followed by saturation, is observed.

Physical analysis takes off from this point [7, 3]. In the renowned problem of quantum-classical correspondence, this logarithmic time barrier [8] could be beaten following the prescriptions of decoherence [10]: the present work aims at developing the analytical and numerical techniques needed to address this problem rigorously. It is our conviction that the numerical techniques presented in this paper will open the way to investigations up to now impossible with conventional algorithms [2].

In conclusion, I have presented in this paper a series of techniques that render viable the computation of the A-F entropy for systems of physical interest. These techniques rest on the algorithmic properties of the Ω^M matrices, introduced in the original work [3] and here systematically investigated, and on a blend of parallel computing algorithms and the Lanczos technique of Golub. In addition, I have shown how the normality of the distribution of the statistical samples permits a large improvement of the statistical bounds provided in [5], allowing a significant reduction in computer time. Finally, an algorithm for the direct computation of the Jacobi matrix associated with the spectral measure of Ω^M has been presented. Its performance, in comparison with the previous algorithms, will be the object of further investigation. Beyond the present problem, this last algorithm might have relevance in the study of singular continuous measures and of their Fourier transforms [11, 12].

[FIGURE 11.1 OMITTED]

Acknowledgments. This research benefitted from discussions and advice by Gene Golub. The author warmly remembers his enthusiasm, dedication and kindness. The author also wishes to thank Luca Paredi for invaluable assistance in taming a reluctant processor cluster.

* Received February 28, 2007. Accepted for publication January 15, 2008. Recommended by M. Gutknecht. Supported by PRIN 2005 "Transport properties of classical and quantum systems". Computation time has been provided by CINECA under the INFM-CNISM grant "Open quantum systems: transport and decoherence".

REFERENCES

[1] R. ALICKI AND M. FANNES, Defining quantum dynamical entropy, Lett. Math. Phys., 32 (1994), pp. 75-82.

[2] R. ALICKI, A. LOZINSKI, P. PAKONSKI, AND K. ZYCZKOWSKI, Quantum dynamical entropy and decoherence rate, J. Phys. A: Math. Gen., 37 (2004), pp. 5157-5172.

[3] R. ALICKI, D. MAKOWIEC, AND W. MIKLASZEWSKI, Quantum chaos in terms of entropy for a periodically kicked top, Phys. Rev. Lett., 77 (1996), pp. 838-841.

[4] V. I. ARNOL'D AND A. AVEZ, Ergodic Problems of Classical Mechanics, W. A. Benjamin, Inc., New York, 1968.

[5] Z. BAI, M. FAHEY, AND G. H. GOLUB, Some large-scale matrix computation problems, J. Comput. Appl. Math., 74 (1996), pp. 71-89.

[6] S. DONG AND K. LIU, Stochastic estimations with Z2 noise, Phys. Lett. B, 328 (1994), pp. 130-136.

[7] J. FORD, G. MANTICA, AND G. RISTOW, The Arnol'd cat: Failure of the correspondence principle, Phys. D, 50 (1991), pp. 493-520.

[8] J. FORD AND G. MANTICA, Does quantum mechanics obey the correspondence principle? Is it complete?, Am. J. Phys., 60 (1992), pp. 1086-1098.

[9] G. H. GOLUB AND G. MEURANT, Matrices, moments and quadrature, Tech. Report SCCM-93-07, Stanford University, 1993.

[10] A. R. KOLOVSKY, Quantum coherence, evolution of the Wigner function, and transition from quantum to classical dynamics, Chaos, 6 (1996), pp. 534-542.

[11] G. MANTICA, Fourier-Bessel functions of singular continuous measures and their many asymptotics, Electron. Trans. Numer. Anal., 25 (2006), pp. 409-430. http://etna.math.kent.edu/vol.25.2006/pp409-430.dir/pp409-430.html

[12] G. MANTICA, Fourier transforms of orthogonal polynomials of singular continuous spectral measures, in Applications and Computation of Orthogonal Polynomials (Oberwolfach, 1998), Internat. Ser. Numer. Math., 131, Birkhäuser, Basel, 1999, pp. 153-163.

[13] YA. B. PESIN, Characteristic Lyapunov exponents and smooth ergodic theory, Russian Math. Surveys, 32 (1977), pp. 55-114.

[14] M. C. REED AND B. SIMON, Methods of Modern Mathematical Physics II. Fourier Analysis, Self-Adjointness, Academic Press, New York, 1975.

GIORGIO MANTICA †

† Center for Non-linear and Complex Systems, I.N.F.N. and C.N.I.S.M., Università dell'Insubria, Via Valleggio 11, 22100 Como, Italy (giorgio@uninsubria.it).