
Reasoning about Grover's Quantum Search Algorithm Using Probabilistic wp.

1. INTRODUCTION

Quantum computers are a proposed means of using quantum-mechanical effects to achieve efficient computation. Quantum-mechanical systems may be in superpositions of several different states simultaneously. The central idea of quantum computers is to perform operations on these superposed states simultaneously and thus achieve a form of parallel computation. These devices were proposed in the early 1980's [Benioff 1980; Deutsch 1985].

One essential phenomenon of quantum mechanics is that the measurement of a superposed system forces it into a single classical state. Each superposed state is present with a certain amplitude, and an observation causes the system to collapse to that state with a probability that depends on its amplitude. This means that, although many computations may be performed in parallel on a quantum device, the result of only one of these may be observed. This may seem like a severe limitation, but several ingenious algorithms have been devised which work by increasing the amplitude of the desired outcome before any observation is performed, thus increasing the likelihood of the observed outcome being the desired one.

One such algorithm is Grover's quantum search algorithm [Grover 1997], which performs a search on an unstructured search space of size N in O(√N) steps. To find the desired search value with 100% probability in such a space, a classical computer cannot do better than a linear-time search. Grover's algorithm performs operations on a superposition of all possible search values that serve to increase the amplitude of the desired search value. Grover shows that within O(√N) steps there is a greater than 50% chance of finding the desired search value. Boyer et al. [1998] proved a stronger result for the algorithm, showing that the correct search value can be found in O(√N) steps with almost 100% probability.

In this article, we apply the probabilistic weakest-precondition (wp) calculus of Morgan et al. [1996] to Grover's algorithm to redevelop the result of Boyer et al. [1998] in a more systematic way. The probabilistic wp-calculus is an extension of Dijkstra's standard wp-calculus [Dijkstra 1976] developed for reasoning about the correctness of imperative programs. The extension supports reasoning about programs containing probabilistic choice. The measurement of a quantum superposition is an example of a probabilistic choice.

Use of the probabilistic wp-calculus contributes two essential ingredients to the analysis of quantum algorithms. Firstly it provides an elegant and rigorous notation for describing quantum algorithms. The existing literature uses block diagrams and structured English which can be cumbersome and potentially ambiguous. Secondly, the probabilistic wp-calculus provides a set of rules for the systematic analysis of the correctness of algorithms. In the case of standard algorithms, the calculus is used to determine whether a program achieves some desired outcome. In the case of probabilistic algorithms, the calculus is used to reason about the probability of a program achieving some desired outcome.

This article is not simply about re-presenting a known result about Grover's algorithm; it also aims to demonstrate that the probabilistic wp-calculus is suitable for both modeling and reasoning about a quantum algorithm. Boyer et al. [1998] have already derived the same result that we derive here, but they do so in a less systematic way. Our hope is that the approach used here could be applied fruitfully to other quantum algorithms and will aid the development of new quantum algorithms.

The article is organized as follows. In Section 2, we give a sufficient overview of quantum theory. In Section 3, we present our approach to modeling quantum computation using the programming language of the probabilistic wp-calculus. In Section 4, we present Grover's algorithm using the approach of Section 3. In Section 5, we give a sufficient introduction to the rules of the probabilistic wp-calculus, and in Section 6, we use the wp-calculus to derive a formula for the probability of success of Grover's algorithm.

2. QUANTUM SYSTEMS AND QUBITS

In quantum mechanics, a superposition of two states A and B may be represented in Dirac's notation as follows:

S = α|A> + β|B>.

System S is said to be in a superposition of A and B. A and B are the basis states, and α and β are amplitudes. The amplitudes may be complex numbers.

Let ‖z‖² be the square norm(1) of a complex number z. Observation of S will cause the system to collapse to state |A> with probability ‖α‖² and to |B> with probability ‖β‖². The probabilities must sum to 1:

‖α‖² + ‖β‖² = 1.
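As a small illustration (our own, not part of the original presentation), the collapse probabilities can be computed directly from the amplitudes; the amplitudes below are chosen arbitrarily:

    import math

    alpha = complex(0.6, 0.0)    # amplitude of |A>, an arbitrary choice
    beta = complex(0.0, 0.8)     # amplitude of |B>

    p_A = abs(alpha) ** 2        # ||alpha||^2 = 0.36, probability of collapsing to |A>
    p_B = abs(beta) ** 2         # ||beta||^2  = 0.64, probability of collapsing to |B>
    assert math.isclose(p_A + p_B, 1.0)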

A qubit is a two-state quantum system in which the basis states are labeled 0 and 1:

S = α|0> + β|1>.

A classical bit has α = 1 and β = 0 or α = 0 and β = 1.

A qubit evolves from one superposition to another using a quantum gate (or function) F:

F(α|0> + β|1>) = α'|0> + β'|1>.

F must be unitary, which means that

--1-summing of probabilities is preserved: ‖α'‖² + ‖β'‖² = 1 = ‖α‖² + ‖β‖², and

--F has an inverse.

In quantum mechanics, a transformation F is usually modeled using matrix multiplication:

[α']       [α]
[β'] = U_F [β],

where U_F is a 2 × 2 unitary matrix. A matrix U is unitary if U.U† = U†.U = I, where U† is the conjugate transpose of U. It can be shown that such a transformation defined by a unitary matrix U_F is unitary [Bernstein and Vazirani 1993].
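As a quick numerical illustration (ours, not from the paper), both defining properties can be checked for a concrete 2 × 2 matrix in Python with NumPy; the Hadamard gate is used purely as a familiar example of a unitary:

    import numpy as np

    # The Hadamard gate, a standard example of a 2 x 2 unitary matrix
    U = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    # Unitary: U . U-dagger = U-dagger . U = I
    assert np.allclose(U @ U.conj().T, np.eye(2))
    assert np.allclose(U.conj().T @ U, np.eye(2))

    # 1-summing of probabilities is preserved under the transformation
    amps = np.array([0.6, 0.8j])                 # ||alpha||^2 + ||beta||^2 = 1
    new_amps = U @ amps
    assert np.isclose(np.sum(np.abs(new_amps) ** 2), 1.0)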

A quantum superposition may have an arbitrary number of basis states, not just two. An N-state superposition is represented as

S = Σ_{i=0..N-1} α_i|i>.

Observation of S will cause it to collapse to state |i> with probability ‖α_i‖². Again, the probabilities must sum to 1:

Σ_{i=0..N-1} ‖α_i‖² = 1.

A quantum register is a collection of qubits, and an L-qubit register gives rise to a system with [2.sup.L] basis states. Like qubits, quantum registers evolve under unitary transformations.

For further details on quantum computation, the reader is referred to papers such as Berthiaume [1996] and Deutsch and Ekert [1998].

3. MODELING QUANTUM COMPUTERS

A quantum computer is a collection of quantum registers and quantum gates. In this section, we introduce ways of modeling various aspects of quantum computation using the programming language of the probabilistic wp-calculus. We use a subset of the language which includes standard assignment, probabilistic assignment, sequential composition, and simple loops.

Firstly, we model an N-state quantum system as a function from state indices to complex numbers: S : (0..N-1) → C.

A superposition of the form

S = Σ_{i=0..N-1} α_i|i>

is modeled by the function S where, for 0 ≤ i < N:

S(i) = α_i.

A classical state i is modeled by the function which is zero everywhere except at i, which we write as |i>:

|i>(j) = 1, if i = j,
       = 0, otherwise.

Transformation of a quantum state is modeled by a standard assignment statement:

S := F(S).

F must be unitary for this to be a valid quantum transformation.

We shall find it convenient to use lambda abstraction to represent transformations: (λi | 0 ≤ i < N · E) represents the function that takes an argument i in the range 0..N-1 and returns the value E. For example, the unitary transformation that inverts the amplitude of each basis state is modeled as follows:

S := (λi | 0 ≤ i < N · -S(i)).
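The following Python sketch (our illustration, with an assumed size N = 8) mirrors this modeling: a superposition is a function from indices to amplitudes, and the amplitude-inverting transformation rebinds S to a new function:

    N = 8

    # An equal superposition: every basis state has amplitude 1/sqrt(N)
    S = lambda i: 1 / N ** 0.5

    # S := (lambda i | 0 <= i < N . -S(i)): invert the amplitude of each basis state
    old_S = S
    S = lambda i, old_S=old_S: -old_S(i)

    print([S(i) for i in range(N)])    # amplitudes negated; square norms unchanged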

Sequencing of transformations is modeled using sequential composition: if T_1 and T_2 are transformations, then their sequential composition is written T_1; T_2.

The loop which iterates C times over a transformation T is written do C times T od.

We model the observation of a quantum system using a probabilistic assignment statement. In the simple case, this is a statement of the form

x := E @ p, F @ (1 - p).

This says that x takes the value E with probability p and the value F with probability 1 - p. For example, a coin flip is modeled by

coin := head @ 0.5, tail @ 0.5.

Observation of a two-state superposition forces the system into a classical state. This is modeled with the following probabilistic assignment:

S := |0> @ ‖S(0)‖², |1> @ ‖S(1)‖².

A generalized probabilistic statement has the form

x := E_i @ p_i, 0 ≤ i < N,

where Σ_{i=0..N-1} p_i = 1.

Now observation of an N-state quantum system S may be modeled by

S := |i> @ ‖S(i)‖², 0 ≤ i < N.

That is, S collapses to the classical state i with probability ‖S(i)‖².
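A small Python sketch of this measurement step (the helper name measure is ours): an index i is sampled with probability ‖S(i)‖², and the result is the classical state |i>:

    import random

    def measure(S, N):
        weights = [abs(S(i)) ** 2 for i in range(N)]       # sum to 1 for a valid superposition
        i = random.choices(range(N), weights=weights)[0]   # collapse to i with probability ||S(i)||^2
        return lambda j, i=i: 1.0 if j == i else 0.0       # the classical state |i>

    N = 4
    S = lambda i: 0.5                                      # equal superposition of four states
    S = measure(S, N)
    print([S(j) for j in range(N)])                        # exactly one entry is 1.0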

4. GROVER'S SEARCH ALGORITHM

The Grover search problem may be stated as follows:

Given a function f : (0..N-1) → {0,1} that is zero everywhere except for one argument x_0, where f(x_0) = 1, find that argument x_0.

The algorithm makes use of the mean of a superposition S, written S̄, where

S̄ = (1/N).Σ_{i=0..N-1} S(i).

The algorithm is represented in the programming language of the probabilistic wp-calculus in Figure 1. The initialization of this algorithm sets the system S up in an equal superposition of all possible basis states. Details of how this distribution is achieved may be found in Grover [1997]. Successive iterations of the loop then serve to increase the amplitude of the search argument x_0 while decreasing the amplitude of the other arguments. To see why this is so, consider Figure 2, which illustrates the evolution of the superposition during one loop iteration for the case of N = 8. The initialization sets S up in an equal superposition of the eight possible states, represented diagrammatically in (a). The first step of the loop body replaces each S(i) with S(i) - 2.f(i).S(i). This inverts S(i) about the origin in the case that f(i) = 1 and leaves S(i) unchanged in the case that f(i) = 0. Assuming that f(4) = 1, this replaces superposition (a) with superposition (b). The second step of the loop body inverts each amplitude about the average of all the amplitudes, resulting in superposition (c). The amplitude of state |4> has increased as a result of the two steps of the loop body, while the amplitudes of the others have decreased.

Fig. 1. Grover's search algorithm.

Init      S := (λi | 0 ≤ i < N · 1/√N);

do C times

Body      S := (λi | 0 ≤ i < N · S(i) - 2.f(i).S(i));

          S := (λi | 0 ≤ i < N · 2.S̄ - S(i))

od;

Measure   S := |i> @ ‖S(i)‖², 0 ≤ i < N.

[Figure 2 ILLUSTRATION OMITTED]

After an optimum number of iterations, C, the amplitude of |x_0> approaches 1 while the amplitudes of the other states approach 0. An observation is then performed. Since the amplitude of |x_0> approaches 1, the probability of the observation yielding |x_0> is close to 1. C depends on the number of states N and, as discussed in Section 6, it is O(√N).
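The algorithm of Figure 1 is easily simulated classically. The Python sketch below is ours; the search space size N = 8 and the marked value x0 = 4 match the example of Figure 2. It runs the initialization, the two steps of the loop body, and reports the probability that the final measurement yields |x0>:

    import math

    N = 8
    x0 = 4                                         # the single argument with f(x0) = 1
    f = lambda i: 1 if i == x0 else 0

    # Init: equal superposition with amplitude 1/sqrt(N)
    S = [1 / math.sqrt(N)] * N

    C = 2                                          # optimum iteration count for N = 8 (see Section 6)
    for _ in range(C):
        # Body, step 1: S(i) - 2.f(i).S(i), i.e. invert the marked amplitude about the origin
        S = [S[i] - 2 * f(i) * S[i] for i in range(N)]
        # Body, step 2: 2.mean(S) - S(i), i.e. invert every amplitude about the average
        mean = sum(S) / N
        S = [2 * mean - S[i] for i in range(N)]

    # Measure: the probability of observing |x0> is ||S(x0)||^2
    print(abs(S[x0]) ** 2)                         # about 0.95 for N = 8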

5. PROBABILISTIC WP

In two-valued logic, a predicate may be modeled as a function from some state space to the set {0, 1}. For example, the predicate x > y evaluates to 1 in a state in which x is greater than y and evaluates to 0 in any other state. A probabilistic predicate generalizes the range to the continuous space between 0 and 1 [Kozen 1985; Morgan et al. 1996]. For example, the probabilistic predicate 0.5 × (x > y) evaluates to 0.5 in a state in which x is greater than y and evaluates to 0 in any other state.

In the standard wp-calculus, the semantics of imperative programs is given using weakest-precondition formulae: for program prog and postcondition post, wp(prog, post) represents the weakest precondition (or maximal set of initial states) from which prog is guaranteed to terminate and result in a state satisfying post. The wp rule for assignment is given by

(1) wp(x := E, post) = post[x := E].

Here, post [x := E] represents predicate post with all free occurrences of x replaced by E. For example, we have

wp(x := 7, x > y) = (x > y)[x := 7] = 7 > y.

That is, the assignment x := 7 is guaranteed to establish x > y provided 7 > y initially. The wp rule for sequential composition is given by

(2) wp(prog1;prog2, post) = wp(prog1, wp(prog2, post)).

Both (1) and (2) also apply in the probabilistic wp-calculus. The wp rule for simple probabilistic assignment is given by

(3) wp( x := E @ p, F @ 1 - p, post ) = p × post[x := E] + (1 - p) × post[x := F].

In the case of nonprobabilistic post, wp(prog, post) represents the probability that program prog establishes post. For example

wp( coin := head @ 0.5, tail @ 0.5, coin = head )
    = by (3)
0.5 × (coin = head)[coin := head] + 0.5 × (coin = head)[coin := tail]
    = substitution
0.5 × (head = head) + 0.5 × (tail = head)
    = 0.5 × 1 + 0.5 × 0
    = 0.5.

That is, a coin flip establishes coin = head with probability 0.5. The wp rule for the generalized probabilistic assignment is given by

(4) wp( x := E_i @ p_i, 0 ≤ i < N, post ) = Σ_{i=0..N-1} p_i × post[x := E_i].

Our wp-rules for probabilistic assignment are based on Morgan et al. [1996].
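Rules (3) and (4) say that, for a probabilistic assignment, the weakest precondition is an expected value over the branches. A small Python sketch of rule (4) follows; the function and argument names are ours:

    def wp_prob_assign(branches, post):
        """branches is a list of pairs (E_i, p_i) with the p_i summing to 1;
        post maps a value of x to a number in [0, 1]; the result is rule (4)."""
        return sum(p * post(E) for E, p in branches)

    # The coin-flip example: wp(coin := head @ 0.5, tail @ 0.5, coin = head) = 0.5
    post = lambda coin: 1 if coin == "head" else 0
    print(wp_prob_assign([("head", 0.5), ("tail", 0.5)], post))   # 0.5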

The only other programming construct we need in order to model Grover's algorithm is the DO-loop. Since the algorithm only loops a constant and finite number of times, we can model do C times prog od as a finite sequential composition of C copies of prog, which we write as prog^C. We have that

(5) prog^0 = skip

(6) prog^(i+1) = prog; prog^i.

Here, skip is the statement that does nothing, with wp(skip, post) = post. The semantics of more general looping constructs is given by least fixed points in the usual way [Kozen 1985; Morgan et al. 1996], but we do not need that here.

6. REASONING ABOUT GROVER

The postcondition we are interested in for the Grover algorithm is that the correct solution is found, i.e., S = |x_0>. The probability that Grover establishes S = |x_0> is given by wp(Grover, S = |x_0>), so we shall calculate this.

The Grover algorithm has the following structure:
Init;
do C times
   Body
od;
Measure.


We abbreviate this to

Init; Body^C; Measure.

When calculating a formula of the form wp(prog1; prog2, post), we first calculate wp(prog2, post) and then apply wp(prog1, _) to the result of this. Thus, to calculate wp(Grover, S = |x_0>), we first calculate wp(Measure, S = |x_0>):

(7) wp( Measure, S = |x_0> ) = ‖S(x_0)‖².

Next we calculate wp( Body^C, ‖S(x_0)‖² ). Body^C is defined recursively by (5) and (6), so we shall develop recursive equations for wp( Body^C, ‖S(x_0)‖² ). First we look at the weakest precondition of a single iteration. Let P[S] stand for a predicate P containing one or more free occurrences of variable S, and let P[T] stand for P with all free occurrences of S replaced by T. We calculate wp( Body, P[S] ) as follows (omitting the constraint 0 ≤ i < N):

wp( Body, P[S] )
    = by (2)
wp( S := (λi · S(i) - 2.f(i).S(i)), wp( S := (λi · 2.S̄ - S(i)), P[S] ) )
    = by (1)
wp( S := (λi · S(i) - 2.f(i).S(i)), P[(λi · 2.S̄ - S(i))] )
    = by (1), writing S' for (λi · S(i) - 2.f(i).S(i))
P[(λi · 2.S̄' - S'(i))].

Now, straightforward calculation shows that

S̄' = S̄ - 2.S(x_0)/N

2.S̄' - S'(i) = 2.S̄ - (4/N).S(x_0) + (2.f(i) - 1).S(i).

Hence, we get

(8) wp( Body, P[S] ) = P[S'']

where S''(i) = 2.S̄ - (4/N).S(x_0) + (2.f(i) - 1).S(i).
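Equation (8) can be checked numerically: running the two assignments of Body forwards on arbitrary amplitudes must agree with the substituted form S''. The following sketch is ours; N, x_0, and the random amplitudes are arbitrary:

    import random

    N, x0 = 8, 4
    f = lambda i: 1 if i == x0 else 0
    S = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

    # Forward execution of Body
    S1 = [S[i] - 2 * f(i) * S[i] for i in range(N)]
    mean1 = sum(S1) / N
    forward = [2 * mean1 - S1[i] for i in range(N)]

    # The substituted form S'' of equation (8)
    mean = sum(S) / N
    S2 = [2 * mean - (4 / N) * S[x0] + (2 * f(i) - 1) * S[i] for i in range(N)]

    assert all(abs(forward[i] - S2[i]) < 1e-9 for i in range(N))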

From (8), we have that

wp( Body, ‖S(x_0)‖² ) = ‖S''(x_0)‖² = ‖2.S̄ + ((N - 4)/N).S(x_0)‖², since f(x_0) = 1.

Now this has the form ‖A.S̄ + B.S(x_0)‖², and using (8) we can again show that, for any values A and B, we have

(9) wp( Body, ‖A.S̄ + B.S(x_0)‖² ) = ‖(A + 2.B).S̄ + ((N.B - 2.A - 4.B)/N).S(x_0)‖².

This recurring structure suggests that we define A_i and B_i as follows:

(10) A_(i+1) = A_i + 2.B_i

(11) B_(i+1) = (N.B_i - 2.A_i - 4.B_i)/N,

to give

(12) wp( Body, ‖A_i.S̄ + B_i.S(x_0)‖² ) = ‖A_(i+1).S̄ + B_(i+1).S(x_0)‖².

By induction over j, we get

(13) wp( Body^j, ‖A_i.S̄ + B_i.S(x_0)‖² ) = ‖A_(i+j).S̄ + B_(i+j).S(x_0)‖².

We take A_0 = 0 and B_0 = 1 and apply Body^C to (7) as follows:

wp( Body^C, ‖S(x_0)‖² )
    = since A_0 = 0 and B_0 = 1
wp( Body^C, ‖A_0.S̄ + B_0.S(x_0)‖² )
    = by (13)
‖A_C.S̄ + B_C.S(x_0)‖².

Finally, we apply the initialization to this:

wp( Init, ‖A_C.S̄ + B_C.S(x_0)‖² )
    = by (1), with every amplitude equal to 1/√N
‖A_C.(1/√N) + B_C.(1/√N)‖²
    = ‖A_C + B_C‖²/N.

Thus we have shown that

wp( Grover, S = |x_0> ) = ‖A_C + B_C‖²/N.

That is, the probability, P(C, N), of observing the correct value after C iterations is

P(C, N) = ‖A_C + B_C‖²/N.

By applying standard mathematical analysis techniques to the formula A_C + B_C, we can derive the following closed form for P(C, N):

P(C, N) = sin²((2.C + 1).θ_N)

where θ_N = arcsin(1/√N).

This is the same as the formula presented in Boyer et al. [1998]. The derivation of this closed form is outlined in the appendix.
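The recurrences (10) and (11) and the closed form can be compared directly in a few lines of Python (a sketch of ours; the values of N and C are arbitrary):

    import math

    def P_recurrence(C, N):
        A, B = 0.0, 1.0
        for _ in range(C):                  # (10) and (11); both right-hand sides use the old A, B
            A, B = A + 2 * B, (N * B - 2 * A - 4 * B) / N
        return (A + B) ** 2 / N

    def P_closed(C, N):
        theta = math.asin(1 / math.sqrt(N))
        return math.sin((2 * C + 1) * theta) ** 2

    N = 128
    assert all(math.isclose(P_recurrence(C, N), P_closed(C, N), abs_tol=1e-12) for C in range(20))
    print(P_closed(8, N))                   # about 0.996, the optimum discussed below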

It is interesting to note that P(C, N) is periodic in C. This can be seen clearly in Figure 3 which graphs P(C, N) against C for N = 128. Here, an optimum probability of success is reached after eight iterations, where P(8,128) = 0.996. After eight iterations, the probability starts to decrease again. The reason for the decrease is that, after eight iterations, the average immediately after the inversion about the origin operation goes below zero.

[Figure 3 ILLUSTRATION OMITTED]

We wish to determine the optimum number of iterations for a given N. P reaches a maximum (and a minimum) for a given N when

d/dx P(x, N) = 0.

It is easy to show that the first maximum for a given N is reached at H(N) where

H(N) = π/(4.θ_N) - 1/2 and θ_N = arcsin(1/√N).

Thus the number of iterations in the Grover algorithm for a search space of size N should be the closest whole number to H(N). H(N) is graphed in Figure 4. Since arcsin x is O(x) [Abramowitz and Stegun 1964, 4.4.40], we have that θ_N is O(1/√N) and hence that H(N) is O(√N).

[Figure 4 ILLUSTRATION OMITTED]
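A short Python sketch (ours) of this prescription: compute H(N), round it to the nearest whole number of iterations, and report the corresponding success probability:

    import math

    def H(N):
        theta = math.asin(1 / math.sqrt(N))
        return math.pi / (4 * theta) - 0.5

    for N in (8, 128, 1024, 2 ** 20):
        C = round(H(N))                                    # closest whole number of iterations
        theta = math.asin(1 / math.sqrt(N))
        print(N, C, math.sin((2 * C + 1) * theta) ** 2)    # e.g. N = 128 gives C = 8, P = 0.996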

7. CONCLUSIONS

We have shown how Grover's search algorithm may be represented in the programming notation of the probabilistic wp-calculus. Any quantum computation consists of unitary transformations and probabilistic measurement, and these can be modeled in this notation. Thus any quantum algorithm may be modeled in the notation. We believe that this language provides a more rigorous and elegant means of describing quantum algorithms than is normally used in the literature.

We have also shown how the rules of the probabilistic wp-calculus may be used to derive a recursive formula for the probability that Grover's algorithm finds the required solution. Using standard mathematical techniques, we were then able to find a closed form for this probability which corresponds to the formula presented in Boyer et al. [1998]. The wp-calculus provides a clear and systematic means of stating the required outcome and of deriving the probability of achieving it.

It is instructive to compare the reasoning approach used here with the approach used in Grover [1997] and Boyer et al. [1998]. When reasoning using the probabilistic wp-calculus, we reasoned backward from the expected outcome with a single arithmetic formula ‖A_i.S̄ + B_i.S(x_0)‖². In contrast, Grover [1997] and Boyer et al. [1998] worked in a forward manner, reasoning about how a superposition, i.e., a vector of amplitudes, evolves from an initial distribution to a final distribution. Thus we only need to work with a single formula, whereas they need to work with a vector of formulae(2), which can be more cumbersome. A similar observation is made in Kozen [1985] and Morgan et al. [1996], where it is shown that, in general, wp-style probabilistic reasoning involves working backward with an expectation, whereas more conventional reasoning involves working forward with a probability distribution.

In the case of Grover, we were able to derive an exact probability for success because the algorithm iterates a fixed number of times. Some algorithms iterate until some condition is met rather than a fixed number of times. One such example is a generalization of Grover's search algorithm presented in Boyer et al. [1998] which deals with the situation where there are an unknown number of values x satisfying f(x) = 1. In a case like this, we need to find the expected number of iterations rather than the probability of success. Such cases would require the specialized loop rules of Morgan [1996].

An important feature of the calculus of Morgan et al. [1996] not found in Kozen [1985] and not used in this article is the modeling of demonic nondeterminism. This is crucial in standard programming calculi to the notion of program specification and refinement as exemplified by Morgan [1994], and it may be possible to apply such techniques to the derivation of quantum algorithms.

ACKNOWLEDGMENTS

Thanks to Tony Hey and other members of the Quantum Sticky Bun Club for inspiration and to Peter Hoyer and the anonymous referees for valuable comments on the article.

(1) The square norm of any complex number a + bi is a² + b².

(2) In fact, Grover [1997] and Boyer et al. [1998] simplify the reasoning by working with a pair of formulae, one representing the amplitude of |x_0> and one representing the amplitude of the other N - 1 states, rather than N formulae. However, this simplification may not be possible for all quantum algorithms.

REFERENCES

ABRAMOWITZ, M. AND STEGUN, I. A. 1964. Handbook of mathematical functions. Number 55 in Applied Mathematics Series. U. S. Dept. of Commerce, National Bureau of Standards.

BENIOFF, P. 1980. The computer as a physical system: a microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics 22, 563-591.

BERNSTEIN, E. AND VAZIRANI, U. 1993. Quantum complexity theory. In 25th ACM Annual Symposium on Theory of Computing. 11-20.

BERTHIAUME, A. 1996. Quantum computation. In Complexity Theory Retrospective II. Springer-Verlag. Available from http://andre.cs.depaul.edu/Andre/publicat.htm.

BOYER, M., BRASSARD, G., HOYER, P., AND TAPP, A. 1998. Tight bounds on quantum searching. Fortschritte Der Physik 46, 4-5, 493-505. Available from http://xxx.lanl.gov/abs/quant-ph/9605034.

DEUTSCH, D. 1985. Quantum theory, the Church-Turing principle and the universal quantum computer. In Royal Society London A 400. 96-117.

DEUTSCH, D. AND EKERT, A. 1998. Quantum computation. Physics World 11, 3, 47-52. Available from http://www.qubit.org/.

DIJKSTRA, E. 1976. A Discipline of Programming. Prentice-Hall.

GROVER, L. 1997. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters 79, 2, 325-328.

KNUTH, D. 1973. The Art of Computer Programming - Volume 1: Fundamental Algorithms, second ed. Addison-Wesley, Reading, Mass.

KOZEN, D. 1985. A probabilistic PDL. J. Comput. Syst. Sci. 30, 162-178.

MORGAN, C. 1994. Programming from Specifications, Second ed. Prentice-Hall.

MORGAN, C. 1996. Proof rules for probabilistic loops. In BCS-FACS Refinement Workshop, H. Jifeng, J. Cooke, and P. Wallis, Eds. Electronic Workshops in Computing. Springer-Verlag. Available from http://www.comlab.ox.ac.uk/oucl/publications/tr/TR-25-95.html.

MORGAN, C., MCIVER, A., AND SEIDEL, K. 1996. Probabilistic predicate transformers. ACM Trans. Program. Lang. Syst. 18, 3, 325-353.

Received September 1998; accepted December 1998

APPENDIX

We outline the derivation of the closed-form expression for the probability of success of Grover's algorithm. The probability P(C, N) is expressed in terms of the series A_i and B_i, which in turn are defined by the recurrence equations (10) and (11). To find a closed form for these recurrences, we first compute the generating functions G_A(z) = Σ_{i≥0} A_i.z^i and G_B(z) = Σ_{i≥0} B_i.z^i for A_i and B_i respectively, using basic techniques [Knuth 1973, Sect. 1.2.9]:

G_A(z) = 2.N.z / (N + 2.(2 - N).z + N.z²)

G_B(z) = (N - N.z) / (N + 2.(2 - N).z + N.z²).

Computing the probability involves the sum A_i + B_i, and it seems reasonable to examine the Taylor series expansion of the sum of the two generating functions. Assume that z is such that the Taylor series expansion of G_A(z) + G_B(z) converges:

G_A(z) + G_B(z) = 1 + ((3.N - 4)/N).z + ((5.N² - 20.N + 16)/N²).z² + ((7.N³ - 56.N² + 112.N - 64)/N³).z³ + ...
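The first few coefficients of this expansion can be spot-checked against the recurrences (10) and (11) using exact rational arithmetic (a sketch of ours; the choice N = 7 is arbitrary):

    from fractions import Fraction

    N = Fraction(7)
    coefficients = [Fraction(1),
                    (3 * N - 4) / N,
                    (5 * N ** 2 - 20 * N + 16) / N ** 2,
                    (7 * N ** 3 - 56 * N ** 2 + 112 * N - 64) / N ** 3]

    A, B = Fraction(0), Fraction(1)
    for c in coefficients:
        assert A + B == c                                  # coefficient of z^i is A_i + B_i
        A, B = A + 2 * B, (N * B - 2 * A - 4 * B) / N      # recurrences (10) and (11)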

We now observe that there is a strong similarity between the coefficients and powers of N in the numerators above and the coefficients and powers of sin(θ) in the multiple angle formulae for sin(nθ):

sin(1θ) = 1 sin(θ)

sin(3θ) = 3 sin(θ) - 4 sin³(θ)

sin(5θ) = 5 sin(θ) - 20 sin³(θ) + 16 sin⁵(θ)

sin(7θ) = 7 sin(θ) - 56 sin³(θ) + 112 sin⁵(θ) - 64 sin⁷(θ).

(Recall that sinⁱ(θ) stands for (sin(θ))ⁱ rather than repeated application of sin.) This similarity suggests that we express N in the form sin⁻²(θ) since, for example,

(5.N² - 20.N + 16)/N² [N := sin⁻²(θ)] = 5 - 20 sin²(θ) + 16 sin⁴(θ).

We write θ as θ_N and choose it so that sin⁻²(θ_N) = N, i.e., θ_N = arcsin(1/√N).

Rewriting N as sin⁻²(θ_N) in G_A(z) + G_B(z) and recalculating the Taylor series expansion gives

G_A(z) + G_B(z) = Σ_{i≥0} (sin((2.i + 1).θ_N)/sin(θ_N)).z^i.

Note that sin(θ_N) ≠ 0 for N ≠ 0. Since the Taylor series expansion has the form

G_A(z) + G_B(z) = Σ_{i≥0} (A_i + B_i).z^i,

we conclude that

A_i + B_i = sin((2.i + 1).θ_N)/sin(θ_N).

The probability of success P(C, N) is ‖A_C + B_C‖²/N. However, it is easy to see from Eqs. (10) and (11) that, for any positive naturals C and N, A_C + B_C is real and not complex, so that ‖A_C + B_C‖² = (A_C + B_C)². We then obtain a closed form for the probability

P(C, N) = ‖A_C + B_C‖²/N
        = (A_C + B_C)²/N
        = (sin((2.C + 1).θ_N)/sin(θ_N))²/N
        = (sin((2.C + 1).θ_N).√N)²/N
        = sin²((2.C + 1).θ_N).

Authors' address: Department of Electronics & Computer Science, University of Southampton, Southampton SO17 1BJ, United Kingdom; email: {mjb,phh}@ecs.soton.ac.uk.

MICHAEL BUTLER and PIETER HARTEL University of Southampton
