
A PROPOSED MEASURE FOR PSI-INDUCED BUNCHING OF RANDOMLY SPACED EVENTS

BY HELMUT SCHMIDT

ABSTRACT: In a psi experiment with a long series of trials, evidence for psi may be found not only in an increased number of hits but also in a nonrandom distribution of the hits over the series of trials. The author discusses one such possible nonrandom pattern, which appears as an anomalous bunching or clustering of the hits. A corresponding bunching measure is defined, and an expectation value and variance for this measure are calculated.

Consider an experimental setting in which the participant observes events that happen at random time intervals. One wants to evaluate not only the frequency of the appearing events but also a possible clustering or bunching of the events. A typical test situation is realized by a computer that (acting like a k-sided die) generates random numbers in the range from 1 to k at regular intervals and produces a signal--an event--whenever a 1 comes up. One could also have a setting in which participants try to increase or decrease the counting rate of signals from an ideal Geiger counter, or one could have a test in which the participant guesses a long sequence of Zener cards, with the hits representing the events.

The primary evaluation of such tests may be based on the total number of events observed; however, one may also look for patterns that could give additional evidence of psi. Frequently tested patterns are, for example, a decline effect, a U-curve effect (when best scoring is observed at the start and the end of a session), or an anomalously high variance. The possible pattern I discuss in this article is a bunching or clustering of the events along the time axis. To some extent, such clustering can be captured by counting the numbers of nonevents between two subsequent events and looking for the distribution of the nonevent string lengths, but then there is much arbitrariness in combining the string counts into one final measure. Also, by evaluating only the distances between next-neighbor events one may lose important information.

I propose here a new measure that evaluates the bunching in a more global manner. To explain the basic idea let us first look at a case of clustering in space rather than time.

Let a number N of marbles be randomly distributed over a linear arrangement of M slots with the restriction that there cannot be more than one marble per slot (see Figure 1).

Imagine for a moment that the marbles repel each other with an exponentially declining force so that the potential energy between a marble in slot I and a marble in slot J is proportional to

$e^{-|I-J|/R}$

where the constant R determines the range of the force.

If, then, n(1),... n(N) are the slot numbers for the N marbles, the total potential energy (Pot) of the assembly can be written as a sum of N(N-1)/2 terms corresponding to all possible marble pairs:

$\mathrm{Pot} = \sum_{1 \le I < J \le N} e^{-|n(I)-n(J)|/R}$ (1)

This expression, Pot, has its minimal value when the points are equidistant and its maximum when the points are all clustered together. One can now forget about the original picture of Pot as a potential energy and simply take the expression in Equation 1 as a measure for a possible bunching effect. If one is interested in small-scale bunching effects, it seems natural to choose an R equal to the average distance between adjacent marbles, R = M/N. However, one can also capture bunching at a larger scale by choosing R = SM/N, with a factor S > 1.
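
To make the behavior of this measure concrete, here is a minimal numeric sketch (in Python, purely illustrative; the function name pot and the slot positions are my own choices, not from the article) that evaluates Equation 1 for an equidistant and for a clustered arrangement of N = 4 marbles in M = 20 slots:

import math

def pot(slots, R):
    # Sum of exp(-|n(I) - n(J)| / R) over all pairs I < J (Equation 1)
    return sum(math.exp(-abs(a - b) / R)
               for i, a in enumerate(slots)
               for b in slots[i + 1:])

M, N = 20, 4
R = M / N                      # short-range choice, R = M/N
print(pot([1, 6, 11, 16], R))  # equidistant: about 1.42
print(pot([1, 2, 3, 4], R))    # clustered: about 4.35

As expected, the clustered arrangement yields the larger value of Pot.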

Using an exponentially declining potential as the basis for the bunching measure is somewhat arbitrary. The choice, however, is mathematically particularly simple, so that one can explicitly calculate the expectation value, the variance, and the higher moments of the distribution for Pot.

RANDOMLY TIMED EVENTS

Imagine an electronic or mechanical random number generator that, acting as a k-sided die, produces the numbers 1, 2, ..., k with equal probabilities. Let the generator be activated at a regular step rate, and let the system generate an event whenever the generator produces a 1.

Consider an experiment in which the generator is activated M times (for M steps), and assume that a total of N signals were generated, at the step numbers n(1), n(2),... n(N) with

$n(1) < n(2) < n(3) < \dots < n(N) \le M.$ (2)

If one plots the M steps as M slots along the time axis and marks all slots that hold an event, one has a picture similar to Figure 1, and one can define a measure for the bunching in time in analogy to Equation 1.

Let us pause briefly to verify that, in the absence of psi, the events generated by our random number generator (RNG) are really as randomly distributed along the time axis as the randomly distributed marbles in Figure 1.

Beginning from any step, the probability that the next event will occur after n steps is

$P(n) = \frac{p}{q} \cdot q^n$

with

$p = \frac{1}{k}, \quad q = 1 - p, \quad \text{for } n = 1, 2, \dots$ (3)

Then the probability for a particular event sequence (Equation 2) to occur can be written as

$\Pr[n(1), n(2), \dots, n(N)] = P(n(1)) \cdot P(n(2) - n(1)) \cdot P(n(3) - n(2)) \cdots P(n(N) - n(N-1)) \cdot q^{M-n(N)},$ (4)

where the last factor results from the requirement that no further events occur after the Nth event. From Equations 3 and 4 we get

$\Pr[n(1), n(2), \dots, n(N)] = p^N q^{M-N}.$ (5)

Note that this probability is independent of the particular values n(I). This means that if an experiment has produced, as a result of chance, a certain number N of events, then all possible sets of values n(1), n(2), ..., n(N) consistent with Equation 2 are equally likely; that is, the n(I) are randomly distributed over the available M slots.
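
This uniformity is easy to check by simulation. The following minimal sketch (Python, purely illustrative; the small parameters k = 4, M = 20, and N = 5 are my own choices) generates event sequences with a k-sided die, keeps those with exactly N events, and confirms that every slot is then occupied with probability close to N/M = 0.25:

import random

k, M, N = 4, 20, 5  # a 4-sided die over 20 steps, conditioning on 5 events
counts = [0] * (M + 1)
kept = 0
for _ in range(200000):
    # one run: an event occurs at each step with probability p = 1/k
    events = [s for s in range(1, M + 1) if random.randrange(k) == 0]
    if len(events) == N:  # keep only runs that produced exactly N events
        kept += 1
        for s in events:
            counts[s] += 1
print([round(c / kept, 3) for c in counts[1:]])  # each entry near N/M = 0.25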

BUNCHING IN TIME

In analogy to Equation 1, one can now define for the events at n(1), n(2), ..., n(N) a measure for their bunching in time. If one is interested in clustering over short periods, one can choose R = M/N, equal to the average delay between successive events, and for a measure of clustering over larger times one can set R = SM/N with a factor S > 1. Then we can write

$\mathrm{Pot} = \sum_{1 \le I < J \le N} r^{|n(I)-n(J)|},$ (6)

with

$r = e^{-N/M}$ for short-range bunching (7a)

or, more generally,

$r = e^{-N/(MS)}$ with $S \ge 1.$ (7b)

In assessing the significance of an experiment, one first evaluates the total number N of events with the familiar methods. Next, as an independent measure of a possible psi effect, one calculates the expectation value and the variance of Pot for the specific observed N value. Then one obtains the actual value of Pot from Equation 6 and can decide whether Pot differs significantly from its expectation value.

For the expectation value and the variance

$\overline{\mathrm{Pot}} \quad \text{and} \quad \sigma^2 = \overline{\mathrm{Pot}^2} - \left(\overline{\mathrm{Pot}}\right)^2,$

the calculation in Appendix A gives (see Equations A11, A12, A32, and A32a)

$\overline{\mathrm{Pot}} = \frac{N(N-1)}{M(M-1)} \cdot \frac{r}{1-r} \cdot \left(M - \frac{1}{1-r}\right)$ (8)

and

$4\sigma^2 = \left[\frac{N(N-1)(N-2)(N-3)}{M(M-1)(M-2)(M-3)} - \frac{N^2 (N-1)^2}{M^2 (M-1)^2}\right] S(r)^2 + 2 \cdot \frac{N(N-1)(N-2)(N-3)}{M(M-1)(M-2)(M-3)} \left[S(r^2) - 2G(r)\right] + 4 \cdot \frac{N(N-1)(N-2)}{M(M-1)(M-2)} \left[G(r) - S(r^2)\right] + 2 \cdot \frac{N(N-1)}{M(M-1)} \, S(r^2)$ (9)

with

$S(r) = \frac{2r}{1-r}\left(M - \frac{1}{1-r}\right), \qquad G(r) = \left(\frac{r}{1-r}\right)^2\left(4M + \frac{2}{1-r^2} - \frac{8}{1-r}\right).$

In the limit of large values for N and M, Equation 9 simplifies to

$\sigma^2 = \frac{r^2}{1-r^2} \cdot \frac{N}{M} \cdot \left(1 - \frac{N}{M}\right)^2 \cdot N.$ (9a)
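
As a sketch of this evaluation step (Python, illustrative; the function names are mine, not the article's), the expectation value from Equation 8 and the approximate sigma from Equation 9a can be computed directly. With N = 35 and M = 640 this should reproduce the values quoted in the example given below (about 32.16 and 3.85):

import math

def pot_expectation(N, M):
    # Equation 8, with the short-range choice r = exp(-N/M)
    r = math.exp(-N / M)
    return N * (N - 1) / (M * (M - 1)) * r / (1 - r) * (M - 1 / (1 - r))

def pot_sigma_approx(N, M):
    # simplified Equation 9a
    r = math.exp(-N / M)
    return math.sqrt(r ** 2 / (1 - r ** 2) * (N / M) * (1 - N / M) ** 2 * N)

print(pot_expectation(35, 640))   # about 32.16
print(pot_sigma_approx(35, 640))  # about 3.85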

These results can also cover the case of radioactive decay, in which the randomly spaced events are signals from a Geiger counter exposed to a weak, stable radioactive source. Assume that the Geiger counter has registered N events over a time period T, and let the arrival times of the signals be t(1), t(2), ..., t(N) with $0 < t(1) < t(2) < \dots < t(N) < T$.

Then, without psi, the t(I)s are randomly distributed over the continuous range T. One can replace the continuous range T by an interval T divided into a sufficiently large number M of slots. Then the t(I) are randomly distributed over the slots, and the probability that two events fall into the same slot becomes negligible.

Considering that in the limit of large M, with very small N/M, one has

$r = e^{-N/(MS)} \approx 1 - \frac{N}{MS},$

one can set in Equations 8 and 9a

$r \approx 1, \qquad \frac{1}{1-r} \approx \frac{MS}{N}, \qquad \frac{1}{1-r^2} = \frac{1}{(1-r)(1+r)} \approx \frac{MS}{2N},$

and one gets for the expectation value and variance of Pot in this case

$\overline{\mathrm{Pot}} = S \cdot (N - 1)$ (10)

$\sigma^2 = \frac{S \cdot N}{2}.$ (11)

Noting, finally, that the step number n(I) of an event and the corresponding time t(I) are related by

$n(I) = t(I) \cdot \frac{M}{T},$ (12)

one can write Pot in terms of the observed values t(1), t(2),... t(N):

$\mathrm{Pot} = \sum_{1 \le I < J \le N} e^{-|t(I)-t(J)|/\tau} \quad \text{with} \quad \tau = \frac{S \cdot T}{N}.$ (13)
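
A short sketch for this continuous-time case (Python, illustrative only; the uniform timestamps stand in for no-psi Geiger data): Pot from Equation 13, standardized with Equations 10 and 11:

import math, random

def z_bunching(times, T, S=1.0):
    N = len(times)
    tau = S * T / N  # range parameter from Equation 13
    pot = sum(math.exp(-abs(times[i] - times[j]) / tau)
              for i in range(N) for j in range(i + 1, N))
    expect = S * (N - 1)          # Equation 10
    sigma = math.sqrt(S * N / 2)  # Equation 11
    return (pot - expect) / sigma

times = sorted(random.uniform(0.0, 100.0) for _ in range(40))
# no-psi data: z should scatter around 0, within the approximations
# underlying Equations 10 and 11
print(z_bunching(times, T=100.0))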

AN EXAMPLE

Appendix B gives an example of a simple program that can generate random time intervals, calculate Pot, and evaluate the result. The program was written for Microsoft's QuickBASIC. (1) The core of the program should run well in other versions of BASIC; only the optional indented lines that provide visual feedback may need some adjustment. With this feedback, the normally black screen flashes in a bright color for each event.

One run uses M = 640 steps, and a small value of p = 1/16 is chosen, so that the time axis appears to the participant practically as a continuum (i.e., the finite step size is not noticeable).

Each run provides two independent z values, one determined by the deviation of the total number of events from its expectation value of 40, the other determined by the deviation of Pot from its expectation value.

One run of the program gave, for example, N = 35 events, with z = -0.816.

For these values, N = 35 and M = 640, Equation 8 gives 32.16 for the expectation value of Pot. For the standard deviation, the simplified Equation 9a, used in the program, gives a sigma of 3.85, whereas the exact Equation 9 gives σ = 3.67. The agreement between Equation 9 and Equation 9a improves with larger values of N and M. The actual value of Pot was 35.84, with z = 0.96 calculated with sigma from the simplified Equation 9a. For a reasonable experiment one would certainly need a large number of runs. Then one could evaluate each run separately, or one could string the results of all runs together and treat the experiment as one unit.

The present program uses the computer's quasi-random generator as the source of randomness. Here a true element of randomness enters through the random timing of the start of a run, which determines the seed number.

THE PRACTICAL USEFULNESS OF THE BUNCHING MEASURE

Choice of the Range Parameter S

The range R of the bunching measure Pot contains a factor M/N equal to the average distance between events and a free factor S:

$\mathrm{Pot} = \sum_{1 \le I < J \le N} e^{-|n(I)-n(J)|/R}$ (14)

with

$R = \frac{S \cdot M}{N}.$

The choice of S is, in principle, arbitrary. If one wanted to make no a priori assumptions about S in a psi test, one might begin with a pilot study, explore which choice of S produces the most significant bunching measure, and use this S value for a confirmatory study. Considering the high variability of psi tests, with their many unknown contributing factors, however, a systematic search for an optimal S value might not be feasible, and one might want to focus from the start on a psychologically plausible S value and leave the search for optimal S values for a later, secondary data analysis.

Let us take as a first example the case in which a participant listens to clicks that arrive at random time intervals at a high average rate of 10/s, so that the participant cannot pay much attention to the individual events. The participant, however, still distinguishes periods of high click activity alternating with periods of low activity. Then one can ask the participant to estimate how long these perceived periods of higher or lower activity last on average. If the estimate is, for example, 0.5 s, this equals five average event distances in this case, and one would take this as a reasonable measure for the range, that is, S = 5. The argument is that a participant who subjectively perceives fluctuations of about 0.5-s duration will subconsciously pay attention to fluctuations of this duration and possibly strengthen these fluctuations via PK.

Take as another example the case of a recently completed experiment (Schmidt, 2000) in which the participant received random signals at an average rate of only 4/min. The participant had only a vague feeling that the signals tended to come in bunches, sometimes two or three signals close together, but did not feel any structure exceeding the 15-s average spacing between signals. In this case it seemed reasonable to look at the "short-range" bunching measure with S = 1. The experiment gave a significantly positive bunching effect.

Applicability of the Normal Approximation

For simplicity I consider here only the case of short-range bunching with S = 1.

With the help of Equations 6 and 7a one can calculate the z value for each session:

$z = \frac{\mathrm{Pot} - \overline{\mathrm{Pot}}}{\sigma},$ (15)

with

$\bar{z} = 0, \qquad \overline{z^2} = 1.$ (16)

The z value from a single session need not be normally distributed, but if one evaluates an experiment with a large number, Se, of sessions with their resulting z values, z(1), z(2), ..., z(Se), and defines a total z value Z by

$Z = \frac{z(1) + z(2) + z(3) + \dots + z(Se)}{\sqrt{Se}},$ (17)

then the central-limit theorem says that Z is normally distributed, provided the number of sessions (Se) is large enough. Values of about Se = 50 may be practically sufficient to get a good normal distribution for any reasonable distribution of the z(i) (Hoel, 1966). However, if the z(i) already have an approximately normal distribution, then a smaller value of Se may be sufficient. Such an approximately normal distribution for each z(i) can be expected if each session contains a large average number of events. Indeed, one can prove that for very large N values the sum Pot in Equation 14 already approaches a normal distribution. To understand the basic reasoning, remember that, because of the exponential decline with the distance |I - J|, only those pairs (I, J) that are close together contribute significantly to the sum in Equation 14. If one thus cuts a very long sequence of N events into subsequences that are still fairly long, then the main contribution to Equation 14 comes from pairs within the same subsequence, and Pot appears essentially as the sum of statistically independent contributions from the subsequences. And if there is a sufficiently large number of subsequences, then one can use the central-limit theorem to conclude that Pot is normally distributed. I omit here some finer details of the argument and rather illustrate the matter with an example:

I created Figure 2 by simulating (with the help of a true random number generator) a total of 1.28 million runs with values of M = 256 and N = 32, such that in each run there were 32 events randomly distributed over 256 equal time slots (compare Figure 1). For each run, the value of Pot was calculated, and a corresponding z value was derived, using the exact theoretical values for the expectation value and variance supplied by Equations 8 and 9. Figure 2 plots, next to the curve for the normal distribution, the distribution of the z values from the individual single runs. A third curve, which comes quite close to the normal distribution (except perhaps in the outer wings), is obtained by always grouping 16 runs into one experiment and plotting the z values of these experiments.

An actual study, with reasonable chances for significant effects, will generally use many more events than the 32 x 16 = 512 events of these 16-run experiments. Then one would come still closer to a normal distribution, and one would have to be cautious only in the unlikely case of extremely small p values in the far-out wings of the distribution. This example is sufficiently close (although not identical) to the case in which the duration of a run is divided into 256 slots and an event can happen at each slot with probability 1/8, leading to an average number of 32 events per run.
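
The grouping step can be imitated on a smaller scale with the following sketch (Python, illustrative; far fewer runs than the 1.28 million of Figure 2, and the empirical mean and standard deviation replace the exact values from Equations 8 and 9 that were used there). It compares the skewness of single-run z values with that of 16-run experiments:

import math, random

M, N = 256, 32
r = math.exp(-N / M)
runs = 20000
pots = []
for _ in range(runs):
    slots = sorted(random.sample(range(1, M + 1), N))  # N events in M slots
    pots.append(sum(r ** (slots[j] - slots[i])
                    for i in range(N) for j in range(i + 1, N)))
mean = sum(pots) / runs
sd = math.sqrt(sum((x - mean) ** 2 for x in pots) / (runs - 1))
zs = [(x - mean) / sd for x in pots]

def skew(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * v ** 1.5)

Z = [sum(zs[i:i + 16]) / math.sqrt(16) for i in range(0, runs, 16)]
print(skew(zs))  # single runs: noticeably nonzero skewness
print(skew(Z))   # grouped experiments: closer to 0, i.e., more nearly normal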

CONCLUSION

I have introduced a new measure for a possible psi-induced bunching or clustering of random events in time or space. I selected this plausible measure for its mathematical simplicity, which allows for an explicit calculation of the theoretical expectation value and variance. The measure still has an open range parameter, reflecting the size of the clusters to be detected. However, as shown in two examples, psychological considerations may lead one to a reasonable choice of the range parameter. The practical value of the new measure as an independent indicator of psi will be determined by future experiments (or perhaps by the re-examination of past experiments). One completed experiment that produced a significant clustering effect is reported in a separate article (Schmidt, 2000).

(1.) QuickBASIC (also called QBASIC) comes with Windows 3.1 and Windows 95/98. With Windows 95/98, however, the program file QBASIC.EXE and a corresponding help file are not automatically installed. They can be found on the distribution disk in the folder "other\oldmsdos\" and must be transferred to the hard disk before use.

REFERENCES

HOEL, P. G. (1966). Introduction to mathematical statistics. New York: Wiley.

SCHMIDT, H. (2000). PK tests in a pre-sleep state. Journal of Parapsychology, 64, 317-331.

APPENDIX A

Calculation of the Expectation Value

Remember that we have N events distributed over M steps and that we defined

$\mathrm{Pot} = \sum_{1 \le I < J \le N} r^{|n(I)-n(J)|}$ (A1)

with

$r = e^{-N/M}$ for short-range bunching (A2)

and, more generally,

$r = e^{-N/(MS)}$ with $S \ge 1.$ (A2b)

For convenience we will no longer assume that the n(I) are time ordered but require only that

$n(1), n(2), n(3), \dots, n(N)$ are all different, and $0 < n(I) \le M.$ (A3)

Then one can in the following consider, for example, n(1), n(2), n(3), and n(4) as "typical" events that can happen anywhere in the whole interval, restricted only by the requirement A3.

Assuming that the events are randomly distributed (with no two events at the same location), the expectation value of Pot in Equation A1 results from N(N - 1)/2 terms which, on average, contribute equally; that is, one can write

$\overline{\mathrm{Pot}} = \frac{N(N-1)}{2} \cdot \overline{r^{|n(1)-n(2)|}}.$ (A4)

Here n(1) and n(2) are the positions of two events that can occur with equal probabilities at all locations 1, 2, ..., M, with the restriction that n(1) and n(2) are different.

Because there are M(M - 1) equally probable ways to distribute the two events at different locations, one has

$\overline{\mathrm{Pot}} = \frac{N(N-1)}{2M(M-1)} \cdot S(r)$ (A5)

with

$S(r) = \sum_{\substack{i,j=1,\dots,M\\ i \ne j}} r^{|i-j|}.$

One can rewrite S(r) as

$S(r) = 2 \sum_{\substack{i,j=1,\dots,M\\ i < j}} r^{|i-j|} = 2 \sum_{j=1}^{M-1} \sum_{k=1}^{M-j} r^k.$ (A6)

Note that the summation indices I and J in Equation A1 ran from 1 to N, whereas the indices i and j in Equation A6 run from 1 to M.

Using the summation formula

$x + x^2 + x^3 + \dots + x^L = x \cdot \frac{1 - x^L}{1 - x},$ (A7)

one can evaluate the last sum in Equation A6 and obtain

$S(r) = 2 \sum_{j=1}^{M-1} \frac{r}{1-r} \left(1 - r^{M-j}\right).$ (A8)

Using Equation A7 again, one can carry out the remaining summation and find

$S(r) = \frac{2r}{1-r} \left[M - 1 - \frac{r}{1-r} \left(1 - r^{M-1}\right)\right].$ (A9)

For the following I assume that the range R of the potential is much smaller than the total number M of steps (so that the potential between two events at Step 1 and Step M is negligible). That means

$r^M \approx 0.$ (A10)

Then one obtains with Equations A5 and A9,

$S(r) = \sum_{\substack{i,j=1,\dots,M\\ i \ne j}} r^{|i-j|} = \frac{2r}{1-r} \left(M - \frac{1}{1-r}\right)$ (A11)

$\overline{\mathrm{Pot}} = \frac{N(N-1)}{2M(M-1)} \cdot S(r) = \frac{N(N-1)}{M(M-1)} \cdot \frac{r}{1-r} \cdot \left(M - \frac{1}{1-r}\right).$ (A12)
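
These closed forms are easy to check numerically. A small sketch (Python, illustrative; M = 200 and r = e^{-0.1} are arbitrary choices, picked so that r^M is about e^{-20} and thus negligible) compares Equation A11 with the defining double sum:

import math

M = 200
r = math.exp(-0.1)  # any r with r ** M close to 0
direct = sum(r ** abs(i - j)
             for i in range(1, M + 1) for j in range(1, M + 1) if i != j)
closed = 2 * r / (1 - r) * (M - 1 / (1 - r))
print(direct, closed)  # agree to high accuracy

The same check with r replaced by r^2 verifies the form of S(r^2) used below in Equation A22.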

Calculation of the Variance

To calculate the variance of Pot,

$\sigma^2 = \overline{\left(\mathrm{Pot} - \overline{\mathrm{Pot}}\right)^2} = \overline{\mathrm{Pot}^2} - \left(\overline{\mathrm{Pot}}\right)^2,$ (A13)

one still has to evaluate (compare Equation A1)

$\overline{\mathrm{Pot}^2} = \frac{1}{4} \sum_{\substack{I,J=1,\dots,N\\ I \ne J}} \; \sum_{\substack{K,L=1,\dots,N\\ K \ne L}} \overline{r^{|n(I)-n(J)|} \cdot r^{|n(K)-n(L)|}}.$ (A14)

The $[N(N-1)]^2$ terms in this sum can be grouped into three classes:

1. Class A: I, J, K, and L are all different from each other. There are N(N - 1)(N - 2)(N - 3) such terms.

2. Class B: Three of I, J, K, and L are different, with I = K or I = L or J = K or J = L. There are 4N(N - 1)(N - 2) such terms.

3. Class C: Two of I, J, K, and L are different, with I = K and J = L, or I = L and J = K. There are 2N(N - 1) such terms.

The average contribution of each term can be written for Class A as

$S_A = \overline{r^{|n(1)-n(2)|} \cdot r^{|n(3)-n(4)|}} = \frac{1}{M(M-1)(M-2)(M-3)} \cdot T_A,$ (A15)

with

$T_A = \sum_{\substack{i,j,k,l=1,\dots,M\\ \text{all different}}} r^{|i-j|} \cdot r^{|k-l|}.$ (A16)

In the last sum i, j, k, and l are all different from each other because n(1), n(2), n(3), and n(4) are all different. The factor M(M - 1)(M - 2)(M - 3) gives the number of ways in which the n(1), n(2), n(3), and n(4) can be placed among the M possible positions.

Similarly, one finds for the other classes:

$S_B = \overline{r^{|n(1)-n(2)|} \cdot r^{|n(1)-n(3)|}} = \frac{1}{M(M-1)(M-2)} \cdot T_B,$ (A17)

with

$T_B = \sum_{\substack{i,j,k=1,\dots,M\\ \text{all different}}} r^{|i-j|} \cdot r^{|i-k|},$ (A18)

and

$S_C = \overline{r^{2|n(1)-n(2)|}} = \frac{1}{M(M-1)} \cdot T_C$ (A19)

with

$T_C = \sum_{\substack{i,j=1,\dots,M\\ i \ne j}} r^{2|i-j|}.$ (A20)

Then one can write

$4 \cdot \overline{\mathrm{Pot}^2} = \frac{N(N-1)(N-2)(N-3)}{M(M-1)(M-2)(M-3)} \, T_A + 4 \cdot \frac{N(N-1)(N-2)}{M(M-1)(M-2)} \, T_B + 2 \cdot \frac{N(N-1)}{M(M-1)} \, T_C.$ (A21)

Next one calculates the sums $T_C$, $T_B$, and $T_A$. Comparing Equation A20 with Equation A11, one sees that

$T_C = S(r^2) = \frac{2r^2}{1-r^2} \left(M - \frac{1}{1-r^2}\right).$ (A22)

To calculate the sum for [T.sub.B] in Equation A18, in which i, j, and k are all different, one first calculates the larger sum, in which only (i,j) are different and (i, k) are different and then subtracts the terms with j = k.

$T_B = \sum_{\substack{i,j,k=1,\dots,M\\ i \ne j,\; i \ne k}} r^{|i-j|} \cdot r^{|i-k|} - \sum_{\substack{i,j=1,\dots,M\\ i \ne j}} r^{2|i-j|} = T_{B1} - T_{B2}.$ (A23)

Considering the first sum, one carries out first the summations over j and k (for fixed i) and then performs the summation over i. This gives

$T_{B1} = \sum_{i=1}^{M} F(i)^2$ (A24)

with

$F(i) = \sum_{\substack{j=1,\dots,M\\ j \ne i}} r^{|i-j|}.$

Using Equation A7, one gets

$F(i) = \frac{r}{1-r} \left(2 - r^{i-1} - r^{M-i}\right),$ (A25)

and

$T_{B1} = \sum_{\substack{i,j,k=1,\dots,M\\ i \ne j,\; i \ne k}} r^{|i-j|} \cdot r^{|i-k|} = G(r),$ (A26)

with

$G(r) = \left(\frac{r}{1-r}\right)^2 \left(4M + \frac{2}{1-r^2} - \frac{8}{1-r}\right).$ (A27)
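
Equation A27 can be verified the same way. A sketch (Python, illustrative, with the same arbitrary M and r as in the earlier check) compares the closed form G(r) with the direct sum $T_{B1} = \sum_i F(i)^2$:

import math

M = 200
r = math.exp(-0.1)

def F(i):
    # F(i) from Equation A24: sum over j != i of r ** |i - j|
    return sum(r ** abs(i - j) for j in range(1, M + 1) if j != i)

t_b1 = sum(F(i) ** 2 for i in range(1, M + 1))
g = (r / (1 - r)) ** 2 * (4 * M + 2 / (1 - r ** 2) - 8 / (1 - r))
print(t_b1, g)  # agree when r ** M is negligible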

With the second sum in Equation A23 already known, one can write

$T_B = G(r) - S(r^2).$ (A28)

For calculating [T.sub.A] one uses a similar method. Consider the sum (Su) with more terms than [T.sub.A]:

$Su = \sum_{\substack{i,j,k,l=1,\dots,M\\ i \ne j,\; k \ne l}} r^{|i-j|} \cdot r^{|k-l|} = \left(\sum_{\substack{i,j=1,\dots,M\\ i \ne j}} r^{|i-j|}\right)^2 = S(r)^2.$ (A29)

One can split the original sum, Su, into three parts:

$Su = \sum_{\substack{i,j,k,l=1,\dots,M\\ \text{all different}}} r^{|i-j|} \cdot r^{|k-l|} + 4 \sum_{\substack{i,j,k=1,\dots,M\\ \text{all different}}} r^{|i-j|} \cdot r^{|i-k|} + 2 \sum_{\substack{i,j=1,\dots,M\\ i \ne j}} r^{2|i-j|}.$ (A30)

Here the factor 4 results from the four possibilities i = k, i = l, j = k, or j = l, and the factor 2 results from the two possibilities i = k and j = l, or i = l and j = k.

In Equation A30, Su is known from Equation A29, the first sum is $T_A$, the second sum equals $T_B$, and the last sum equals $S(r^2)$. Thus we can isolate

$T_A = S(r)^2 + 2S(r^2) - 4G(r).$ (A31)

Entering the values of [T.sub.A], [T.sub.B], and [T.sub.C] into Equation A21 one obtains, with Equations A12 and A13,

$4\sigma^2 = \left[\frac{N(N-1)(N-2)(N-3)}{M(M-1)(M-2)(M-3)} - \frac{N^2 (N-1)^2}{M^2 (M-1)^2}\right] S(r)^2 + 2 \cdot \frac{N(N-1)(N-2)(N-3)}{M(M-1)(M-2)(M-3)} \left[S(r^2) - 2G(r)\right] + 4 \cdot \frac{N(N-1)(N-2)}{M(M-1)(M-2)} \left[G(r) - S(r^2)\right] + 2 \cdot \frac{N(N-1)}{M(M-1)} \, S(r^2)$ (A32)

with

$S(r) = \frac{2r}{1-r}\left(M - \frac{1}{1-r}\right), \qquad G(r) = \left(\frac{r}{1-r}\right)^2\left(4M + \frac{2}{1-r^2} - \frac{8}{1-r}\right).$
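
For small M and N, Equations A12 and A32 can be checked by enumerating all C(M, N) equally likely placements. In the sketch below (Python, illustrative), r is set to a fixed small value rather than to $e^{-N/M}$, because $r^M = e^{-N}$ would otherwise require a large N to satisfy A10, and a large N makes enumeration infeasible:

import math
from itertools import combinations

M, N, r = 30, 4, 0.3  # small enough to enumerate; r ** M is about 2e-16

pots = [sum(r ** (c[j] - c[i]) for i in range(N) for j in range(i + 1, N))
        for c in combinations(range(1, M + 1), N)]
mean = sum(pots) / len(pots)
var = sum((x - mean) ** 2 for x in pots) / len(pots)

def S(x):  # Equation A11
    return 2 * x / (1 - x) * (M - 1 / (1 - x))

G = (r / (1 - r)) ** 2 * (4 * M + 2 / (1 - r ** 2) - 8 / (1 - r))  # A27
f2 = N * (N - 1) / (M * (M - 1))
f3 = f2 * (N - 2) / (M - 2)
f4 = f3 * (N - 3) / (M - 3)
mean_a12 = f2 / 2 * S(r)                      # Equation A12
var_a32 = ((f4 - f2 ** 2) * S(r) ** 2
           + 2 * f4 * (S(r ** 2) - 2 * G)
           + 4 * f3 * (G - S(r ** 2))
           + 2 * f2 * S(r ** 2)) / 4          # Equation A32
print(mean, mean_a12)  # agree almost exactly
print(var, var_a32)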

In the limit of large values for N and M one can set

$G(r) \approx 4M \left(\frac{r}{1-r}\right)^2, \qquad S(r^2) \approx 2M \cdot \frac{r^2}{1-r^2}, \qquad S(r)^2 \approx M^2 \left(\frac{2r}{1-r}\right)^2,$

and one can replace (N-1), (N-2), and (N-3) by N and do the same for M, except in the first line of Equation A32. Approximating the expression in the bracket by the highest term in N and M, one has

$[\dots] \approx -4 \cdot \frac{N^3}{M^5} \cdot (M - N).$

This gives, for large values of N and M,

$\sigma^2 \approx \frac{r^2}{1-r^2} \cdot \frac{N}{M} \cdot \left(1 - \frac{N}{M}\right)^2 \cdot N$ (A32a)

APPENDIX B

Quick Basic Demonstration Program

The indented program lines provide the optional visual feedback, making the whole screen flash in a bright color for each event.

SCREEN 1: COLOR 0, 2: ' black background
RANDOMIZE TIMER
CLS
M = 640
p = 1 / 16: ' We use a "16-sided die"
DIM n(M) AS INTEGER: ' Events located at n(1),...n(N)
N = 0: ' Event counter
FOR Stp = 1 TO M
    COLOR 0, 1: ' Make screen black
IF INT(16 * RND) = 0 THEN
' We use for this demonstration the computer's quasi-RNG,
' but for PK tests the use of a true RNG seems preferable
N = N + 1
n(N) = Stp: ' The events happen at slots n(1), n(2), ... n(N)
    COLOR 2, 1: ' Make screen bright green
END IF
    Del = 1000: ' Adjust this number for desired speed
    FOR KK = 1 TO Del: a = EXP(5): NEXT KK: ' Time delay
NEXT Stp
    COLOR 0, 1: ' Turn screen black
Dev = N - M * p: ' Deviation of N from chance
sig = SQR(M * p * (1 - p))
PRINT "Number of slots M = "; M
PRINT "Number of events N = "; N; " Deviation from chance Dev = "; Dev
PRINT "sigma = "; sig; " z = "; Dev / sig
PRINT : ' Now we calculate the short-range bunching measure Pot
r = EXP(-N / M)
Pot = 0
FOR J = 2 TO N: FOR I = 1 TO J - 1
Pot = Pot + EXP(-(n(J) - n(I)) * N / M)
NEXT I: NEXT J
PRINT "The observed bunching measure is Pot = "; Pot
Pot0 = N * (N - 1) * r * (M - 1 / (1 - r)) / (M * (M - 1) * (1 - r)): ' from Equation 8
PRINT "The expectation value of Pot is "; Pot0
' Next calculate sigma from the simplified Equation 9a
Upper = r * r * N * N * (1 - N / M) * (1 - N / M)
Lower = M * (1 - r * r)
sig = SQR(Upper / Lower)
Dev = Pot - Pot0
PRINT "The deviation of Pot from chance is "; Dev
PRINT "The standard deviation is "; sig; " which gives z = "; Dev / sig