


ABSTRACT: Two series are reported in which subjects could use psi to continue playing computer solitaire and possibly win a $100 prize. Subjects were told nothing about using psi for this purpose. After they recorded their scores for each set of four games, a pseudo-random algorithm activated by a keyboard button press determined if they could continue. The principal psi measure was the number of sets of solitaire completed. ESP scores in Series I for 48 subjects were nonsignificant, and the only significant post-hoc finding was a positive correlation between the number of sets completed and a question asking how much the subject desired to get a high score. Several procedural modifications were introduced in Series II, most notably use of a hardware RNG, which generated either 20 or 200 numbers. The average number of sets completed by the 50 subjects was once again nonsignificant, but the mean overall standard score based on the RNG output was suggestively above chance (p = .057) and significantly above chance (p = .015) for the 20-trial condition. The squared standard scores correlated positively with the addiction scale of the Solitaire Questionnaire (p = .010), confirming the PMIR hypothesis that the most highly motivated subjects should demonstrate the most psi.

Numerous psi experiments have demonstrated that psi can occur without the intention of the ostensible psi source (e.g., Berger, 1988; Weiner & Zingrone, 1986; West & Fisk, 1953), who may not even be told that psi is involved in the experiment (e.g., Nelson, Bradish, Dobyns, Dunne, & Jahn, 1996; Stanford, Zenhausern, Taylor, & Dwyer, 1976). In theory such a situation might be advantageous, because subjects' doubts about their ability to succeed, which may precipitate either chance scoring or psi-missing, will not arise. In his psi-mediated instrumental response (PMIR) model, Stanford (1977) postulates that subjects must be supplied with some sort of motivation if they are to apply their psi abilities. As the desire to demonstrate psi is not engaged in a covert psi task, some sort of extrinsic motivation must be supplied in such situations.

Investigators in recent years have sometimes turned to computer games to aid in motivating subjects in ordinary psi experiments (e.g., Broughton & Perlstrom, 1986, 1992), and there is no reason why such games cannot be used in covert psi experiments as well. A computer game that I thought would have particular potential in this regard is computer solitaire. First, the game is supplied free of charge with the widely used Microsoft Windows software, so many potential subjects will already be familiar with it. Second, as I can attest personally, for some people the game can be quite addictive, in the sense that once one starts playing, one wants to continue. This suggests an experimental design in which subjects can continue to play solitaire by using psi in a positive manner. Also, because solitaire is a game that can be scored, another sort of motivation can be supplied by offering a prize to the person who achieves the highest score in the experiment. Because subjects' scores cumulate across the number of games played, using psi to play more games also increases the chance of winning the prize.

Because the solitaire game software cannot be accessed by the user, it was necessary to build the psi component into a separate program that was interfaced with computer solitaire. Subjects began by playing a set of four solitaire games. Whether they could move on to another set could be influenced by psi in the following manner. Unknown to the subject, a button press called for by the companion program to change a screen also activated the computer's pseudo-random algorithm so that it generated a number from 1 to 4. This number defined a "key game," and continuation in the experiment depended on subjects' getting a relatively high score in the key game. Thus, subjects could use psi to designate one of the games on which they would obtain a high score as the key game, thereby allowing themselves to continue playing. The experiment was designed so that the more sets a subject played, the more stringent the criterion for continuing.

Because the key game was selected before any games were played, subjects had to use precognition to foresee which game was going to get the highest score, so as to know which number to try for. Selecting the best number also required precognition, because the pseudo-random algorithm could not be influenced by PK. Instead, the subject had to select a favorable seed number for the algorithm. As the seed number was based on the number of milliseconds between the time the program was initiated and the pressing of the key screen-change button, the timing of this button press determined selection of the key game. Many readers will recognize that this process is the one postulated by May's decision augmentation theory, or DAT (May, Utts, & Spottiswoode, 1995). Thus, a successful outcome in the experiment would provide support for DAT. Although the psi task is complex, in that it requires two successful applications of precognition, psi has been shown in the past to be capable of such feats, in that it seems to be "goal-oriented" (Foster, 1940; Schmidt, 1974, 1975).
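The seed-timing mechanism can be sketched as follows. This is a minimal illustration rather than the actual ESPitaire code: the linear congruential generator constants and the mapping to a game number are stand-ins, since Quick Basic's internal algorithm is not reproduced here.

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """One step of a generic linear congruential generator
    (illustrative constants, not Quick Basic's actual algorithm)."""
    return (a * seed + c) % m

def key_game(elapsed_ms):
    """Map the elapsed milliseconds at the moment of the button press
    (the seed) to a key-game number from 1 to 4."""
    return lcg(elapsed_ms) % 4 + 1

# Pressing the button one millisecond earlier or later can yield a
# different key game -- the only lever a DAT-style process has here.
```

The point of the sketch is that the key game is completely determined by the press timing, so precognitively well-timed presses are the sole route to success with a pseudo-random source.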

It was decided to employ individual difference measures to define those individuals who would most likely use psi positively in the experimental situation. One such predictor was confidence of success, which was operationalized in a variety of ways. A scale developed by Garant, Charest, Alain, and Thomassin (1995) [2] provided a suitable measure of self-confidence as a generalized trait. As the items on this scale are obvious as to meaning, it was also decided to use a less transparent measure of self-confidence. At the suggestion of a colleague, Dr. James Carpenter, I explored Zuckerman's (1979) Sensation Seeking Scale for this purpose. I found that the Thrill and Adventure Seeking (TAS) subscale, which asks primarily about willingness to engage in potentially dangerous physical activities such as scuba diving, had the greatest potential as an indirect measure of self-confidence, so I included it as well. Next was a commonly used scale in parapsychology, the Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), which measures subjects' belief in various types of psi as well as frequency of various kinds of psi experiences. As subjects in the first series most likely assumed they were participating in a psi experiment, even though they did not know how psi would be engaged, belief that such processes are possible (resulting in part from having psi experiences) should make subjects more confident that they could use their psi in the experiment. Psi in everyday life may manifest as luck, and this construct is particularly relevant to a game like solitaire, which has a large chance component. Thus, a short Luckiness Questionnaire was included in the battery (Smith, Wiseman, Machin, Harris, & Joiner, 1997).

Next, a Solitaire Questionnaire developed by the author asked subjects how confident they were that they would get a high score in the experiment, as well as win the prize. It also included questions relevant to subjects' motivation. Motivation to play solitaire was assessed by asking subjects how frequently they had played the game in the past month and past year, as well as how "addictive" they found the game. Other questions asked how much they wanted to get a high score in the experiment and to win the prize.

Finally, a modified form of the Spielberger Trait Anxiety Scale (Martens, 1977; Spielberger, 1973) was included. Scores on this scale correlated significantly with scores in a PK computer dice game conducted under competitive conditions in each of three experiments. Subjects with the highest scores on anxiety obtained the lowest PK scores (Broughton & Perlstrom, 1986, 1992). As the present experiment also involves a computer game in which subjects are competing for the highest score, Broughton's experiments appear relevant. On the other hand, there are enough differences in the two experiments so that the present research cannot be considered an attempted replication of Broughton's.




The subject sample consisted primarily of Duke University students or staff solicited by advertisements in the campus newspaper. The advertisements included reference to the opportunity to win a $100 prize. Other subjects, who were also told about the prize, were participants in previous psi experiments at the Institute for Parapsychology. I decided prior to data collection that I would test as many subjects as I could before the last day of classes at Duke for the 1997 spring semester. This number proved to be 48.

Apparatus and Software

The experiment was conducted using an IBM microcomputer and Microsoft Windows version 3.11. The solitaire game accompanying this version of Windows was used without modification. A complete description of the game, including scoring, can be found in the Appendix.

The solitaire game was interfaced with a program called ESPitaire written by the author. Subjects could alternate back and forth between the two programs by changing screens. At the beginning of the experiment, ESPitaire welcomed the subject and displayed the previous high score in the experiment. Then the subject was asked to press a function key that replaced the welcoming screen with some instructions. Unknown to the subject, the pressing of this key also led to the generation of a pseudo-random number from 1 to 4. The process used the algorithm accompanying Quick Basic, and the seed was the number of milliseconds elapsed between the time the program was activated (before the subject arrived) and the moment the subject pressed the function key. The instructions told the subject to move to the solitaire screen and play a "set" of four consecutive games. After each game, subjects wrote down on a pad their total score and bonus points (if applicable). The subject then reported these scores to the experimenter, who also wrote them down. At the end of the fourth game, the subject moved back to the ESPitaire screen and punched the scores into the computer, observed by the experimenter for accuracy. The computer then indicated on the screen whether the subject could play an additional set of four games.

The purpose of the pseudo-random number was to define one of the four games as the "key game" for that set. Subjects' scores on the key game determined whether they could play another set. After each of the first two sets, they could play an additional set if they received one of their three highest scores on the key game. After sets three and four, they could continue if they received one of their two highest scores in the key game. For subsequent sets, they needed to get their highest score in the key game to continue. The odds for advancing to each set are presented in Table 1. If there was a tie (which rarely happened), either member of the tie could count as a high score.
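These continuation rules imply a chance expectation for the number of sets completed, which can be computed directly. The sketch below assumes the top-3, top-2, and top-1 criteria exactly as stated above and uses exact rational arithmetic:

```python
from fractions import Fraction

def p_continue(set_number):
    """Chance probability of advancing past the given set."""
    if set_number <= 2:
        return Fraction(3, 4)   # key game among the 3 highest of 4 scores
    if set_number <= 4:
        return Fraction(1, 2)   # key game among the 2 highest
    return Fraction(1, 4)       # key game must be the single highest

def expected_sets(max_sets=200):
    """Expected number of sets completed under the null (no psi)."""
    total = Fraction(0)
    p_reach = Fraction(1)       # probability of reaching (playing) set s
    for s in range(1, max_sets + 1):
        total += p_reach
        p_reach *= p_continue(s)
    return total
```

The computation yields 2.78125, agreeing with the theoretical mean of 2.781 cited in the Results.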

When the computer indicated no more games could be played, it also displayed the subject's cumulative score. If it was the highest score so far in the experiment, this fact was also displayed on the screen, with congratulations. If subjects could play another set, they returned to the welcoming screen and again pressed the function key, which led to the selection of a new key game.


Subjects were asked to complete the following questionnaires: The TAS subscale of the Zuckerman Sensation Seeking Scale (Form V), the Garant Self-Confidence Scale, the Australian Sheep-Goat Scale, the Wiseman Luckiness Questionnaire, the Solitaire Questionnaire, and the Spielberger State Anxiety Scale, in that order.


Subjects were given the option of coming for a preliminary session in which they could practice playing the computer solitaire game for one-half hour. Subjects unfamiliar with the game were given a brief tutorial by the author before the practice. Experienced players generally did not volunteer for this session.

The second, or main, session was generally held on a different day than the preliminary session. The subject was greeted by the experimenter, offered coffee, a soft drink, or water, and accompanied to the test room. The subject sat in a straight-back chair at a table on which was placed the computer screen, but which also allowed sufficient space for writing. The experimenter sat in back of the subject, so that he could look over the subject's shoulder onto the computer screen. A brief explanation of the procedure by the experimenter included no mention of psi, so it was left to the subject's imagination what role psi would play in the session; no subjects asked about it. Following the explanation, the subject signed a consent form and completed the questionnaires listed above. The subject was then told to place the questionnaires off to the side, so the impression could not be left that the experimenter was reading them over while the subject played the game. Then the subject completed the number of solitaire games allowed by the computer. After the computer indicated that no more sets could be played, the experimenter explained the details of the study to the subject, but asked that these details not be revealed to anyone who might be a subject in the future. The subject was then given the opportunity to ask any additional questions about the study and thanked for participating.


Randomness of Targets

The Quick Basic pseudo-random number algorithm was tested by converting the output to numbers from 1 to 4, and testing 10,000 of these numbers for the equiprobability of occurrence at the singlet and doublet levels. The results were well within chance boundaries at both levels. Moreover, the number of times each target alternative appeared in the sample of 139 targets actually generated for the experiment did not depart significantly from chance expectation, χ²(3, N = 48) = 4.63, p = .403.
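For readers who wish to replicate the singlet test, the chi-square computation can be sketched as follows. Python's generator stands in for the Quick Basic algorithm that was actually tested; the singlet statistic is compared against the df = 3 critical value.

```python
import random

def chi_square_uniform(observed):
    """Pearson chi-square statistic against an equiprobable expectation."""
    n = sum(observed)
    expected = n / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Singlet test: tally 10,000 generated targets (1-4).
rng = random.Random(1)
counts = [0, 0, 0, 0]
for _ in range(10_000):
    counts[rng.randrange(4)] += 1

stat = chi_square_uniform(counts)
# With df = 3, the .05 critical value is 7.81; a well-behaved generator
# should fall below it in the large majority of runs.
```

The doublet test works the same way, except that the 16 ordered pairs of successive targets are tallied and compared against an expectation of N/16 each (df = 15).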

Overall Effects

There were two dependent variables. The primary dependent variable was the number of sets of four games completed by each subject. The expected null value was obtained by a Monte Carlo simulation in which the computer generated 1,000 bogus experiments of 48 sessions each, utilizing the same software used in the actual experiment and sampling from a population of key game designations (1-4) consisting of the 139 real targets. The mean of the 1,000 bogus means was found to be 2.778. (This is close to the theoretical mean of 2.781 as illustrated in Table 2. [3]) The actual mean number of sets completed by the 48 subjects was 2.896. The two-tailed statistical significance of this value was obtained by summing the number of bogus values to the outside of ±2.896 on the Monte-Carlo distribution curve. The result was nonsignificant, p = .575.
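The Monte-Carlo logic can be sketched as follows. This is an illustrative reimplementation, not the original software: it draws fresh random key games rather than sampling from the 139 real targets, assumes chance continuation probabilities of 3/4, 1/2, and 1/4 per the continuation rules, and caps sessions at 50 sets.

```python
import random
from statistics import mean

def simulate_session(rng, max_sets=50):
    """Sets completed by a chance (no-psi) subject."""
    top_k = {1: 3, 2: 3, 3: 2, 4: 2}     # rank the key game must reach
    sets_played = 0
    while sets_played < max_sets:
        sets_played += 1
        k = top_k.get(sets_played, 1)
        if rng.random() >= k / 4:         # chance of continuing is k/4
            break
    return sets_played

def monte_carlo_null(n_experiments=1000, n_subjects=48, seed=0):
    """Means of bogus 48-session experiments, mirroring the simulation."""
    rng = random.Random(seed)
    return [mean(simulate_session(rng) for _ in range(n_subjects))
            for _ in range(n_experiments)]

means = monte_carlo_null()
# The grand mean of the bogus means should sit near 2.78, and the
# two-tailed p of an observed mean is the fraction of bogus means
# farther from the null value than the observed one.
```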

The secondary dependent variable consisted of standard scores based upon the log-transformed solitaire scores subjects obtained in each set of four games, including bonus points where applicable. The mean of the four scores was subtracted from the score of the key game, and this difference was divided by the unbiased standard deviation of the four games. If a subject played more than one set, the individual set scores were averaged. These scores are influenced somewhat by subjects' skill in playing solitaire, particularly with respect to their variance. Skilled players obtained some very high scores, which resulted in their scores having higher variance than those of less skilled players. The expected null value of these standard scores was determined by using the same Monte Carlo simulation described above, except that subjects' standard scores were substituted for the number of sets they played. The null value was determined to be -.031, whereas the mean of subjects' composite standard scores in the actual experiment was +.176. This value was not significant by the method described above for set totals, p = .397.
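A sketch of the per-set standard-score computation follows. The log(score + 1) offset is an assumption introduced here to handle zero scores, as the paper does not specify the exact transform; the n − 1 sample standard deviation corresponds to the "unbiased" standard deviation described above.

```python
import math
from statistics import mean, stdev

def set_standard_score(scores, key_game):
    """Standard score of the key game within its set of four games,
    computed on log-transformed scores. The +1 offset is an assumed
    guard against zero scores."""
    logs = [math.log(s + 1) for s in scores]
    return (logs[key_game - 1] - mean(logs)) / stdev(logs)
```

By construction the four possible standard scores within a set sum to zero, so a positive mean across subjects indicates that key games tended to fall on the higher-scoring games.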

The Spearman correlation between the set scores and standard scores was +.670.

Correlational Effects

The Solitaire Questionnaire was factor analyzed using the standard cutoff of eigenvalue = 1 and an orthogonal varimax rotation. Two factors emerged: The first consisted of the questions asking how many games the subject had played in the previous month, how many they had played the previous year, and how "addictive" they found solitaire. Subjects' responses to these three items were added together to form an "addiction" scale. The second factor also consisted of three items: how confident they were about winning the prize, how confident they were about getting a high score, and how much they wanted to win the prize. Scores on these three items were added together to form a "confidence" scale. The one remaining item, how much subjects wanted to get a high score, was treated separately.

There were several significant (p < .05, two-tailed) Spearman correlations among the predictor variables. The solitaire confidence scale was positively correlated with both the Garant Self-Confidence Scale, r_s(45) = .365, p = .012, and the Zuckerman Sensation Seeking Scale, r_s(45) = .296, p = .043, but, contrary to expectation, the latter two scales were uncorrelated with each other, r_s(46) = -.024. Wanting to get a high score in solitaire was negatively correlated with Wiseman's Luckiness Questionnaire, r_s(45) = -.302, p = .039.

By plan, Spearman correlations were used to assess relationships between the questionnaire scores, including the three components of the solitaire questionnaire, and scores on the two ESP dependent variables. The only significant result was a positive correlation between the number of sets played and how much subjects said they wanted to get a high solitaire score, r_s(46) = .335, p = .020. In other words, subjects tended to get what they wanted.


The results of Series I were generally disappointing, but during the course of this series several ways of improving the methodology became apparent. It was thus decided to incorporate these changes in a second series, described below.



A fixed sample size of 50 was set in advance for this series. Subjects were recruited in the same manner as for Series I. A second $100 prize was offered to the person who completed the greatest number of sets of solitaire, in addition to the prize for the highest score. [4] This second prize allowed for a reward to be given based on a pure psi measure. Also, Series II was introduced as a test of luck, which gave subjects a label to substitute for psi as the process under investigation. This made Series II even more "covert" than Series I, so far as psi is concerned.

Apparatus and Software

A Bierman hardware REG replaced the pseudo-REG used in Series I. Although psi could only be demonstrated in Series I through the process specified by decision augmentation theory, the use of the hardware REG allowed for either DAT or traditional PK to operate, thereby increasing the likelihood of a psi effect of some sort. The button press the subject made that activated the REG was moved from the beginning of the set to the end of the set, or more precisely, the point at which the computer asked the subject to confirm that the four scores he or she had just typed in were correct. Thus, the key game was determined after the scores had been determined (and were in view of the subject), which means that the subject needed to access less information by psi than in Series I. The button press caused the REG to generate a sequence of random digits from 1 to 4. The first of these four target alternatives to be generated 20 times defined the key game. It was decided after the first 25 subjects to increase the prespecified number of times the target number had to be generated from 20 to 200. This not only provided a more sensitive psi task, but it also created a pause of several seconds during the period the REG was being sampled, which was also when subjects were waiting to find out if they could complete another set of solitaire games. Thus, these subjects were likely in a focused, anticipatory state of mind during the actual psi task (although they did not realize it was a psi task).
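The race-to-N selection can be sketched as follows, with Python's pseudo-random generator standing in for the Bierman hardware REG:

```python
import random

def select_key_game(rng, wins_needed=20):
    """Return (key_game, digits_used): the first target alternative (1-4)
    to be generated `wins_needed` times becomes the key game."""
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    digits_used = 0
    while True:
        d = rng.randrange(1, 5)       # next REG digit from 1 to 4
        digits_used += 1
        counts[d] += 1
        if counts[d] == wins_needed:
            return d, digits_used
```

Raising the criterion from 20 to 200 multiplies the number of digits sampled roughly tenfold (between N and 4(N − 1) + 1 digits are consumed for criterion N), which is what produced the several-second pause described above.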

It was decided not to have the computer present the previous high score at the beginning of the session. As this number had differed markedly from session to session, it was an uncontrolled variable in Series I. Also, the previous high score soon became rather large, and subjects who did not get off to a good start in terms of solitaire scores might have become prematurely discouraged.

In all other respects, the software for Series II was identical to Series I.


The Sensation-Seeking Scale was omitted because it had not correlated at all with the more face-valid Garant Scale measure of self-confidence in Series I; thus, the Sensation-Seeking Scale did not seem to fulfill its intended role as a less transparent measure of self-confidence. The attitude questionnaire was also omitted, because only a small number of subjects in Series I were not believers in psi. Even though two of the principal incentives for participation, the prize and the opportunity to play solitaire, were not parapsychological, "goats" for the most part did not volunteer. A question was added to the Solitaire Questionnaire, which asked how lucky (in the sense of good luck) the subject expected to be in the experiment. Finally, in order to further reinforce the packaging of the study as an experiment about luck, the Luckiness Questionnaire was moved to the front of the packet. Otherwise, the tests were administered in the same order as in Series I.


Except as noted above, the sequence of events in Series II was the same as Series I in all significant respects.


Randomness of Targets

The hardware REG was tested by Dr. Richard Broughton using the analyses provided by Bierman and found to be satisfactorily random. An additional test by the author comparing the frequencies of the individual targets (1-4) over 10,000 trials, with the board inserted in the computer to be used in the experiment, yielded a distribution quite close to chance expectation, χ²(3, N = 50) = 0.514. Further tests were not deemed necessary because, as described below, the experimental values were compared to expected values obtained by Monte-Carlo simulations with the same hardware REG and the same software used for the experiment proper.

Dependent Variables

A new dependent variable was added for Series II, having to do with the output of the REG. For each set, a standard score was determined using the formula

(H - Np) / √(Npq),

where H is the number of random digits assigned by the REG to the 1, 2, or 3 games (depending on set number) with high enough scores to allow subjects to continue if one of them was the key game [5], N is the total number of digits, p is the proportion of games represented in H (.25, .50, or .75), and q = 1 - p. If a subject completed more than one set, the set scores were averaged. This new standard score variable, which (like number of sets played) is a pure psi measure, was substituted for the previous standard score measure based on subjects' solitaire scores. The correlation between the squared and unsquared standard scores was r_s = +.358 [6].
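The computation above can be sketched in a few lines. The count of 14 qualifying digits in the usage example is hypothetical, chosen only to illustrate the arithmetic for the Set 3 situation described in footnote 5 (games 3 and 4 qualify, so p = .50):

```python
import math

def reg_standard_score(h, n, p):
    """Standard score (H - Np) / sqrt(Npq) for the REG digits assigned
    to the qualifying games."""
    q = 1 - p
    return (h - n * p) / math.sqrt(n * p * q)

# Hypothetical example: 14 of the 20 REG digits were 3s or 4s.
z = reg_standard_score(14, 20, 0.50)   # (14 - 10) / sqrt(5)
```

A positive score means the REG favored the qualifying games, whether or not the winning digit actually reached the criterion first.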

Overall Effects

The mean number of sets completed by the 50 subjects was 2.820. Based on the same Monte-Carlo method used for Series I, this mean is not significant, p = .858. The method for determining the two-tailed p-values was also the same as used for Series I for this and all the remaining analyses in this section.

For the REG standard scores, I decided a priori to evaluate both the unsquared and squared values, the latter being a measure of variance. Although I expected the null parameters to be the same, regardless of whether the REG standard scores were based on 20- or 200-trial units, I decided it best to obtain separate Monte-Carlo estimates for each case. Estimates were obtained using the same software used in the actual experiment as well as the same hardware REG. For each case, I had the computer generate 2,000 bogus experiments of 25 sessions each. For the 20-trial case, the null unsquared standard score was +.010, whereas the mean experimental standard score was +.832. This outcome is associated with p = .015. Thus, for the 20-trial case, subjects scored significantly above chance. For the 200-trial case, the null unsquared standard score was +.005 and the mean experimental standard score was +.124. This value was not significant, p = .745.

The two null values suggest that the population value is zero, so it was decided to perform a confirmatory single-mean t-test for the 20-trial case, with the subject as the unit of analysis and MCE of zero. This test was also significant, t(24) = 2.16, p = .041. However, a t-test comparing the 20- and 200-trial cases did not yield a significant difference, t(48) = 1.31, p = .198.

A significance test for the 20- and 200-trial cases combined using the Monte-Carlo distributions can be obtained by taking the one-tailed p-values for each case, which were .010 for the 20-trial case and .362 for the 200-trial case, [7] converting them to z scores, and combining the latter by the Stouffer method. This yields z = 1.90, p = .057, two-tailed. The overall mean standard score for the 50 subjects was .478. Evaluated against MCE of zero with the subject as the unit of analysis, t(48) = 1.75, p = .087. Thus, there was suggestive overall hitting in the entire sample for the unsquared REG standard scores.
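The Stouffer combination can be verified in a few lines; applied to the two Monte-Carlo one-tailed p-values, it reproduces the reported z and two-tailed p to within rounding:

```python
from math import sqrt
from statistics import NormalDist

def stouffer(p_values):
    """Combine one-tailed p-values into a single z (Stouffer method)."""
    nd = NormalDist()
    zs = [nd.inv_cdf(1 - p) for p in p_values]
    return sum(zs) / sqrt(len(zs))

z = stouffer([0.010, 0.362])                  # the two one-tailed ps
p_two_tailed = 2 * (1 - NormalDist().cdf(z))
```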

As for the variance, the Monte-Carlo simulations yielded a null squared standard score of 3.210 for the 20-trial case compared to an experimental mean of 3.525. This value was nonsignificant, p = .812. For the 200-trial case, the Monte-Carlo null value was 3.028 and the experimental mean was 4.327. This value too was nonsignificant, p = .251. Because these values are close to chance, no further analyses will be reported. [8]

Correlational Effects

The solitaire questionnaire was factor analyzed in the same manner as Series I. Again, the analysis program extracted two factors. The addiction factor was essentially the same as in Series I, so the addiction scale was unchanged. The new question about luckiness in the experiment loaded highly on the second factor, which otherwise replicated the confidence factor of Series I. Thus, the luckiness item was added to the confidence scale.

To check on possible differences between subject characteristics in the 20-trial and 200-trial conditions, t-tests were performed on each predictor variable with trials category as the grouping variable. The only significant relationship was for the 20-trial subjects to be more confident in winning a prize than the 200-trial subjects, t(47) = 2.41, p = .020. However, when the confidence scale (of which the prize item is a component) was substituted as the dependent variable, the test was not significant, t(47) = 1.43, p = .159.

The one significant relationship to emerge between the dependent and psychological predictor variables was a positive correlation between the squared REG standard scores and the addiction scale, r(48) = +.362, p = .010. This correlation was somewhat higher for the 20-trial subjects, r_s(23) = .539, than for the 200-trial subjects, r_s(23) = .265, but the difference between the two does not approach significance, z = 1.05, p = .293. The significant correlation in Series I between number of sets played and wanting to obtain a high score did not replicate, r_s(48) = -.025.


It would appear that the methodological changes introduced for Series II were somewhat successful in producing more evidence of psi, in that there was almost significant psi-hitting on the standard scores, which attained significance in the 20-trial condition. However, it is not clear which of the several specific modifications introduced to optimize the procedure was responsible for the apparent improvement. The most fundamental change was the shift from a pseudo-REG in Series I to a hardware REG in Series II. As the pseudo-REG required a DAT-type mechanism for success, whereas Series II also allowed for a force-like effect, the data could be construed as lending some support to the force-like models. The other principal change was that subjects needed to obtain less information by psi to succeed in Series II than in Series I. Thus, the relative success in Series II is also incompatible with the hypothesis that psi is goal-oriented, as the complexity of the task was less in Series II. It is also important to note that the standard score measure which was responsible for all the significant outcomes in Series II, including the correlational effect, could not be calculated in the same way for Series I. Thus, in this sense the results of the two series are not comparable. In other words, the reason for the improvement might simply be the introduction of a more sensitive and appropriate measure of psi. The confounding of these various elements, of course, must be taken into account in drawing any firm conclusions from the process-oriented findings discussed in this paragraph.

The somewhat superior performance on standard scores in the 20-trial sessions is also difficult to interpret, in part because they all occurred at the beginning of the experiment. Thus, the difference between the two types of session, which in any event was not a statistically significant difference, could have been a simple decline effect. However, it is clear that increasing the number of trials, and thereby (in theory) the sensitivity and power of the test, did not have the intended effect of improving the manifestation of psi. In future tests of such a comparison, it would be desirable to control for the amount of time consumed in generating the requisite number of trials. The several-seconds delay experienced by subjects in Series II, while they waited to learn whether they could continue, created a psychological climate during the determination of the key game that was not present in Series I.

Although not formally hypothesized, the significant positive correlation between the squared standard scores and the addiction scale gets to the very heart of the psychological rationale behind the solitaire-test paradigm. Following Stanford's PMIR model, there must be some extrinsic motivation (other than getting a high psi score) in covert psi testing, if good scoring is to be expected. In this experiment, one of the principal intrinsic motives was the desire to continue playing the game, apart from how well one was doing. Only certain people would be expected to have such motivation, and these were in fact the people who obtained the highest magnitude standard scores. Of course, an even more welcome finding would have been for these subjects actually to play more sets than the less motivated subjects. It is likely that only the standard scores were evidential because they were the more sensitive measure; they captured a trend toward selecting a favorable key game that may not have been strong enough to lead reliably to its actual selection. In much of Stanford's own PMIR research, it was the standard scores, rather than whether his subjects reliably escaped the boring task, that provided the statistical evidence for the hypothesis (e.g., Stanford & Associates, 1976; Stanford & Stio, 1976).

Finally, it should be noted that all p-values reported in this paper are uncorrected for multiple analyses. Although it is unclear what the proper number of reference analyses should be for each of these p-values (one of the problems with this approach), it is most likely the case that none of the significant results would survive such a correction. As is always the case (whether the correction is applied or not), these results are evidential only to the degree that they are replicable, or to the degree that they contribute to a body of more-or-less comparable effects shown to be collectively significant by meta-analysis. For meta-analyses and other such comparisons, it is the uncorrected p-values that are appropriate. As implied above, the correlation between the squared ESP standard scores and the motivational variable has some added credibility in this regard because it confirms a key component of Stanford's PMIR model, for which there is independent empirical support from Stanford's own research. The one qualification is that Stanford's motivational variables correlated with unsquared standard scores.

(1.) An earlier version of this paper was presented at the 1999 Convention of the Parapsychological Association.

(2.) My gratitude to Dr. Alain for sending me a copy of the scale. It was translated from French into English by a native French speaker.

(3.) My gratitude to an anonymous referee for providing these calculations.

(4.) This prize money was contributed by an anonymous donor, who also paid for the purchase of the hardware REG board used in the experiment.

(5.) For example, assume that the subject is playing Set 3, where the key game must be either the first or second highest scoring game to allow continuation. Assume the scores for the four games are 100, 200, 300 and 400 respectively. As only games 3 and 4 give a high enough score to allow continuation, the total numbers of 3s and 4s generated by the REG for this set were summed to yield H for the set.
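The footnote's computation of H can be sketched in code. The function name and data representation below are illustrative, not taken from the experiment's actual software; the sketch assumes only what the footnote states, namely that the REG repeatedly selects one of the four games and that H counts selections falling on a game scored highly enough to permit continuation.

```python
# Hypothetical sketch of the hit count H described in footnote 5.

def hit_count(reg_outputs, scores, rank_needed):
    """Count REG outputs naming a game ranked in the top `rank_needed`.

    reg_outputs -- game numbers (1-4) produced by the REG for this set
    scores      -- the four game scores for the set
    rank_needed -- how many of the top-scoring games allow continuation
                   (e.g., 2 for Set 3 in the footnote's example)
    """
    # Game numbers (1-based) ranked from highest to lowest score
    ranked = sorted(range(1, 5), key=lambda g: scores[g - 1], reverse=True)
    qualifying = set(ranked[:rank_needed])
    return sum(1 for out in reg_outputs if out in qualifying)

# Footnote example: scores 100, 200, 300, 400; only games 3 and 4 qualify,
# so H is the total number of 3s and 4s in the REG output.
H = hit_count([1, 3, 4, 4, 2, 3], [100, 200, 300, 400], rank_needed=2)
print(H)  # prints 4
```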

(6.) This correlation departed substantially from zero because the mean unsquared standard score departed substantially from zero.

(7.) Because the p-values were derived directly from the Monte-Carlo distributions, the one-tailed ps were not exactly one-half of the two-tailed ps.

(8.) Because the Monte-Carlo estimates of variance were somewhat different for the 20-trial case (3.210) and the 200-trial case (3.028), a second 2,000-experiment analysis was conducted for the 20-trial case. The resulting value was 3.136, which falls between the previous two values. This outcome suggests that the earlier discrepancy was due to the values not converging with only 2,000 experiments, rather than to a population difference between the 20-trial and 200-trial cases.
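The convergence issue raised in footnote 8 is easy to demonstrate in miniature. The sketch below is not the paper's actual simulation; it uses a stand-in statistic (a chi-square variate with 3 degrees of freedom, whose true variance is 6) merely to show that variance estimates from only 2,000 simulated experiments scatter noticeably from run to run, just as the 3.210 versus 3.028 discrepancy suggests.

```python
import random

# Illustrative sketch: Monte-Carlo estimates of a statistic's variance
# based on 2,000 simulated experiments vary from run to run.

def mc_variance(statistic, n_experiments, rng):
    """Sample variance of `statistic` over n_experiments simulations."""
    draws = [statistic(rng) for _ in range(n_experiments)]
    mean = sum(draws) / n_experiments
    return sum((d - mean) ** 2 for d in draws) / (n_experiments - 1)

def stat(rng):
    # Stand-in statistic: sum of 3 squared standard-normal draws
    # (chi-square with 3 df; true variance is 6).
    return sum(rng.gauss(0, 1) ** 2 for _ in range(3))

rng = random.Random(42)
estimates = [mc_variance(stat, 2000, rng) for _ in range(3)]
print([round(e, 3) for e in estimates])  # three noticeably different values
```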


BERGER, R. E. (1988). Psi effects without real-time feedback. Journal of Parapsychology, 52, 1-27.

BROUGHTON, R. S., & PERLSTROM, J. R. (1986). PK experiments with a competitive computer game. Journal of Parapsychology, 50, 193-211.

BROUGHTON, R. S., & PERLSTROM, J. R. (1992). PK in a competitive computer game: A replication. Journal of Parapsychology, 56, 291-305.

FOSTER, A. A. (1940). Is ESP diametric? Journal of Parapsychology, 4, 325-328.

GARANT, V., CHAREST, C., ALAIN, M., & THOMASSIN, L. (1995). Development and validation of a self-confidence scale. Perceptual and Motor Skills, 81, 401-402.

MARTENS, R. (1977). Sport Competition Anxiety Test. Champaign, IL: Human Kinetics Publishing.

MAY, E. C., UTTS, J. M., & SPOTTISWOODE, S. J. P. (1995). Decision augmentation theory: Toward a model of anomalous mental phenomena. Journal of Parapsychology, 59, 195-220.

NELSON, R. D., BRADISH, G. J., DOBYNS, Y. H., DUNNE, B. J., & JAHN, R. G. (1996). FieldREG anomalies in group situations. Journal of Scientific Exploration, 10, 111-141.

SCHMIDT, H. (1974). Comparison of PK action on two different machines. Journal of Parapsychology, 38, 47-55.

SCHMIDT, H. (1975). Toward a mathematical theory of psi. Journal of the American Society for Psychical Research, 69, 267-291.

SMITH, M. D., WISEMAN, R., MACHIN, D., HARRIS, P., & JOINER, R. (1997). Luckiness, competition, and performance on a psi task. Proceedings of Presented Papers: The Parapsychological Association 40th Annual Convention, 187-195.

SPIELBERGER, C. D. (1973). State-trait Anxiety Inventory for Children: A preliminary manual. Palo Alto, CA: Consulting Psychologists Press.

STANFORD, R. G. (1977). Conceptual frameworks of contemporary psi research. In B. B. Wolman (Ed.), Handbook of parapsychology (pp. 823-858). New York: Van Nostrand Reinhold.

STANFORD, R. G., & ASSOCIATES (1976). A study of motivational arousal and self-concept in psi-mediated instrumental response. Journal of the American Society for Psychical Research, 70, 167-178.

STANFORD, R. G., & STIO, A. (1976). A study of associative mediation in psi-mediated instrumental response. Journal of the American Society for Psychical Research, 70, 55-64.

STANFORD, R. G., ZENHAUSERN, R., TAYLOR, A., & DWYER, M. A. (1975). Psychokinesis as psi-mediated instrumental response. Journal of the American Society for Psychical Research, 69, 127-133.

THALBOURNE, M. A., & DELIN, P. S. (1993). A new instrument for measuring the sheep-goat variable: Its psychometric properties and factor structure. Journal of the Society for Psychical Research, 59, 172-186.

WEINER, D. H., & ZINGRONE, N. L. (1986). The checker effect revisited. Journal of Parapsychology, 50, 85-121.

WEST, D. J., & FISK, G. W. (1953). A dual ESP experiment with clock cards. Journal of the Society for Psychical Research, 37, 185-189.

ZUCKERMAN, M. (1979). Sensation seeking: Beyond the optimal level of arousal. Hillsdale, NJ: Lawrence Erlbaum.



Windows solitaire uses a standard 52-card "deck" of playing cards that the computer randomly "shuffles" before each game. At the beginning of the game the subject sees seven "row stacks" placed side by side across the middle of the screen. At the top of each stack is an exposed card with from 0 to 6 face-down cards beneath it, the number of face-down cards increasing by one from the left-most to the right-most stack. The subject "builds" each row stack toward the bottom of the screen by placing over each exposed card (so as to partly shield it) a face-up card one rank lower and of opposite color. For example, a red queen could be placed over a black king, and a black jack could then be placed over the red queen. A covering card can come from one of two places. First, it can come from another row stack. If, as is usually the case, this is the first (top-most) exposed card in a stack, the face-down card beneath it is turned face-up (if there are any left). If a stack is empty, it can be started up again with any king. Second, a covering card can come from the remaining cards in the deck, which initially are in a face-down pile at the top-left corner of the screen. This experiment used the three-card option, which means that the cards in the deck are turned over three at a time. Although this is done in such a way that the bottom two cards in a set are partly exposed, only the fully exposed top card can be moved. Once the subject goes through all the cards, the deck is turned over and the whole process repeated.

The object of the game is to place all the cards in four "suit stacks," which are located in a row to the right of the deck. The suit stacks are empty when the game begins. Each one is built progressively, beginning with the ace and ending with the king, all of the same suit. Once an ace is fully exposed either in a row stack or in the deck, it can be used to begin the corresponding suit stack. A two of the same suit can then be placed on top of the ace, and so forth. Only fully exposed cards (the last card in each row stack or the top-most of each triplet from the deck) can be placed on a suit stack. An example of how the screen looks during a game is presented in Figure 1.
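The two placement rules just described can be sketched in code. The function names and the (rank, suit) representation below are illustrative conveniences, not part of the experiment's software; the logic assumes only what the text states: row stacks build downward by one rank in alternating color, and suit stacks build upward from ace to king within a single suit.

```python
# Sketch of the placement rules described above (representation is
# illustrative: each card is a (rank, suit) tuple).

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
RED = {"hearts", "diamonds"}  # remaining suits (spades, clubs) are black

def can_place_on_row(card, exposed):
    """A card may cover an exposed row-stack card only if it is one
    rank lower and of the opposite color."""
    lower = RANKS.index(card[0]) == RANKS.index(exposed[0]) - 1
    opposite = (card[1] in RED) != (exposed[1] in RED)
    return lower and opposite

def can_place_on_suit(card, top):
    """A suit stack builds A..K in a single suit; `top` is None when
    the stack is empty."""
    if top is None:
        return card[0] == "A"
    return card[1] == top[1] and RANKS.index(card[0]) == RANKS.index(top[0]) + 1

print(can_place_on_row(("Q", "hearts"), ("K", "spades")))  # True
print(can_place_on_row(("J", "clubs"), ("Q", "hearts")))   # True
```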

Subjects manipulate the cards by using the game paddle. They keep making moves until no more moves are available or they "win" the game (all cards are in the suit stacks). The "standard scoring" option was used for this experiment. Each time a face-down card is turned over in a row stack or a card is moved from the deck to a row stack, 5 points are awarded. Whenever a card is moved to a suit stack, 10 points are awarded, but 15 points are deducted if a card is moved from a suit stack back to a row stack. After the first four, 20 points are deducted for each pass through the deck, and 2 points are deducted for every 10 seconds the game is being played. If a game is won, the number of points accumulated up to that time is supplemented by bonus points. How many bonus points the player receives depends on the number of seconds it takes to play the (won) game, and they can be more than 10 times the number of ordinary points accumulated. This provides a powerful incentive to play the game as quickly as possible. The number of elapsed seconds and the accumulated points are displayed in the lower right corner of the screen throughout the game, as are the bonus points at the end of won games.
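The standard-scoring rules above can be summarized in a short sketch. The event names and function signatures are illustrative, and the bonus-point formula for won games is omitted because the text does not specify it.

```python
# Sketch of the "standard scoring" rules described above. Event names
# are hypothetical labels for the scoring events named in the text.

def score_event(event, deck_passes=0):
    """Points awarded or deducted for one scoring event."""
    if event in ("reveal_row_card", "deck_to_row"):
        return 5                                # turn a card face-up / deck to row
    if event == "to_suit_stack":
        return 10                               # card moved onto a suit stack
    if event == "suit_to_row":
        return -15                              # card moved back to a row stack
    if event == "deck_pass":
        return -20 if deck_passes > 4 else 0    # first four passes are free
    raise ValueError(f"unknown event: {event}")

def time_penalty(elapsed_seconds):
    """2 points deducted for every 10 seconds of play."""
    return -2 * (elapsed_seconds // 10)

total = (score_event("deck_to_row") + score_event("to_suit_stack")
         + score_event("deck_pass", deck_passes=5) + time_penalty(35))
print(total)  # 5 + 10 - 20 - 6 = -11
```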
COPYRIGHT 2000 Parapsychology Press
No portion of this article can be reproduced without the express written permission from the copyright holder.
Publication: The Journal of Parapsychology
Date: Jun 1, 2000