Conceptual and methodological issues in the measurement of mental imagery skills in athletes.
The typical research paradigm used in the field of mental practice involves a "before-after" treatment comparison between people who have been exposed to imagery training for a given task/sport and those in various control conditions (e.g. physical practice only). Unfortunately, little attention has been devoted to a rather obvious flaw in this strategy: What happens if the treatment effects are confounded by individual differences in imagery ability? As Hall (1985) points out, "if the subjects in an experimental condition are asked to use an imagery strategy and these subjects are all low imagers, it is likely no effect or only a small effect for the condition will be shown". Clearly, therefore, the use of imagery tests can enhance the accuracy of mental practice research by ensuring that subjects are matched for visualization abilities before experimental treatments are administered. But which imagery tests are most suitable for this purpose?
As Table 1 shows, a variety of psychological tests have been developed to measure individual differences in imagery ability. But how valid and reliable are these tests? Unless they satisfy conventional psychometric criteria, they will have only limited value to coaches and researchers who wish to design or evaluate imagery training programs for athletes. Therefore, the main purpose of this paper is to evaluate the psychometric adequacy of the most popular tests available for the assessment of the mental imagery skills of athletes. A secondary objective will be to consider the principal conceptual and methodological issues encountered in imagery research.
The impetus for this paper comes from two sources. First, although there have been several reviews of imagery tests (see Anderson, 1981; Ernest, 1977; White, Sheehan & Ashton, 1977), the most recent one (by Sheehan, Ashton & White, 1983) was completed almost a decade ago. Since then, additional tests have been published which are potentially useful to athletes because of their focus on kinesthetic imagery. These measures include the Movement Imagery Questionnaire (MIQ) (Hall & Pongrac, 1983) and the Vividness of Movement Imagery Questionnaire (VMIQ) (Isaac, Marks & Russell, 1986). Therefore, an up-to-date evaluation of imagery measures is required. The second reason for exploring this field is that research on individual differences in imagery abilities is plagued by a variety of conceptual and methodological problems. For example, Hiscock (1978) concluded that "it is not clear what imagery questionnaires really measure or what criteria are appropriate for validating them". The implications of these issues for the assessment of the visualization skills of athletes must be examined.
This paper includes a brief outline of the nature and characteristics of mental imagery, examination of the psychometric properties of the most commonly used imagery tests and analysis of the main conceptual and methodological problems afflicting imagery research in sport psychology. Finally, recommendations will be provided to enable future researchers to address these difficulties.
Mental Imagery: Nature and Characteristics
According to Solso (1991), mental imagery refers to "a mental representation of a nonpresent object or event". Although this representation is not confined exclusively to the visual sense (e.g., one can "hear" a favorite tune or "feel" a favorite piece of sports equipment in one's imagination), it is usually associated in sport psychology with using "the mind's eye" (see Porter & Foster, 1988).
In general, researchers (e.g. see Denis, 1989; Kosslyn, 1985) believe that mental images have two fundamental characteristics or dimensions: vividness and controllability. The vividness of an image denotes its clarity and "sharpness" or sensory richness (Richardson, 1988), whereas the term controllability refers to the ease and accuracy with which an image can be transformed or manipulated in one's mind (Kosslyn, 1990).
Traditionally, measurement of imagery ability has involved assessment of either imagery vividness or of imagery controllability skills. The vividness dimension of visualization is generally measured through subjective, self-report tests. These tests usually require people to rate the clarity of images evoked by descriptive items (e.g. "the clapping of hands in applause") on 5- or 7-point Likert scales, although there appears to be no good reason for the counter-intuitive practice of assigning lower scores to greater vividness. In this regard, Mueller (1986) recommends that "responses indicating a positive attitude towards the attitudinal object...result in high scale scores".
By contrast, imagery control skills are usually measured by requiring people to complete objective tasks which elicit spatial visualization abilities. For example, they may be required to transform, in their minds, images of three-dimensional target shapes in order to decide whether or not they are congruent with given alternatives (Shepard & Metzler, 1971). Sometimes, however, the self-report paradigm has been used to assess image controllability, for example Richardson's (1969) modification of Gordon's (1949) test of imagery control.
Although these two properties of imagery are usually considered as separate dimensions, the theoretical basis of a conceptual distinction between them remains obscure. No researcher seems to have raised the obvious question of how an imaginary experience can be controllable if it is not vivid. The neglect of such issues highlights the need for greater conceptual rigor in imagery research.
Classification of Mental Imagery Tests
A literature search of Psychological Abstracts identified the following eight measures as the most commonly used instruments for measuring individual differences in mental imagery. We omitted the unpublished vividness of imagery test developed by Martens (1982) because its psychometric adequacy is unknown.
From this table, two trends are evident. First, seven of the eight tests employ a self-report format. The exception is the Group Mental Rotations Test (GMRT) (Vandenberg & Kuse, 1978), an objective measure which yields "correct" and "incorrect" answers. The predominance of the subjective approach to imagery measurement may help to explain why sport researchers have tended to accept uncritically performers' introspections about their imagery experiences (Murphy, 1990). Second, the QMI, the IDQ and the GMRT have not been used by sport researchers, as far as could be ascertained from a search of the Social Science Citation Index. The neglect of the Questionnaire on Mental Imagery (QMI) (Betts, 1909) is perhaps understandable in view of its length (150 items) and the fact that a shorter, yet psychometrically equivalent, version of this inventory is available (the Shortened Questionnaire on Mental Imagery, SQMI) (Sheehan, 1967). It is less clear, however, why Paivio's (1971) Individual Differences Questionnaire (IDQ) and Vandenberg & Kuse's (1978) Group Mental Rotations Test have been ignored, especially as imagery styles and imagery control skills are important variables in research on "mental practice." Recent research by Moran (1991) suggests that this latter test may be useful in the prediction of canoe-slalom performance.
[Table 1: The most commonly used tests of mental imagery ability; tabular data omitted from this version.]
Construct Validity of Imagery Tests
The construct validity (Cronbach & Meehl, 1955) of a psychological test refers to the degree to which it accurately measures the theoretical construct which it was designed to measure. In general, construct validity is inferred from evidence of satisfactory reliability and adequate convergent and discriminant validity (Allen & Yen, 1979). Briefly, the "reliability" of a test concerns the consistency or stability of its scores over time. "Convergent validity" is demonstrated by high correlations between different measures of the same construct, and "discriminant validity" by low correlations between scores on tests measuring different constructs. Let us summarize this evidence for each test in Table 1 before discussing some general issues in the field.
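The reliability concept defined above is typically operationalized as the Pearson correlation between scores on two administrations of the same test. As a minimal illustration (the six subjects' vividness scores below are hypothetical, not drawn from any study cited here):

```python
def pearson_r(x, y):
    """Pearson correlation between two score lists
    (e.g. a test administration and its retest)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical vividness scores for six subjects, tested twice;
# a high r indicates stable (reliable) scores over time.
test = [21, 34, 28, 40, 25, 31]
retest = [23, 33, 30, 38, 27, 30]
r = pearson_r(test, retest)
```

The same correlation coefficient, computed between different tests rather than different occasions, is what underlies the convergent and discriminant validity evidence discussed below.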
The Questionnaire on Mental Imagery (QMI), whether in its original form (Betts, 1909) or abbreviated version (SQMI) (Sheehan, 1967), is the prototypical test of imagery vividness. Most of the other scales in this field adopt its Likert-style rating format (and sometimes borrow its items). The shortened version of this test is a 35-item, self-report instrument which displays high internal consistency (e.g., r = 0.95; Juhasz, 1972), satisfactory stability (e.g., 0.78 over a 7-month interval; Sheehan, 1967) and a homogeneous factorial structure (Sheehan, 1967).
The Gordon Test of Imagery Control (GTIC, Gordon, 1949, modified by Richardson, 1969) is a short self-report measure which appears to be quite reliable. For example, Mc Kelvie & Gingras (1974) reported a split-half value of 0.76 and a test-retest coefficient of 0.84 over a three-week interval. The purity of this instrument is questionable, however, as it correlates significantly (r = 0.47) with the QMI, which assesses vividness, not controllability, of imagery. This correlation (which shows that 22% of variance is shared between a vividness and a controllability scale) raises the possibility that the imagery dimensions of vividness and controllability are neither conceptually nor empirically distinguishable. If they are not, then imagery researchers (e.g. Kosslyn, 1985) and sport psychologists (e.g. Weinberg, 1988) who promulgate this distinction may need to revise their theories. A further weakness of the GTIC stems from its relative brevity. More precisely, its power to generate variance (which is the purpose of any psychological test) is curtailed by the fact that it has only 12 items. To increase this variance-generative capacity, psychometric theory suggests that the test should be lengthened (Ghiselli, Campbell & Zedeck, 1981).
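The two figures in this paragraph follow from standard psychometric formulas: the 22% shared variance is simply the squared correlation, and the predicted gain from lengthening a test is usually computed with the Spearman-Brown prophecy formula. A brief sketch (the decision to double the GTIC is illustrative, not a recommendation from the sources cited):

```python
def shared_variance(r):
    """Proportion of variance two tests share: the squared correlation."""
    return r ** 2

def spearman_brown(reliability, k):
    """Predicted reliability when a test is lengthened k-fold
    with items comparable to the originals (Spearman-Brown
    prophecy formula)."""
    return k * reliability / (1 + (k - 1) * reliability)

# GTIC vs. QMI: r = 0.47, so about 22% of variance is shared.
overlap = shared_variance(0.47)

# Illustration: doubling the 12-item GTIC (test-retest r = 0.84)
# to 24 items would be expected to raise its reliability.
predicted = spearman_brown(0.84, 2)
```

The prophecy formula assumes the added items behave like the existing ones, which is itself an empirical question for any lengthened GTIC.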
The Individual Differences Questionnaire (IDQ) (Paivio, 1971) purports to measure both visual and verbal "thinking modes." It is not widely used in imagery assessment because its verbal scale is deemed irrelevant to the measurement of visualization skills. However, this test seems to be adequate psychometrically (see Paivio & Harshman, 1983 for factor analytic evidence).
The Vividness of Visual Imagery (VVIQ) (Marks, 1973) scale is probably the best researched, and most popular, test of imagery currently available (see Marks, 1989b, for a bibliography of relevant research). This inventory is a 16-item extension of the visual sub-scale of Betts' (1909) questionnaire. Its construct validity has been debated vigorously (Chara & Hamm, 1989; Mc Kelvie, 1990; Marks, 1989a). It appears to be quite reliable (e.g. estimates of test-retest reliability range from 0.67 to 0.87) (Mc Kelvie, 1990) and can claim validation support from evidence that it predicts various types of visual memory performance (Mc Kelvie & Demers, 1977), although this finding was refuted by Chara & Hamm (1989). Part of the difficulty here, as with most imagery tests, is that "there is no single criterion ready for the VVIQ to predict" (Mc Kelvie, 1990, p. 552).
The Group Mental Rotations Test (GMRT) (Vandenberg & Kuse, 1978) is a 40-item, objective test of spatial visualization ability. It requires subjects to make judgements concerning the orientation of various three-dimensional drawings (based on stimuli used by Shepard & Metzler, 1971). Each item contains a target figure and four alternatives, two of which are correct and two of which are incorrect (mirror-image) "distractors." Performers must decide which two of the four alternatives are the same as (although in a different orientation from) the target figure.
This test displays impressive reliability: Vandenberg & Kuse (1978) reported a KR-20 coefficient of 0.88 and a test-retest r of 0.83, and Moran (1991) obtained a Cronbach's alpha of 0.90. Convergent validity is supported by the fact that it is positively correlated with criterion measures of spatial ability (e.g. the Spatial Relations sub-test of the Differential Aptitude Tests; Bennett, Seashore & Wesman, 1947). In addition, GMRT scores were significantly correlated (Spearman's rho = 0.44, p < 0.05) with successful performance in World Cup canoe-slalom racing (Moran, 1991).
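The internal-consistency coefficients cited here are computable directly from a subjects-by-items score matrix; Cronbach's alpha reduces to KR-20 when items are scored dichotomously (right/wrong), as on the GMRT. A minimal sketch with made-up data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha from a subjects-by-items score matrix.
    For dichotomous (0/1) items this equals KR-20."""
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in scores])
                 for i in range(n_items)]
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / variance(totals))

# Hypothetical 0/1 scores: 4 subjects by 3 rotation items.
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
alpha = cronbach_alpha(scores)
```

With real data, a matrix of 40 GMRT items over a large sample would be used in exactly the same way.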
The remaining two tests in Table 1 may be useful to sports researchers because they attempt to measure people's ability to imagine movements. These scales could be useful in sport research because some studies on mental practice suggest that kinesthetic imagery facilitates motor skill learning (Hale, 1982).
The earlier of the two, the Movement Imagery Questionnaire (MIQ) (Hall & Pongrac, 1983), comprises 18 items which claim to assess the capacity to see and feel movements. It has been used in recent research on mental practice (Jowdy & Harris, 1990) but has not yet been validated adequately.
Finally, the Vividness of Movement Imagery Questionnaire (VMIQ) (Isaac et al., 1986) is a kinesthetic version of Marks' (1973) VVIQ. It contains 24 items which measure "visual imagery of movement itself and imagery of kinesthetic sensations". The test-retest reliability of this scale is estimated at 0.76 (for a 3-week interval) (Isaac et al., 1986). Convergent validity is supported by a significant correlation with the VVIQ (r = 0.81). So, it seems that the VMIQ is slightly better established than the MIQ.
In summary, the imagery tests in Table 1 appear to have satisfactory internal consistency and test-retest reliability (by the conventional criterion of equalling or exceeding r = 0.7) (Kline, 1983). But in psychometric theory, reliability is a necessary but not sufficient condition for validity: a test must be reliable to be valid, but reliability, by itself, does not guarantee validity. To use a sporting analogy, it is not sufficient for a tennis player merely to clear the net consistently when serving (a reliable performance): the player must also place the ball within the opponent's appropriate service box (a "valid" performance). Unfortunately, the validity of most of the imagery tests in Table 1 is unknown. In other words, we do not know if they hit the target, because their convergent validity (i.e. the degree to which they converge on the imagery construct by being positively inter-correlated) and discriminant validity (i.e. the degree to which they diverge from unrelated constructs by being uncorrelated with them) have not been established.
The absence of validity data for many imagery tests is doubly problematic. On one hand, this situation could mean that researchers are publishing tests prematurely, before the arduous process of test validation has been completed. Unfortunately, this practice appears to be quite common in sport psychology. To illustrate, Ostrow (1990) reported that 20% of the tests in his directory of psychological inventories for sport lacked any validation data. On the other hand, the questionable validity of imagery inventories raises a problem for journal editors: Should manuscripts which describe new tests be accepted for publication in the absence of adequate psychometric evidence? After all, we must remember that construct validation of psychological tests is an ongoing enterprise, not a "once-off" exercise. To explain, it is not the test itself which is validated but rather the tester's inferences drawn from the theory under investigation (Cronbach & Meehl, 1955).
Conceptual and Methodological Difficulties in Imagery Assessment
Several problems afflict current research on imagery assessment: two are conceptual and two are methodological.
The first conceptual difficulty in this field concerns the definition and homogeneity of the construct of mental imagery. Recent research (e.g. Kosslyn, 1990) suggests that imagery ability is not a single trait. Instead, it involves such diverse skills as the ability to generate, inspect and rotate spatial information. Furthermore, these skills may involve different aspects of the information processing system. For example, whereas image generation activates long-term memory processes, image control appears to draw upon working memory resources (Denis, 1989). Indeed, the multidimensionality of the imagery construct may explain why self-report and objective tests of imagery tend to be largely unrelated (Ernest, 1977). To explain, the former scales probably tap image generation skills whereas objective tasks may assess image transformation ability. Unfortunately, most researchers in sport psychology still cling to the idea that "imagery" skills are quantifiable by using a single test score.
Another conceptual problem in the study of imagery stems from its ephemeral character. More precisely, it is unclear what behavioral referents should be used to validate this phenomenological construct. A possible answer to this question could come from recent research by Kosslyn et al. (1990) on the purpose of imagery in everyday cognition. Briefly, using evidence from a diary study, they found that people reported using imagery for such diverse activities as problem solving (e.g. forming cognitive maps for navigation purposes), mental practice (e.g. forming an image of a swimming stroke), memory improvement (e.g. trying to remember a name by forming an image of someone's face) and emotional or motivational inducement (e.g. using imagery to inculcate a relaxed state when under stress). These findings, especially the last-mentioned (e.g. as it relates to "success-visualization"), suggest possible research avenues for construct validation of imagery tests in sport. In particular, what is needed is an intensive study of the purposes for which athletes employ imagery in training and competition. Then, these behavioral applications of imagery could be used to validate new tests of this construct. A promising start to this task may be found in the research by Hall et al., (1990) on imagery use among athletes.
The main methodological flaw in the field of imagery assessment arises from the fact that subjects may experience problems in making the judgements about mental experiences which most imagery tests (especially those concerned with vividness) require. For example, as visualization skills cannot be measured directly, researchers must either rely on the veracity of people's reports about them (the self-report strategy) or else devise tasks which elicit these skills directly (objective measurement approach). Each of these strategies faces difficulties.
The problem with self-report techniques is that they are susceptible to contamination from several response biases (see Di Vesta, Ingersoll, & Sunshine, 1971; Ernest, 1977). For example, people's ratings on vividness tests may be influenced by two response "sets." First, the bias of "social desirability" may influence respondents to portray themselves as having a vivid imagination regardless of their true imagery abilities (Di Vesta et al., 1971). A second potential bias concerns "acquiescence," or the tendency to apply the same rating to each vividness item, regardless of its content. One way of counteracting contamination by the former bias is to administer a scale of social desirability (Crowne & Marlowe, 1964) routinely when assessing imagery skills so that the veracity of subjects' responses could be monitored. The problem of acquiescence could be reduced significantly by using reverse-scoring procedures (see Mueller, 1986).
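The reverse-scoring remedy mentioned above amounts to keying half the items in the opposite direction, so that an acquiescent respondent who gives the same rating throughout produces a near-chance total rather than an extreme one. The transformation itself is trivial (the 5- and 7-point scales below are illustrative):

```python
def reverse_score(rating, scale_max, scale_min=1):
    """Reverse-key a Likert rating: with half the items keyed this
    way, acquiescent (same-answer) response sets become detectable
    and partially self-cancelling in the scale total."""
    return scale_max + scale_min - rating

# On a 5-point vividness item, a rating of 2 becomes 4 when
# the item is reverse-keyed.
flipped = reverse_score(2, 5)
```

A social desirability scale (Crowne & Marlowe, 1964), by contrast, cannot be handled by rescoring; it must be administered alongside the imagery test and used to screen or adjust suspect protocols.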
Another methodological difficulty for self-report scales of vividness stems from inconsistency of subjects' ratings. Thus there is "no way of knowing whether or not Ss are applying the same standard in making their ratings" (Anderson, 1981, p. 157) when they are completing the inventories.
The problem with objective imagery assessment, however, is that it is built on rather shaky theoretical foundations. To explain, researchers cannot easily devise tasks which elicit imagery skills unless they understand the behavioral functions of mental imagery in everyday life. This difficulty was identified both by Kosslyn, Brunn, Cave & Wallach (1984) and by Poltrock & Brown (1984) who complained that imagery test items owe more to the intuition of their authors than to "any theory of how imagery is used". Indeed, cognitive psychologists are sharply divided on the issue of how imagery is used in daily life--the "functional significance" controversy (Eysenck & Keane, 1990, p. 208). On the one hand, the "functional equivalence" theorists (e.g. Kosslyn, 1990) believe that mental imagery uses the same information processing pathways which are activated in perception. Conversely, other researchers (e.g. Pylyshyn, 1981) argue that imagery is merely an "epiphenomenon" or artefact of our experience. These latter theorists believe that while imagery may accompany our thinking, it has no causal role in directing our behavior.
This debate over the "functional significance" of imagery has implications for research on mental practice (explained at the beginning of this paper). Specifically, functional equivalence theory seems to support the "neuromuscular" explanation (Jacobson, 1931) of mental practice on the grounds that, as imagery shares processing resources with perception, the imaginary rehearsal of a motor skill could activate neuromuscular activity which resembles that found in physical performance of the skill in question. But critics like Pylyshyn (1981) would argue that mental practice effects are caused not by imagery, but by richer or more abstract propositional coding of the elements of the motor skill being rehearsed (the "symbolic learning" explanation of mental practice; see Feltz & Landers, 1983).
Until this debate over the functional significance of imagery is resolved, sports researchers could benefit from conducting "task analyses" of the imagery requirements of various sport skills. This would encourage theoretical examination of the reasons why visualization can be useful to athletes in some sports but not in others.
Summary and Recommendations
In summary, this paper has highlighted a variety of conceptual and methodological difficulties which afflict the measurement of mental imagery skills. Unfortunately, the dearth of theoretical analysis in sport psychology, as noted by Landers (1983), has led to a neglect of many of these issues. Accordingly, at present, there is an unfortunate disjunction between theory and measurement in visualization research. The following four recommendations may help to bridge this gulf.
(1) Greater collaboration is required between researchers from the cognitive and psychometric branches of sport psychology. This is necessary because theoretical advances by cognitive psychologists (e.g. Kosslyn, 1990) in understanding the nature and properties of imagery have not yet been translated into the domain of imagery assessment, where the self-report paradigm reigns supreme.
(2) Given the multidimensionality of the imagery construct, researchers could benefit from using such techniques as multidimensional scaling in the validation of imagery inventories. As Murphy (1990) points out, "this technique holds great promise for the assessment of imagery as it allows judgments about an image to be made on a wide range of dimensions" such as level of detail experienced, intensity, kinesthetic qualities, controllability and vividness.
(3) Rigorous programs of construct validation should be undertaken for imagery inventories used in sport research (e.g. those listed in Table 1). This research could begin usefully by evaluating the widely-used but psychometrically unproven imagery test developed by Martens (1982). Such validation studies should be conducted on samples of athletes from different sports, rather than on samples of convenience from the general population.
(4) Finally, journal editors should not accept for publication manuscripts which are based on the use of unvalidated or idiosyncratic measures of imagery.
Allen, M. J., & Yen, W. M. (1979). Introduction to measurement theory. Belmont, CA: Brooks/Cole.
Anderson, M. P. (1981). Assessment of imaginal processes: Approaches and issues. In T. V. Merluzzi, C. R. Glass, & M. Genest (Eds), Cognitive assessment (pp. 149-187). New York: Guilford.
Bennett, G. K., Seashore, H. G., & Wesman, A. G. (1947). Differential aptitude tests. New York: Psychological Corporation.
Betts, G. H. (1909). The distribution and functions of mental imagery. New York: Teachers College, Columbia University.
Chara, P. J., & Hamm, D. A. (1989). An enquiry into the construct validity of the Vividness of Visual Imagery Questionnaire. Perceptual and Motor Skills, 69, 127-136.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.
Crowne, D. P., & Marlowe, D. (1964). The approval motive: Studies in evaluative dependence. New York: John Wiley.
Denis, M. (1989). Image and cognition. New York: Harvester Wheatsheaf.
Di Vesta, F. J., Ingersoll, G., & Sunshine, P. (1971). A factor analytic analysis of imagery tests. Journal of Verbal Learning and Verbal Behaviour, 10, 471-479.
Ernest, C. H. (1977). Imagery ability and cognition: A critical review. Journal of Mental Imagery, 2, 181-216.
Eysenck, M. W., & Keane, M. T. (1990). Cognitive psychology: A student's handbook. Hillsdale, NJ: Lawrence Erlbaum.
Feltz, D., & Landers, D. (1983). The effects of mental practice on motor skill learning and performance: A meta-analysis. Journal of Sport Psychology, 5, 25-57.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioural sciences. San Francisco: W. H. Freeman.
Gordon, R. (1949). An investigation into some of the factors that favour the formation of stereotyped images. British Journal of Psychology, 39, 156-167.
Hale, B. D. (1982). The effects of internal and external imagery on muscular and ocular concomitants. Journal of Sport Psychology, 4, 379-387.
Hall, C. R. (1985). Individual differences in the mental practice and imagery of motor skill performance. Canadian Journal of Applied Sport Sciences, 10, 17S-21S.
Hall, C. R., & Pongrac, J. (1983). Movement imagery questionnaire. London, Ontario: University of Western Ontario.
Hall, C. R., Rodgers, W. M., & Barr, K. A. (1990). The use of imagery by athletes in selected sports. The Sport Psychologist, 4, 1-10.
Halpern, D. F. (1986). Sex differences in cognitive abilities. Hillsdale, NJ: Lawrence Erlbaum.
Hiscock, M. (1978). Imagery assessment through self-report: What do imagery questionnaires measure? Journal of Consulting and Clinical Psychology, 46, 223-230.
Isaac, A., Marks, D. F., & Russell, D. G. (1986). An instrument for assessing imagery of movement: The vividness of movement imagery questionnaire (VMIQ). Journal of Mental Imagery, 10, 23-30.
Jacobson, E. (1931). Electrical measurements of neuromuscular states during mental activities. American Journal of Physiology, 96, 115-121.
Jowdy, D. P., & Harris, D. V. (1990). Muscular responses during mental imagery as a function of motor skill level. Journal of Sport and Exercise Psychology, 12, 191-201.
Juhasz, J. B. (1972). On the reliability of two measures of imagery. Perceptual and Motor Skills, 35, 874.
Kaufman, G. (1981). What is wrong with imagery questionnaires? Scandinavian Journal of Psychology, 24, 247-249.
Kline, P. (1983). Personality: Measurement and theory. London: Hutchinson.
Kosslyn, S. M. (1985). Mental imagery ability. In R. J. Sternberg (Ed), Human abilities: An information processing approach (pp. 151-172). San Francisco: Freeman.
Kosslyn, S. M. (1990). Mental imagery. In D. N. Osherson, S. M. Kosslyn, & J. M. Hollerbach (Eds), Visual cognition and action, Vol. 2 (pp. 74-97). Cambridge, MA: MIT.
Kosslyn, S. M., Brunn, J., Cave, K. R., & Wallach, R. W. (1984). Individual differences in mental imagery ability: A computational analysis. Cognition, 18, 195-243.
Kosslyn, S. M., Seger, C., Pani, J. R., & Hillger, L. A. (1990). When is imagery used in everyday life? A diary study. Journal of Mental Imagery, 14, 131-152.
Landers, D. M. (1983). Whatever happened to theory in sport psychology? Journal of Sport Psychology, 5, 135-151.
Mc Kelvie, S. J. (1990). The Vividness of Visual Imagery Questionnaire: Commentary on the Marks-Chara debate. Perceptual and Motor Skills, 70, 551-560.
Mc Kelvie, S. J., & Gingras, P. P. (1974). Reliability of two measures of visual imagery. Perceptual and Motor Skills, 39, 417-418.
Mc Kelvie, S. J., & Demers, E. G. (1977, June). Individual differences in visual imagery and memory performance. Paper presented at the 38th annual meeting of the Canadian Psychological Association, Vancouver.
Marks, D. F. (1973). Visual imagery differences in recall of pictures. British Journal of Psychology, 64, 17-24.
Marks, D. F. (1989a). Construct validity of the Vividness of Visual Imagery Questionnaire. Perceptual and Motor Skills, 69, 459-465.
Marks, D. F. (1989b). Bibliography of research utilizing the Vividness of Visual Imagery Questionnaire. Perceptual and Motor Skills, 69, 707-718.
Martens, R. (1982). Imagery in sport. Paper presented at the Medical and Scientific Aspects of Elitism in Sport Conference, Brisbane, Australia.
Matlin, M. W. (1989). Cognition. (2nd ed.). New York: Holt Rinehart & Winston.
Moran, A. (1991, September 12). Measuring the mental imagery skills of athletes: A psychometric evaluation of available tests. Paper presented at VIII European Congress of Sport Psychology, Deutsche Sporthochschule Köln, Köln, Germany.
Mueller, D. J. (1986). Measuring social attitudes: A handbook for researchers and practitioners. New York: Teachers College.
Murphy, S. M. (1990). Models of imagery in sport psychology. Journal of Mental Imagery, 14, 153-172.
Murphy, S. M., Jowdy, D. P., & Durtschi, S. K. (1989). Report on the United States Olympic Committee survey on imagery use in sport: 1989. Unpublished research report, U. S. Olympic Committee, Colorado Springs, CO.
Ostrow, A. C. (1990). (Ed). Directory of psychological tests in the sport and exercise sciences. Morgantown, WV: Fitness Information Technology.
Paivio, A. (1971). Imagery and verbal processes. New York: Holt Rinehart and Winston.
Paivio, A., & Harshman, R. (1983). Factor analysis of a questionnaire on imagery and verbal habits and skills. Canadian Journal of Psychology, 37, 461-483.
Poltrock, S. G., & Brown, P. (1984). Individual differences in visual imagery and spatial ability. Intelligence, 8, 93-138.
Porter, K., & Foster, J. (1988, January). In your mind's eye. World Tennis, pp. 22-23.
Pylyshyn, Z. W. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88, 16-45.
Richardson, A. (1969). Mental imagery. New York: Springer.
Richardson, J. T. E. (1988). Vividness and unvividness: Reliability, consistency and validity of subjective imagery ratings. Journal of Mental Imagery, 12, 115-122.
Sheehan, P. W. (1967). A shortened version of Betts' questionnaire upon mental imagery. Journal of Clinical Psychology, 23, 386-389.
Sheehan, P. W., Ashton, R., & White, K. (1983). Assessment of mental imagery. In A. A. Sheikh (Ed.), Imagery: Current theory, research and application (pp. 189-221). New York: John Wiley.
Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701-703.
Smith, D. (1987). Conditions that facilitate the development of sport imagery training. The Sport Psychologist, 1, 237-247.
Solso, R. L. (1991). Cognitive psychology (3rd ed). Boston: Allyn & Bacon.
Vandenberg, S., & Kuse, A. R. (1978). Mental rotations: A group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47, 599-604.
Weinberg, R. S. (1988). The mental advantage: Developing your psychological skills in tennis. Champaign, Illinois: Leisure.
White, K., Sheehan, P. W., & Ashton, R. (1977). Imagery assessment: A survey of self-report measures. Journal of Mental Imagery, 1, 145-147.
Publication: Journal of Sport Behavior, Sep 1, 1993.