Computer anxiety: a comparison of pen-based personal digital assistants, conventional computer and paper assessment of mood and performance.
As the rapid development of information technology has brought, and continues to bring, powerful transformations upon psychological testing, it becomes increasingly important to establish what psychological dimensions may contribute to possible effects of administration medium on psychological testing. Concern for possible effects of the means of collecting data upon that which is collected has a fairly long history (Evan & Miller, 1969). The emphasis on medium effect has been prompted largely by the recent computerization of paper instruments, as illustrated by the American Psychological Association's (1986) Guidelines for Computer-Based Tests and Interpretations: 'When interpreting scores from the computerized versions of conventional tests, the equivalence of scores from computerized versions should be established and documented before using norms or cutting scores obtained from conventional tests' (p. 18).
For the administration modes of an instrument to be considered equivalent, they must produce equal mean scores, comparable distribution and ranking of scores, and correlate to a similar degree with scores on other variables (Hofer & Green, 1985). Such mean score differences have traditionally been the primary focus in examining possible response differences between computerized testing and its paper-and-pencil counterparts (see Honaker, 1988; Mead & Drasgow, 1993). In addition to mean score differences, cross-mode equivalence can be evaluated in terms of 'relativity of equivalence' (Honaker, 1988). That is, regardless of group or population membership, individual participant characteristics may differentially affect a person's responses to a particular administration mode of an instrument. For example, individuals who are uncomfortable with computer use may show negative reactions to a computerized test, but no such reactions to a conventional one.
Investigators have employed a variety of constructs and measures to examine individuals' discomfort with computer use (see Rosen, Sears & Weil, 1987). People who have negative feelings such as anxiety, fear and apprehension when working with computers are usually characterized as having computer anxiety (Glass & Knight, 1988; Heinssen, Glass & Knight, 1987). Brosnan & Davidson (1994), in a recent review, have concluded that between one quarter and one third of the population can be characterized as suffering to some extent from computer anxiety or computer phobia. Negative affect, such as fear or anxiety, may interfere with cognitive processing, and its influence on psychometric assessment has been discussed by Messick (1985). Persistent and intense affect can lead to a simplification of conceptual responding, heightened polarization and extremity of judgment, and pre-emption of attention, working memory and processing resources.
Individual differences in computer anxiety, therefore, may contribute to nonequivalent results in comparing a computerized to a conventional mode of a test. This could potentially affect both mood and cognitive performance, though most work in this area has concerned mood. For example, George, Lankford & Wilson (1992) reported a significant correlation between computer anxiety and self-reported severity of negative mood measured by computers, while no such correlation was found in the paper version of the test. Their results suggest that computer anxiety is an important psychological dimension contributing to non-equivalence.
More recently, Tseng, Macleod & Wright (1997) investigated the relationship between individual characteristics and self-ratings of mood change in either the computer or paper format. Individual characteristics measured in their study included computer anxiety and 'private self-consciousness' (Fenigstein, Scheier & Buss, 1975), which has consistently been shown in the literature to correlate with self-ratings of mood change measured by standard paper assessment (see Gibbons, 1990, for a review). These measures have been shown to be stable characteristics of an individual, and thus represent trait measures (Tseng, 1995). The results showed that self-ratings of mood change from the two modalities correlated divergently with measures of individual characteristics. Affective ratings administered via computers were found to correlate with the measure of 'computer anxiety', but this was not so for those measured by the paper format. The converse relationship was revealed for the measure of 'private self-consciousness': mood scores measured on paper correlated with individual differences in private self-consciousness, but no such relationship was found when measured by computers. These studies suggest that possible interactions of individual characteristics and administration formats need to be addressed when a new medium for test administration is developed. If scores from the administration formats in question covary differently with an external variable, then it is likely that equivalence of the formats will vary as a function of that variable.
The present study was designed to evaluate the use of the newly developed pen-based devices such as Personal Digital Assistants (PDAs) in psychological testing and consists of a partial replication and extension of the study of Tseng et al. (1997). PDAs have advantages in portability, but also provide a more natural interface than that of a conventional computer with keypad, mouse and monitor. The devices may be programmed to allow entry of data in a variety of forms and seem to be highly acceptable to a wide range of individuals. For example, patients in a clinical trial tended to prefer PDAs to paper for entry of diary and questionnaire data, rated such devices as easy to use, and elderly participants were as happy using them as were the young (Drummond, Ghosh, Ferguson, Brackenridge & Tiplady, 1995; Tiplady, 1994). Data from the diary study also suggested an improvement in data quality with PDAs (Tiplady, Crompton & Brackenridge, 1995). Although a survey of attitudes to technology was conducted in these studies, and showed no relationship between comfort with technology and preference for PDA or paper, no examination of whether individual characteristics such as computer anxiety contribute to the test results was carried out.
The objectives of the present study were (1) to assess the effects of individual differences in trait measures of computer anxiety on mood (as previously demonstrated by Tseng et al., 1997) and on tests of cognitive function, using automated and pencil-and-paper assessments, and (2) to compare assessment of mood and cognitive functions using a PDA with computer-based assessment and with traditional paper methods. In both cases it was differential non-equivalence that was of particular interest, that is an interaction between method of presentation and individual differences. Systematic differences between methods of assessment are of less concern in the present context, though it is important to document such effects so that norms may be adjusted appropriately.
Design of the study
The study used a group comparative design in which participants were allocated to one of three groups: paper, computer and PDA. The primary outcome measures were the correlations between ratings of computer anxiety, and measures of mood and cognitive function.
A total of 136 paid participants (61 males and 75 females) aged 16-60 years were drawn from several sources: 27 (19.8 per cent) were from Astra's pool of research volunteers, 26 (19.1 per cent) were recruited from university employees, 41 (30.2 per cent) were school students and 42 (30.9 per cent) were undergraduates.
Allocation of participants to each condition used a randomization stratified by gender and recruitment source. Thus, 27 females and 20 males (mean age 26.6) were assigned to the computer condition, 23 females and 20 males (mean age 27.1) to the Newton PDA condition, and 25 females and 21 males (mean age 27.8) to the paper condition. More than two-thirds of the participants were aged between 16 and 30 years old.
Computer tests were administered using a Macintosh Colour Classic microcomputer with a keyboard and a mouse. Procedures were programmed in the HyperTalk programming language of the HyperCard system. The Apple MessagePad 110 was used for the administration of the PDA tests. Newton Toolkit development software was used for programming the PDA tests.
Each measurement was accompanied by instructions so that participants were able to carry out the procedures at their own pace. In the computer and Newton modes, items were presented sequentially on screen, one at a time. The software was designed to ensure that the test procedure of the computerized versions of the tests was as similar to the paper version as possible.
The paper-and-pencil tests were administered by an experimenter, who used a stop-watch to time the cognitive tests.
The following test measures were administered using the allocated medium (computer, PDA or paper).
Visual Analogue Mood Scales (VAMS). The VAMS consisted of eight items: (1) cheerful and happy, (2) energetic and active, (3) sociable and friendly, (4) depressed and unhappy, (5) fatigued and tired, (6) tense and anxious, (7) irritable and (8) concentration.
Each item was presented together with a 100-mm continuous line either on a computer screen or on paper, whereas a shorter line (40 mm) was used for the PDA mode. Participants were asked to indicate an appropriate position on the line to represent the degree of that mood experienced at that moment. The response was made by making a mark across the line (paper and PDA) or by pointing and clicking with the mouse (computer). By altering the order of items in the original VAMS, an alternative form was devised for measuring mood states after completion of the cognitive tasks. Ratings were scored as percentages of the scale length (thus corresponding to mm for the conventional 100-mm lines).
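The scale-length normalization described above can be sketched as follows. This is an illustrative helper, not code from the original study: a mark position in mm is scored as a percentage of the line length, so the 40-mm PDA lines and 100-mm computer/paper lines yield comparable 0-100 scores.

```python
def score_vams(mark_mm: float, line_mm: float) -> float:
    """Convert a mark position on a visual analogue line to a 0-100 score.

    Scoring as a percentage of scale length makes ratings from lines of
    different physical lengths directly comparable.
    """
    if not 0 <= mark_mm <= line_mm:
        raise ValueError("mark must lie on the line")
    return 100.0 * mark_mm / line_mm
```

On this scheme a mark 30 mm along a 40-mm PDA line receives the same score as a mark 75 mm along a conventional 100-mm line.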
Visual search. Participants searched arrays of letter shapes to locate a single target (L) against a background of non-targets. There were two types of array: in the first the non-targets were Xs, in the second the non-targets were Ts. Both target and non-targets could appear in any orientation. Array sizes were 4, 9, 16 or 25 letters, and 10 presentations were made at each array size for each type (blocked). In the automated conditions, stimulus arrays were presented at intervals of 3 seconds. When participants detected the target, they pointed to it with the mouse (computer) or tapped it with a pen (PDA). In the paper condition, arrays were presented 10 per sheet, and participants crossed out each target with a pencil. For the automated conditions, the mean of the correct response times for each array size and type was taken, omitting the first two responses. For paper, the total time for each page of 10 arrays was divided by 10. Errors were also recorded. Previous work has shown that with X as distractor a target L 'pops out', and that response time shows little effect of array size, while for L against T this does not occur, and response time increases approximately linearly with array size (Hess, Lieb & Schuttler, 1992).
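The two scoring rules described above (mean of correct response times omitting the first two responses for the automated conditions; page time divided by the number of arrays for paper) can be sketched as follows. These are assumed helper functions for illustration, not the study's actual analysis code.

```python
def mean_rt_automated(correct_rts: list) -> float:
    """Mean of correct response times in seconds, omitting the first two
    responses (which may reflect initial familiarization with the task)."""
    usable = correct_rts[2:]
    if not usable:
        raise ValueError("need more than two correct responses")
    return sum(usable) / len(usable)


def mean_rt_paper(page_time_s: float, n_arrays: int = 10) -> float:
    """Paper condition: total time for one page divided by the number of
    arrays on that page (10 per sheet in the visual search task)."""
    return page_time_s / n_arrays
```

The same omit-the-first-two rule was applied to sentence verification, with the paper total divided by 50 sentences instead of 10 arrays.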
Sentence verification. Participants were presented with 50 sentences, such as 'Pigs have legs' or 'Bicycles have wings'. In the automated conditions, sentences were presented one at a time on the screen, and participants recorded whether the sentences were true or false by clicking a mouse button (computer) or tapping on a screen button (PDA). In the paper condition, sentences were printed 25 per page, and participants marked a box to indicate true or false. For the automated conditions, the means of the correct response times were taken, omitting the first two responses. For paper, the total time for each set of 50 sentences was divided by 50. Errors were also recorded. The original pencil-and-paper version of this test was described by Baddeley (1981).
Verbal memory. Participants were shown two lists of seven words, separated by other tests, and asked to remember as many as possible. After a further interval they were presented with a list of all the words from the two lists in random order, and asked to recall in which list each word had been presented. Responses were obtained in a similar fashion to sentence verification.
The following tests were administered using pencil-and-paper to all participants.
Computer Anxiety Rating Scale (CARS). The CARS was developed by Heinssen et al. (1987). It consists of 19 Likert-type statements concerning computer use, of which nine are positively worded and ten negatively worded. The scale has high internal consistency, and is reliable and stable over a test-retest interval of four weeks.
Computer use questionnaire (CUQ). This questionnaire consists of three questions concerning how frequently a computer is used at work, at school or at home. The questions are rated on a scale from 'every day or nearly every day' to 'never use computers'. The questions are as follows: 'How often do you use a computer at work or school?', 'How often do you use a computer at home using applications like word processing, spreadsheets, drawing or educational programs?' and 'How often do you use a computer to play games?'
Self-consciousness Scale. The scale was developed by Fenigstein et al. (1975) to measure individual differences in chronic predisposition to be self-attentive. This scale consists of 23 Likert-type statements. Each statement is rated on a five-point scale ranging from 0 = 'extremely uncharacteristic' to 4 = 'extremely characteristic'. Factor analyses of the scale have revealed two separable aspects of self-consciousness: 'private' and 'public', and one subscale of 'social anxiety'.
Participants were first instructed in the use of the allocated medium, and entered their study reference numbers, gender and age on that medium. They then completed the assessments on the allocated medium, in the following order: VAMS 1; Verbal Memory List 1; visual search; Verbal Memory List 2; sentence verification; verbal memory recall; VAMS 2. Finally, all participants were asked to fill in the SCS, CARS and CUQ using paper.
Previous work using a principal component analysis (Tseng, Wright & MacLeod, 1992) showed that two summary scores could be derived by linear combination of the individual variables in the VAMS, viz: 'positive mood' (i.e. happiness, activation and sociability), and 'negative mood' (i.e. depression, anxiety, irritability and fatigue).
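The collapsing of individual VAMS items into the two summary scores can be sketched as below. The exact component loadings from the principal component analysis are not reported here, so this sketch assumes simple unweighted means; note that the eighth item (concentration) contributes to neither summary score.

```python
def summary_mood(ratings: dict) -> dict:
    """Collapse VAMS item scores (0-100) into 'positive' and 'negative'
    summary mood scores. Unweighted means are an illustrative assumption;
    the original study used PCA-derived linear combinations."""
    positive = ("happiness", "activation", "sociability")
    negative = ("depression", "anxiety", "irritability", "fatigue")
    return {
        "positive": sum(ratings[k] for k in positive) / len(positive),
        "negative": sum(ratings[k] for k in negative) / len(negative),
    }
```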
Pearson product moment correlation analyses were conducted in order to determine the cross-mode differences in the relations of mood scores to individual differences in computer anxiety and private self-consciousness. Participants' computer anxiety and private self-consciousness scores were correlated with their summary mood scores from the first and second VAMS and with their cognitive test performance. Comparisons between correlation coefficients were performed as described by Blalock (1960).
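The two statistical procedures used in the analyses can be sketched as follows: the comparison of correlation coefficients between independent groups (the standard Fisher z approach, which is assumed here to correspond to the method described by Blalock, 1960), and the first-order partial correlation used later to allow for computer experience (Computer Use Scale scores).

```python
import math


def fisher_z(r: float) -> float:
    """Fisher r-to-z transformation of a Pearson correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))


def compare_independent_r(r1: float, n1: int, r2: float, n2: int) -> float:
    """z statistic for the difference between two independent Pearson
    correlations: transform each r, then divide the difference by its
    standard error sqrt(1/(n1-3) + 1/(n2-3))."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se


def partial_r(r_xy: float, r_xz: float, r_yz: float) -> float:
    """First-order partial correlation of x and y controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
```

For example, comparing the visual search reaction time correlations reported later (.48 for the computer group, n = 47, against -.02 for the PDA group, n = 43) yields a z statistic beyond the conventional .05 criterion, consistent with the significant difference reported in the Results.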
Individual characteristics and mood assessment
The groups were comparable as regards the questionnaires on computer use, and the trait scales, as shown in Table 1. The mean scores for the ratings of mood are shown in Table 2. No significant differences were seen between treatment groups on any mood measure overall.
Table 1. Mean scores for trait variables and computer use (standard deviations in parentheses)

Scale                        Computer      Newton        Paper
Computer use (CUQ)           2.74 (1.05)   2.83 (1.01)   2.62 (1.11)
Computer anxiety (CARS)      43.4 (12.4)   41.5 (13.3)   43.8 (15.5)
Private self-consciousness   24.5 (9.2)    23.0 (7.8)    23.7 (5.4)
Correlations between participants' computer anxiety scores and self-reported mood scores are shown in Table 3. It can be seen that the most marked correlations are found in the computer group. Broadly similar patterns of correlation are seen for the straightforward correlation coefficients and after partialling out the effects of experience with computers. In the case of mood, as the levels of computer anxiety increase, mood scores with negative valence increase, while positive mood decreases, when measured via computers.
One significant relationship between computer anxiety and mood was seen for the PDA (positive mood, second assessment, after partialling out the effects of computer use) but none for paper. On the other hand, the analyses of correlations between dispositional private self-consciousness and self-reported mood revealed that the correlations with positive mood when measured by the paper format were significant, but for this measure no significant correlations were found in the groups tested on the computer or PDA.
When comparisons of the correlation coefficients were made between groups, no significant differences were found.
[TABULAR DATA FOR TABLE 2 OMITTED]
Table 3. Pearson correlation coefficients and partial correlations(a) between CARS and SCS scores and self-reported mood scores

                    Computer          Newton            Paper
Mood scores         CARS     SCS     CARS     SCS      CARS     SCS
First VAMS
 Positive mood      -.17     .03     -.24     .18      .07      -.38**
  partial            .22     .03     -.29     .20      .07      -.38*
 Negative mood       .35*    .13      .30     .04      .17       .07
  partial            .30*    .11      .32     .01      .09       .13
Second VAMS
 Positive mood      -.37*    .05     -.27     .16      .08      -.29
  partial           -.48***  .04     -.45**   .13     -.07      -.33*
 Negative mood       .39**   .04      .24    -.24      .24       .12
  partial            .44**   .04      .30    -.26      .25       .22

* p < .05; ** p < .01; *** p < .001 (coefficient significantly different from zero).
(a) Partial correlations, allowing for the values of the Computer Use Scale (italicized in the original), are given on the second line of each pair.
Data concerning the cross-mode equivalence of cognitive performances are presented in Table 4. The mean score comparisons revealed no significant differences in performance between conditions for the sentence verification task or for the verbal memory task. On the other hand, the analyses of reaction time on the visual search task showed significant cross-mode differences. Mean reaction times obtained from the computer and PDA were significantly longer than those found with paper. Error scores from this task did not differ significantly between modes of administration.
The relationship between array size and response times for the visual search task are illustrated in Fig. 1. It can be seen that the slopes for the two display conditions (non-target: T or X) are closely similar for the three media, in spite of the shorter average response times for paper.
[TABULAR DATA FOR TABLE 4 OMITTED]

Correlations for the cognitive test measures are presented in Table 5. No evidence was found indicating non-equivalence for the verbal memory task. On the sentence verification task, participants' computer anxiety correlated significantly with their performance for computer and paper, but no such relationship was found for the PDA mode. For the visual search task, reaction time showed a significant correlation with computer anxiety for computer, but not for paper or PDA. The error scores for visual search did not display any such effects.
Table 5. Pearson correlation coefficients and partial correlations(a) between CARS scores and cognitive performances

Cognitive tasks             Computer   Newton    Paper
Visual search(b)
 Mean response time (s)     .48**      -.02***   .28
  partial                   .25         .06      .27
 No. of errors              .04         .18      .01
  partial                   .17         .09      .00
Sentence verification
 Mean response time (s)     .31*        .10      .29
  partial                   .08         .01      .12
 No. of errors              .05        -.05      .38**
  partial                   .06         .09      .32*
Verbal memory
 No. correct                -.19        .16     -.07
  partial                   -.09       -.11     -.06

* p < .05; ** p < .001 (coefficient significantly different from zero); *** p < .05 (significantly different from the computer condition).
(a) Partial correlations, allowing for the values of the Computer Use Scale (italicized in the original), are given on the second line of each pair.
(b) Data from one participant (313, paper) who made 16 errors (all in X vs. T) were excluded. (No other participant made more than three errors overall.)
When the effects of computer experience were partialled out, these correlations in the computer group became less marked, and were no longer significant. All the significant correlations are in the direction of poorer performance (slower reaction times, or more errors) for participants with greater computer anxiety. When comparisons were made between the various correlation coefficients, the correlation for visual search reaction time (unpartialled) was found to be significantly greater for the computer group than for the PDA group.
These results first of all lend support to the previous findings of Tseng et al. (1997), that mode of administration can affect measures of mood. Although the differences between conditions noted here were not statistically significant, the magnitudes of the correlations found were similar to those found in the previous work. Moreover, the correlations between trait computer anxiety and mood in the computer group, and between private self-consciousness and mood in the paper group, were significantly greater than zero. The observed correlations indicate that computer anxiety was associated with less positive and more negative ratings of mood when administration was by computer. As in the Tseng et al. study, no such relationship was found for paper administration of the mood scales, and private self-consciousness showed an inverse pattern, with participants rating high on this measure showing less positive ratings of mood on paper, but not on computer.
Partialling out the effects of computer experience did not have a substantial effect on either the magnitude or the statistical significance of the correlations with mood. This indicates that although computer experience showed the expected negative correlation with computer anxiety it does not have any marked effect on the correlations with mood independently of computer anxiety.
The pattern found with the PDA group for mood ratings is in general similar to that found for the computer group (although the correlations for negative mood at the second rating appear to be lower, and closer to the figures for paper). This suggests that PDAs may interact with mood measures in a similar way to conventional computers.
The assessment of cognitive function showed a different, and rather more complex picture. There was clear evidence for a correlation between computer anxiety and the two speeded measures (visual search and sentence verification times) when assessment was by computer. This relationship was positive, so the greater the computer anxiety the slower the responses. There was no evidence of any relationship between test performance and computer anxiety in the PDA group, suggesting that PDAs may have advantages over conventional computers in this area. For the paper group, there were trends in the same direction as for the computer group, with a significant correlation for sentence verification errors. It is not clear why such a relationship should be found, and this observation requires further investigation.
When the effects of computer experience were partialled out, by contrast with the situation with the mood scores, the effects with the speeded measures in the computer group were much less marked. This suggests that much of the correlation found is due not to computer anxiety per se, but to experience, possibly relating to the skills required in using a mouse to make the responses. If this is the case, the advantage noted for the PDA would relate to the ease of use of the interface for computer-naive participants, rather than to the device arousing less negative mood.
In comparing cognitive tests between different media, the overall performance is also important. For sentence verification and verbal memory, the mean scores obtained on three media are comparable. For visual search, however, participants took significantly less time on paper than on the other two media. This is probably due to the fact that each page of the paper test had 10 stimulus arrays on it. Thus, participants could look ahead to the next array while marking the current one. In both computer and PDA implementations, a new stimulus array appeared only after completion of the previous response.
Another aspect of the visual search task is the difference between parallel search ('pop-out') in which the time taken to respond to the target is more or less independent of the array size, and serial search, where the response time increases directly with array size (Calvert & Troscianko, 1992; Hess et al., 1992). From Fig. 1 it is clear that such an increase in response time occurs with L against T, but not for L against X, and that this difference is similar for all three media. In this respect then, the test paradigm appears not to be affected by mode of administration.
The confirmation of the impact of mode of administration on mood assessment, and the extension of these findings to more objective cognitive assessments has important implications for the use of information technology in psychological testing. Computerization brings a number of important advantages, in particular the standard of administration and the ability to time individual responses (Ryman et al., 1988; Glaze & Cox, 1991; Maruff et al., 1994). In addition, portable devices such as PDAs can be used in participants' homes and carried around, allowing collection of detailed information on fluctuations of mood or symptomatology (Burnett, Taylor & Agras, 1985; Drummond et al., 1995). However, medium effects do need to be considered when designing tests and in interpreting scores, particularly where norms are derived from different media. Well-designed user interfaces may help to reduce such problems, particularly if they are easy to use by those with no prior experience of the device being used.
In conclusion, the method used to administer psychological tests can interact with individual differences, particularly in computer anxiety, leading to a differential non-equivalence of the test methods. Pen-based devices such as PDAs may offer some useful advantages over conventional computers in this respect.
American Psychological Association (1986). Guidelines for Computer-based Tests and Interpretations. Washington, DC: APA.
Baddeley, A. D. (1981). The cognitive psychology of everyday life. British Journal of Psychology, 72, 257.
Blalock, H. M. J. (1960). Social Statistics, 2nd edn. New York: McGraw-Hill.
Brosnan, M. J. & Davidson, M. J. (1994). Computerphobia - Is it a particularly female phenomenon? The Psychologist, 7, 73-78.
Burnett, K. F., Taylor, B. & Agras, W. S. (1985). Ambulatory computer-assisted therapy for obesity: A new frontier for behavior therapy. Journal of Consulting and Clinical Psychology, 53, 698-703.
Calvert, J. & Troscianko, T. (1992). The role of dopamine in visual attentional processes. Journal of Psychopharmacology, 6, 103.
Drummond, H. E., Ghosh, S., Ferguson, A., Brackenridge, D. & Tiplady, B. (1995). Electronic quality of life questionnaires: A comparison of pen-based electronic questionnaires with conventional paper in a gastrointestinal study. Quality of Life Research, 4, 2-7.
Evan, W. M. & Miller, J. R. (1969). Differential effects on response bias of computer vs. conventional administration of a social science questionnaire: An exploratory methodological experiment. Behavioural Science, 14, 216-227.
Fairbank, B. A., Tirre, W. C. & Anderson, N. S. (1991). Measures of thirty cognitive tasks: Analysis of reliabilities, intercorrelations and correlations with aptitude battery scores. In P. L. Dann, S. H. Irvine & J. M. Collis (Eds), Advances in Computer-based Human Assessment, pp. 51-102. Dordrecht, The Netherlands: Kluwer Academic.
Fenigstein, A., Scheier, M. F. & Buss, A. H. (1975). Public and private self-consciousness: Assessment and theory. Journal of Consulting and Clinical Psychology, 43, 522-527.
George, C. E., Lankford, J. S. & Wilson, S. E. (1992). The effects of computerised versus paper-and-pencil administration on measures of negative affect. Computers in Human Behavior, 8, 203-209.
Gibbons, F. X. (1990). Self-attention and behaviour: A review and theoretical update. Advances in Experimental Social Psychology, 23, 249-303.
Glass, R. C. & Knight, L. A. (1988). Cognitive factors in computer anxiety. Cognitive Therapy and Research, 12, 351-366.
Glaze, R. & Cox, J.L. (1991). Validation of a computerised version of the 10-item (self-rating) Edinburgh Postnatal Depression scale. Journal of Affective Disorders, 22, 73-77.
Heinssen, R. K., Glass, C. R. & Knight, L. A. (1987). Assessing computer anxiety: Development and validation of the Computer Anxiety Rating Scale. Computers in Human Behavior, 3, 49-59.
Hess, R., Lieb, K. & Schuttler, R. (1992). Is already the preattentive vision disturbed in schizophrenics? Proceedings of the 18th C.I.N.P. Congress, p. 241B. New York: Raven Press.
Hindmarch, I. (1980). Psychomotor function and psychoactive drugs. British Journal of Clinical Pharmacology, 10, 189-289.
Hofer, P.J. & Green, B. F. (1985). The challenge of competence and creativity in computerized psychological testing. Journal of Consulting and Clinical Psychology, 53, 826-838.
Honaker, L. M. (1988). The equivalency of computerised and conventional MMPI administration: A critical review. Clinical Psychology Review, 8, 561-577.
Maruff, P., Wood, S., McArthur-Jackson, C., Malone, V. & Benson, E. (1994). Computer-administered visual analogue mood scales - rapid and valid assessment of mood in HIV-positive individuals. Psychological Reports, 74, 39-42.
Mead, A. D. & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449-458.
Messick, S. (1985). Response to changing assessment needs - Redesign of the national assessment of educational progress. American Journal of Education, 94, 90-105.
Popham, S. M. & Holden, R. R. (1990). Assessing MMPI constructs through the measurement of response latencies. Journal of Personality Assessment, 54, 469-478.
Roper, B. L., Ben-Porath, Y. S. & Butcher, J. N. (1991). Comparability of computerised adaptive and conventional testing with the MMPI-2. Journal of Personality Assessment, 57, 278-290.
Rosen, L. D., Sears, D. C. & Weil, M. M. (1987). Computerphobia. Behaviour Research Methods, Instruments and Computers, 19, 167-179.
Ryman, D. H., Naitoh, P., Englund, C. & Genser, S. G. (1988). Computer response time measurements of mood, fatigue and symptom scale items: Implications for scale response time users. Computers in Human Behavior, 4, 95-109.
Swanston, M., Abraham, C., Macrae, W. A., Walker, A., Rushmer, R., Elder, L. & Methven, H. (1993). Pain assessment with interactive computer animation. Pain, 53, 347-351.
Temple, D. E. & Geisinger, K. F. (1990). Response latency to computer-administered inventory items as an indicator of emotional arousal. Journal of Personality Assessment, 54, 289-297.
Tiplady, B. (1994). The use of personal digital assistants in performance testing in psychopharmacology. British Journal of Clinical Pharmacology, 37, 523.
Tiplady, B., Crompton, G. K. & Brackenridge, D. (1995). Electronic diaries for asthma. British Medical Journal, 310, 1469.
Tseng, H. M. (1995). Computer anxiety and computerised assessment of mood change. PhD thesis, University of Edinburgh.
Tseng, H. M., Macleod, H. A. & Wright, P. (1997). Computer anxiety and measurement of mood change. Computers in Human Behavior, 13, 305-316.
Tseng, H. M., Wright, P. & Macleod, H. A. (1992). Computerised and conventional mood assessment across the menstrual cycle. In Abstracts Annual Conference of Society for Reproductive and Infant Psychology. University of Strathclyde, Scotland. London: Society for Reproductive and Infant Psychology.
Vansickle, T. R. & Kapes, J. T. (1993). Comparing paper-pencil and computer-based versions of the Strong-Campbell Interest Inventory. Computers in Human Behaviour, 9, 441-449.
Authors: Tseng, Hsu-Min; Tiplady, Brian; Macleod, Hamish A.; Wright, Peter
Publication: British Journal of Psychology
Date: Nov 1, 1998