A comparison of two measures of computer self-efficacy.
Computer self-efficacy is defined as "... a judgment of one's capability to use a computer" (Compeau & Higgins, 1995, p. 192). The computer self-efficacy (CSE) construct has provided insight into factors affecting skill development and the motivation to use computers (Marakas, Yi, & Johnson, 1998). Previous research, for example, indicates that CSE plays a significant role in the development of computer skills (Gist, Schwoerer, & Rosen, 1989), an individual's decision to use computers (Compeau & Higgins, 1995), expectations of success with computers (Compeau, Higgins, & Huff, 1999), and computer-dependent course performance (Karsten & Roth, 1998a). While the early research involving CSE has been fruitful and informative, the identification and measurement of CSE remain matters of research and practical concern (Marakas et al., 1998). This concern has led to a call for additional research aimed at improved measurement of CSE and a better understanding of the nature of the CSE construct (Marakas et al., 1998).
This study answers this call through a comparison of two popular measures of CSE. A review of the literature indicates that versions of the CSE scales developed independently by Murphy, Coover, and Owen (1989) and by Compeau and Higgins (1995) have been two of the most frequently employed measures in CSE studies conducted to date (Marakas et al., 1998). Though both measures attempt to capture the same construct (Marakas et al., 1998), visual inspection of the respective measures (see appendix) suggests obvious differences in approach to CSE assessment. Murphy et al. (1989) measure CSE as an individual's perceptions of his or her ability to accomplish specific tasks and activities involved in operating a computer. Compeau and Higgins (1995), on the other hand, assess CSE as an individual's perceptions of his or her ability to use a computer application in the accomplishment of a job.
Though both measures have provided meaningful insights into computing behavior, it is reasonable to ask if the two instruments capture the same CSE construct. Answering this question appears likely to benefit both the research and applied communities. From a research perspective, a comparison of measures is a step toward improving and refining measurement of the CSE construct. From the applied perspective, such a comparison should assist computer educators and trainers in selecting the most relevant and informative available tool for assessing the computer skills and behaviors of students and trainees. Therefore, this study has several purposes: (a) to directly compare the two measures to determine if they capture the same construct, (b) to compare the nature of the information each measure supplies relevant to the assessment and understanding of computing skills and behaviors, and (c) to identify any apparent advantages or disadvantages in administration or analysis.
The paper is organized as follows. A brief overview of the self-efficacy and computer self-efficacy constructs is provided first. A review of the CSE literature relevant to the comparison of two CSE measures investigated is presented next. Study methodology is then described, followed by the presentation and discussion of study results. Study conclusions, limitations, and directions for future research complete the paper.
SELF-EFFICACY AND COMPUTER SELF-EFFICACY
Computer self-efficacy is based on self-efficacy, a well-established construct with its origins in Social Cognitive Theory (Bandura, 1986). Self-efficacy is the belief that one has the capability to perform a specific task (Bandura, 1997). Individuals who perceive themselves as capable of performing certain tasks or activities are defined as high in self-efficacy and are more likely to attempt and execute these tasks and activities. People who perceive themselves as less capable are less likely to attempt and execute these tasks and activities, and are accordingly defined as lower in self-efficacy (Barling & Beattie, 1983; Bandura, Adams, & Beyer, 1977).
Self-efficacy has three dimensions. Magnitude refers to the level of task difficulty that individuals believe they can attain. Strength indicates whether the conviction regarding magnitude is strong or weak. Generality describes the degree to which the expectation is generalized across situations (Gist, 1987). Estimations of self-efficacy are formed through a dynamic weighing, integration, and evaluation of complex cognitive, social, and personal performance experiences (Marakas et al., 1998). It is important to note that self-efficacy involves more than skill assessment. Self-efficacy reflects not only a perception of one's ability to perform a particular task based on past performance or experience, but also forms a critical influence on future intentions (Bandura, 1997). Studies across a wide range of research domains have consistently found self-efficacy to be a strong predictor of subsequent task-specific performance. Readers are referred to Bandura (1997) and Gist and Mitchell (1992) for a more thorough review of the self-efficacy literature.
The insights offered by self-efficacy research, coupled with an increased interest in the various behavioral factors that can affect computer use and performance, have encouraged the identification and measurement of CSE (Compeau & Higgins, 1995; Hill, Smith, & Mann, 1987; Murphy et al., 1989). While a review of the literature indicates that most researchers agree that CSE refers to a judgment of one's ability to use a computer, there is less agreement regarding its measurement. A meticulous review of forty studies involving the CSE construct found that nearly half employed self-developed measures of CSE (Marakas et al., 1998). The measures reviewed varied in item number and content, level and dimensions of CSE assessed, and evidence of formal validation activities. Each of these characteristics is addressed in more detail in the discussion of the two CSE measures that follows. Readers are referred to Marakas et al. (1998) for a relatively recent and thorough review of the identification and measurement of the CSE construct.
THE CSE MEASURES
Both the Compeau and Higgins (1995) and Murphy et al. (1989) measures assess general CSE, defined as "... an individual's judgment of efficacy across multiple computer domains" (Marakas et al., 1998, p. 129). General CSE can be thought of as a collection of all specific CSEs (e.g., word processor CSE, spreadsheet CSE, etc.) accumulated over time. As such, it may be most useful as a predictor of general performance within the diverse domain of computer-related tasks and activities (Marakas et al., 1998).
Initial examination of the two measures suggests they go about capturing CSE in decidedly different ways. The measures differ in item number, item content, and the dimensions of self-efficacy assessed. Examples of each measure are provided in the appendix. The development and initial test of the Compeau and Higgins (CHCSE) measure was originally reported in a 1995 study. It consists of ten items, ordered in ascending level of difficulty, which ask if an individual could complete an undefined task using an undefined software package. For affirmative ("YES") responses, individuals are then asked to indicate their confidence in that conviction on a 10-point, Likert-type confidence scale (scale anchors: 1 = "not at all confident", 10 = "totally confident"). The measure captures two dimensions of CSE. CSE magnitude is operationalized as the number of "YES" answers to the confidence scale items. CSE strength can be operationalized as the sum or mean of the responses on the confidence scale, counting 0 for each "NO" response.
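The two-dimensional scoring scheme just described can be sketched in code. The following is an illustrative Python sketch, not the authors' instrument software; the function name and the response encoding (None for a "NO" answer, an integer 1-10 for the confidence given with a "YES" answer) are assumptions made here for clarity.

```python
def score_chcse(responses):
    """Score one respondent on the ten-item Compeau & Higgins measure.

    responses: list of 10 entries, one per item. None encodes a "NO"
    answer; an integer 1-10 encodes the confidence rating given with
    a "YES" answer.

    Returns (magnitude, strength): magnitude counts the "YES" answers,
    strength is the mean confidence with 0 counted for each "NO".
    """
    if len(responses) != 10:
        raise ValueError("the CHCSE measure has ten items")
    magnitude = sum(1 for r in responses if r is not None)
    strength = sum(r if r is not None else 0 for r in responses) / len(responses)
    return magnitude, strength
```

For example, a respondent answering "YES" on seven items receives a magnitude of 7 and a strength equal to the mean of the seven confidence ratings and three zeros.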
The development and initial test of the Murphy et al. (MCSE) measure was originally reported in a 1989 study. The MCSE measure consists of thirty-two items that reflect a variety of computer-related skills and knowledge. Each item begins with the statement "I feel confident...".
Individuals are asked to indicate their level of confidence via a 5-point Likert-type scale (scale anchors: 1 = "strongly disagree", 5 = "strongly agree"). The MCSE measure captures the CSE dimension of strength only. CSE strength can be operationalized as the sum or mean of the responses to the scale items.
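MCSE scoring is correspondingly simpler, since only strength is captured. A minimal Python sketch (again illustrative, with the function name and input validation assumed here, not taken from the original instrument):

```python
from statistics import mean

def score_mcse(responses):
    """Score one respondent on the 32-item Murphy et al. measure.

    responses: list of 32 integers, each a 1-5 Likert rating.
    Returns CSE strength as the mean item rating.
    """
    if len(responses) != 32 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("the MCSE measure has 32 items rated 1-5")
    return mean(responses)
```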
Despite the apparent differences in measurement approach, both measures have provided insights into computing behaviors and reasonable evidence of formal validation. The Compeau and Higgins (1995) study that developed and tested the CHCSE measure found significant relationships between CSE and an individual's outcome expectations regarding computer use, emotional reactions to computers, and actual computer use. CSE was positively related to expectations of computer success, attitudes toward computers, and actual computer use, and negatively related to computer anxiety (Compeau & Higgins, 1995). A subsequent longitudinal study of individual reactions to computing that employed the same CHCSE measure confirmed many of the original study findings and demonstrated that CSE remained a strong predictor of an individual's affect and computer use one year later (Compeau, Higgins, & Huff, 1999). Compeau and Higgins have offered evidence of the construct validity of their CSE measure. The measure has demonstrated high internal consistency (reliability) and empirical distinctiveness (discriminant validity), and was related as predicted to other constructs (nomological validity) (Compeau & Higgins, 1995, 1999).
The Murphy et al. (1989) study that developed and tested the MCSE measure found that the 32 scale items loaded on three factors: beginning level, advanced level, and mainframe skills. While subsequent analysis of the measure provides mixed support for the three factors originally identified (Marakas et al., 1998), the MCSE measure has provided insight into computing behaviors of use and interest to computer educators and trainers. Harrison and Ranier (1992a) found that age was negatively related to CSE, computer experience was positively related to CSE, and males had significantly higher levels of CSE than females. Subsequent studies have shown that participation in semester-long computer training can increase an individual's level of CSE regardless of age or gender (Karsten & Roth, 1998a, 1998b; Torkzadeh & Koufteros, 1994). A study that employed the MCSE measure also found a significant, positive relationship between CSE and computer-dependent course performance (Karsten & Roth, 1998a).
Studies employing the MCSE measure have also found CSE to be negatively related to computer anxiety and positively related to affirmative attitudes toward computers (Harrison & Ranier, 1992b). Such findings are offered as evidence of concurrent validity (Harrison & Ranier, 1992b). The MCSE measure has consistently demonstrated high internal consistency (reliability) as well (Murphy et al., 1989; Harrison & Ranier, 1992a, 1992b; Torkzadeh & Koufteros, 1994).
In sum, the CHCSE and MCSE measures differ in content, length, and the CSE dimensions assessed. In spite of these apparent differences, both measures have been employed with some success in multiple studies, and reasonable evidence in support of the construct validity of each measure has been offered. A review of the literature, however, suggests that a test of the convergent validity of the two measures has yet to be conducted (Compeau & Higgins, 1995). The methodology employed in the comparison is presented next.
METHODOLOGY
Research subjects consisted of students enrolled in an introductory information systems course at a midwestern university. Participation was voluntary, and confidentiality was guaranteed. The survey questionnaire was administered the first day of class, before computer training commenced. The questionnaire collected two types of demographic data, AGE and GENDER, that have been significantly associated with CSE in previous research (Harrison & Ranier, 1992a; Marakas et al., 1998). In addition, three separate measures of self-reported computer use were collected: total years of computer experience (COMPEX), the number of prior computer courses completed that included computer training and use (COURSES), and the frequency of normal computer use prior to enrollment (NORMUSE), categorized as daily, weekly, monthly, or less than monthly. Again, all three variables have exhibited significant relationships with CSE in prior studies (Harrison & Ranier, 1992a; Karsten & Roth, 1998a, 1998b). The categorical variables GENDER and NORMUSE were dummy coded when used in the regression analyses (Pedhazur, 1982).
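The dummy-coding step can be sketched as follows. This is an illustrative Python sketch, not the study's statistical package; the function name is assumed, and the choice of "less than monthly" as the reference category is an assumption for illustration, since the study does not state which NORMUSE category served as the reference for dummies D1-D3.

```python
def dummy_code(values, reference):
    """Dummy-code a categorical variable for regression analysis
    (Pedhazur, 1982): one 0/1 indicator column per non-reference
    category; the reference category is the all-zeros row.
    Categories are ordered by first appearance in the data.
    """
    levels = [v for v in dict.fromkeys(values) if v != reference]
    return {level: [1 if v == level else 0 for v in values]
            for level in levels}

# Illustrative NORMUSE data: coding with "less than monthly" as the
# reference yields three dummies, analogous to D1-D3 in Tables 3 and 4.
use = ["daily", "weekly", "monthly", "less than monthly", "daily"]
dummies = dummy_code(use, reference="less than monthly")
```

A k-category variable thus contributes k - 1 indicator columns to the regression, which is why four NORMUSE categories appear as three dummy variables.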
The questionnaire alternated the presentation of the CSE measures to prevent any ordering effect. The ten-item Compeau and Higgins (1995) measure, slightly modified to acknowledge its use in an academic setting, was originally scored in the two ways recommended by the authors. CSE magnitude was determined by counting the number of "YES" answers. CSE strength was assessed by summing the responses on the confidence scale, counting zero for each "NO" response. Preliminary analysis found the measure of CSE strength to be much more informative than the measure of CSE magnitude: the magnitude score did not increase the explanatory power of the instrument when examined alone or in conjunction with CSE strength. Consequently, CSE as measured by the Compeau and Higgins measure (CHCSE) is the computed mean score on the confidence scale items, and reflects CSE strength. A higher mean CHCSE score indicates a higher level of CSE.
The thirty-two item Murphy et al. (1989) measure assesses CSE strength only. CSE strength (MCSE) was the computed mean score on the scale items. In similar fashion to the previous measure, a higher mean MCSE score indicates a higher level of CSE. In sum, both CHCSE and MCSE are measures of CSE strength.
Correlational analysis (Pearson r) was first used to assess the relationship between the two measures. In separate multiple regression analyses, each measure was then regressed on AGE, GENDER, COMPEX, COURSES and NORMUSE to determine the relationship of these variables to CHCSE and MCSE. The results of the independent regression analyses permitted the comparison of the two measures in terms of the amount of variance explained. Where appropriate, t-tests, ANOVA, and post hoc comparisons were also used to determine the significance and direction of the relationships among the study variables of interest.
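The first analytic step, the Pearson product-moment correlation between the two measures, follows the standard formula r = cov(x, y) / (s_x s_y). A minimal self-contained Python sketch (illustrative only; the study would have used a statistical package for this):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Sum of cross-products of deviations (numerator)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # Square roots of the sums of squared deviations (denominator)
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applied to each respondent's CHCSE and MCSE scores, this yields a value between -1 and +1; the study's observed r = .741 would indicate a strong positive linear relationship.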
RESULTS AND DISCUSSION
Both measures demonstrated high internal consistency. The reliability coefficients (Cronbach's alpha) for the CHCSE and MCSE measures were .935 and .965, respectively. As shown in Table 1, the study sample consisted of 176 individuals: 101 males (57.4%) and 75 females (42.6%). The mean age of the respondents was 20.3 years, reflecting the fact that most students in the course that provided the sample enroll in their sophomore or junior year. Table 1 also shows that while students on average reported a considerable number of years of computer experience (Mean = 5.31), over 70% indicated they had taken two or fewer courses providing computer instruction and requiring computer use (Mean = 1.14). In regard to normal computer use, 27.8% reported using a computer daily, 50% weekly, 13.6% monthly, and 8.6% less than once a month. Finally, Table 1 displays the scale range and computed overall mean for each measure (CHCSE Mean = 5.69, SD = 1.89; MCSE Mean = 3.53, SD = .69).
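The reported reliability coefficients follow the standard Cronbach's alpha formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of scale items. A minimal Python sketch with illustrative toy data (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one list per item, each holding that item's scores
    across all respondents (all lists the same length).
    """
    k = len(item_scores)
    # Total scale score for each respondent
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(pvariance(col) for col in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Perfectly correlated items yield alpha = 1.0; the .935 and .965 values reported above indicate very high internal consistency for both measures.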
Table 2 presents the study variable correlations. Of importance, the correlation between CHCSE and MCSE scores was significant and positive (r = .741, p < .05). This initial comparison provides evidence that a strong relationship exists between the two measures of CSE. Examination of the bivariate correlations of the remaining non-categorical variables with CHCSE and MCSE indicated that neither measure was significantly correlated with age. Significant, positive correlations between prior computer COURSES and both CHCSE (r = .318, p < .05) and MCSE (r = .345, p < .05) were also indicated. Prior computer experience was significantly correlated with the MCSE measure only (r = .152, p < .05).
The results of the separate regression analyses conducted for the CHCSE measure and the MCSE measure are displayed in Table 3 and Table 4, respectively. As shown in Table 3, the independent variables accounted for approximately 29% of the variance in CHCSE (Adjusted R Square = .290, F = 11.21, p < .001). In contrast, Table 4 shows that the same independent variables explained approximately 35% of the variance in MCSE (Adjusted R Square = .350, F = 14.28, p < .001).
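The adjusted R Square values reported in Tables 3 and 4 penalize R Square for the number of predictors. A sketch of the standard formula follows; note that the predictor count of seven (AGE, GENDER, COMPEX, COURSES, and the three NORMUSE dummies D1-D3) is inferred here from the tables, not stated explicitly by the study.

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1),
    where n is the sample size and k the number of predictors.
    """
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

With the study's n = 176 and k = 7, the CHCSE model's R Square of .318 adjusts to approximately .290, matching Table 3.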
While the proportion of variance accounted for by the independent variables differs, the independent variables exhibited similar relationships, in terms of significance, with each measure. Age was not a significant predictor of CHCSE or MCSE, a finding that is not surprising given the homogeneity of the sample, as demonstrated by the measures of central tendency for AGE provided in Table 1.
Gender, however, was significantly related to both measures. The follow-up t-test analyses displayed in Table 5 found significant differences in CSE between males and females on both measures. Males had significantly higher levels of CSE on the CHCSE measure (Male Mean = 5.99, SD = 1.88; Female Mean = 5.31, SD = 1.88; p = .020) as well as the MCSE measure (Male Mean = 3.64, SD = .68; Female Mean = 3.38, SD = .69; p = .017). Marakas et al. (1998) have noted that while the majority of CSE studies they reviewed acknowledge the known relationship between gender and CSE, such studies are often characterized by an imbalance of subjects with regard to gender. Whether the ratio of male subjects (101) to female subjects (75) constitutes an imbalance that contributes to the observed gender effect is unknown.
Two of the three measures of computer use, COURSES and NORMUSE, were significant predictors of CHCSE and MCSE. The number of prior courses that required computer training and use and the frequency of normal computer use were significantly related to both dependent variables. Years of computer experience, which showed a significant, positive bivariate correlation with the MCSE measure, was not a significant predictor; its effect may have been subsumed by the other measures of computer use. These results are in keeping with prior research: it is not the quantity of computer experience, but rather the quality of that experience in terms of its influence on efficacy beliefs, that appears to matter (Hill et al., 1987; Karsten & Roth, 1998a).
In order to better understand the finding regarding normal computer use, separate ANOVAs were conducted and post hoc Bonferroni pairwise comparisons (Keppel, 1991) were made to determine the source of the significant relationship with the CHCSE and MCSE measures. As Table 6 shows, respondents who used computers on a daily basis had significantly higher CHCSE scores (Mean = 7.089, SD = 1.557) and MCSE scores (Mean = 4.116, SD = .600) than did the members of the other three groups (p < .05). It is interesting, and consistent with what CSE research would predict, that declining mean CSE scores correspond with declining frequency of computer use. No significant differences were found in pairwise comparisons of the other group means.
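The Bonferroni procedure used in these post hoc comparisons controls the familywise error rate by dividing the overall significance level by the number of pairwise comparisons. A minimal sketch (the function name is assumed here for illustration):

```python
from math import comb

def bonferroni_alpha(alpha, n_groups):
    """Per-comparison significance level for all pairwise comparisons
    among n_groups group means, holding familywise error at alpha.
    """
    n_comparisons = comb(n_groups, 2)  # e.g., 4 groups -> 6 pairs
    return alpha / n_comparisons
```

With the four NORMUSE categories there are six pairwise comparisons, so each is tested at .05 / 6, roughly .0083, rather than .05.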
CONCLUSIONS, LIMITATIONS, AND DIRECTIONS FOR FUTURE RESEARCH
This study had several objectives. The first objective was to compare two frequently employed measures of CSE to determine if they capture the same construct. The results of the analyses conducted in this study offer evidence that they do, at least with respect to the sample and study variables employed here. Visual examination of the Compeau and Higgins (1995) measure suggests an apparent focus on efficacy with a computer application. The Murphy et al. (1989) measure, on the other hand, appears more concerned with efficacy regarding basic computer skills and knowledge. Yet the analysis suggests both are tapping into what Marakas et al. (1998) refer to as general CSE, an individual's judgment of efficacy across multiple computer domains.
The second objective was to determine which of the two measures appears to be more practically informative in an applied sense. For the study sample and study variables employed here, the Murphy et al. (1989) measure appeared to have some advantages over the Compeau and Higgins (1995) measure. As reported, the same set of study variables accounted for more variance in the Murphy et al. measure. In addition, a simple inspection of the mean scores for individual items on the Murphy measure offers meaningful insight into activity-specific perceptions of efficacy. Analysis of scale items may permit computer educators or trainers to pinpoint specific training needs (Gist, 1987). On the other hand, the computer application-job accomplishment orientation of the Compeau and Higgins measure might be more informative when used to assess the CSE of more sophisticated computer users, or where efficacy in the use of software packages is the prime concern.
The final objective was to identify any apparent advantages or disadvantages in measure administration, data collection, or analysis. Both measures take little time to complete: less than fifteen minutes was required to collect the demographic data and all the measures employed in this study. As mentioned in the methodology section, the Compeau and Higgins measure of CSE magnitude (the number of "YES" responses to confidence scale items) did not yield additional insight and was discarded from this analysis. Since the magnitude dimension of CSE has been valuable in other research (Bandura, 1997), this finding may be limited to this study. On a related matter, however, a change in the manner in which data are collected is recommended. Rather than requiring "YES" or "NO" responses for each item and subsequently counting "NO" as zero, it would seem more intuitive for the respondent, and an aid to data entry, to modify the confidence scales to a 0-to-10 format. The latter format is equivalent and acceptable according to Bandura (1997).
The main concern regarding the Murphy et al. measure would be maintaining its timeliness and relevance to current and future computing skills. Items such as "I feel confident writing simple programs for the computer" or "I feel confident working on a mainframe computer" seem less likely to help define CSE as time passes.
The limitations of this study have already been alluded to, and suggest directions for future research. The study sample consisted of traditional college students, the majority of whom were 19 to 20 years of age. Comparing the measures using a more diverse sample is recommended, and may offer additional insight. In addition, employing other measures of computer use, as well as objective measures of skill and performance in the comparison is desirable. Finally, while the two measures analyzed here have offered meaningful insight into the factors affecting computer use, the continued development, improvement and refinement of these and other measures is necessary to enhance our understanding of the CSE construct.
APPENDIX
Computer Self-Efficacy Scale (Murphy, Coover & Owen, 1989)
Five-point Likert-type scale: 1 = Strongly Disagree, 5 = Strongly Agree
I feel confident entering and saving data (words and numbers) into a file.
I feel confident calling up a data file to view on a monitor screen.
I feel confident storing software correctly.
I feel confident handling a floppy disk correctly.
I feel confident escaping/exiting from a program or software.
I feel confident making selections from an on-screen menu.
I feel confident copying an individual file.
I feel confident using the computer to write a letter or essay.
I feel confident moving the cursor around the monitor screen.
I feel confident working on a personal computer (microcomputer).
I feel confident using a printer to make a "hardcopy" of my work.
I feel confident getting rid of files when they are no longer needed.
I feel confident copying a disk.
I feel confident adding and deleting information to and from a data file.
I feel confident getting software up and running.
I feel confident organizing and managing files.
I feel confident understanding terms/words relating to computer software.
I feel confident understanding terms/words relating to computer hardware.
I feel confident describing the function of computer hardware (keyboard, monitor, disk drives, processing unit).
I feel confident troubleshooting computer problems.
I feel confident explaining why a program (software) will or will not run on a given computer.
I feel confident understanding the three stages of data processing: input, processing, output.
I feel confident learning to use a variety of programs (software).
I feel confident using the computer to analyze number data.
I feel confident learning advanced skills within a specific program (software).
I feel confident using the computer to organize information.
I feel confident writing simple programs for the computer.
I feel confident using the user's guide when help is needed.
I feel confident getting help for problems in the computer system.
I feel confident logging onto a mainframe computer system.
I feel confident logging off a mainframe computer system.
I feel confident working on a mainframe computer.
Modified Computer Self-Efficacy Scale (from Compeau & Higgins, 1995)*
Often in your courses you are told about software packages that are available to make coursework easier. For the following questions, imagine that you were given a new software package for a course assignment. It doesn't matter specifically what this software package does, only that it is intended to make your assignment easier and that you have never used it before. The following questions ask you to indicate whether you could use this unfamiliar software package under a variety of conditions. For each of the conditions, please indicate whether you would be able to complete the assignment using the software package. Then, for each condition that you answered "YES," please rate your confidence about your first judgment by circling a number from 1 to 10, where 1 indicates "Not At All Confident," 5 indicates "Moderately Confident," and 10 indicates "Totally Confident."
For example, consider the following sample item:
I COULD COMPLETE THE ASSIGNMENT USING THE SOFTWARE PACKAGE ...
... if there was someone giving me step-by-step instructions.
YES / NO    Confidence: 1  2  3  4  5  6  7  8  9  10
The sample response shows the individual felt he or she could complete the assignment using the software with step-by-step instructions (YES is checked) and was moderately confident he or she could do it (5 is circled).
Each of the following items is answered in the same YES/NO plus 1-10 confidence format:
... if there was someone around to tell me what to do.
... if I had never used a package like it before.
... if I had only the software manuals for reference.
... if I had seen someone else using it before trying it myself.
... if I could call someone for help if I got stuck.
... if someone else helped me get started.
... if I had a lot of time to complete the assignment for which the software was intended.
... if I just had a built-in help facility for assistance.
... if someone showed me how to do it first.
... if I had used similar packages before this one to do the same assignment.
* Scale items have been compressed for presentation here.
REFERENCES
Bandura, A. (1997). Self-Efficacy: The Exercise of Control. New York: W. H. Freeman & Company.
Bandura, A. (1986). Social Foundations of Thought and Action. Englewood Cliffs: Prentice Hall.
Bandura, A., Adams, N.E. & Beyer, J. (1977). Cognitive processes mediating behavioral change. Journal of Personality and Social Psychology, 35(3), 125-139.
Barling, J. & Beattie, R. (1983). Self-efficacy beliefs and sales performance. Journal of Organizational Behavior Management, 5, 41-51.
Compeau, D.R. & Higgins, C.A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19(2), 189-211.
Compeau, D.R., Higgins, C.A., & Huff, S. (1999). Social cognitive theory and individual reactions to computing technology: A longitudinal study. MIS Quarterly, 23(2), 145-158.
Gist, M.E. (1987). Self-efficacy: Implications for organizational behavior and human resource management. Academy of Management Review, 12(3), 472-485.
Gist, M.E. & Mitchell, T.R. (1992). Self-efficacy: A theoretical analysis of its determinants and malleability. Academy of Management Review, 17(2), 183-211.
Gist, M.E., Schwoerer, C. & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74(6), 884-891.
Harrison, A.W. & Ranier, R.K. (1992a). The influence of individual differences on skill in end-user computing. Journal of Management Information Systems, 9(1), 93-111.
Harrison, A.W. & Ranier, R.K. (1992b). An examination of the factor structures and concurrent validities for the computer attitude scale, the computer anxiety rating scale, and the computer self-efficacy scale. Educational and Psychological Measurement, 52(3), 735-745.
Hill, T., Smith, N.D. & Mann, M.F. (1987). Role of self-efficacy expectations in predicting the decision to use advanced technologies: The case of computers. Journal of Applied Psychology, 72(2), 307-313.
Karsten, R. & Roth, R.M. (1998a). The relationship of computer experience and computer self-efficacy to performance in introductory computer literacy courses. Journal of Research on Computing in Higher Education, 31(1), 14-24.
Karsten, R. & Roth, R. M. (1998b). Computer self-efficacy: A practical indicator of student competency in introductory IS courses. Informing Science, 1(3), 61-68.
Keppel, G. (1991). Design and Analysis: A Researcher's Handbook. New Jersey: Prentice Hall.
Marakas, G.M., Yi, M.Y. & Johnson, R.D. (1998). The multilevel and multifaceted character of computer self-efficacy: Toward clarification of the construct and an integrative framework for research. Information Systems Research, 9(2), 126-163.
Murphy, C.A., Coover, D. & Owen, S.V. (1989). Development and validation of the computer self-efficacy scale. Educational and Psychological Measurement, 49, 893-899.
Pedhazur, E.J. (1982). Multiple Regression in Behavioral Research: Explanation and Prediction. Fort Worth: Holt, Rinehart and Winston.
Torkzadeh, G. & Koufteros, X. (1994). Factorial validity of a computer self-efficacy scale and the impact of computer training. Educational and Psychological Measurement, 54(3), 813-821.
Rex Karsten, University of Northern Iowa
Table 1: Study Variables

Gender (GENDER)
            Males    Females   Total Participants
N           101      75        176
%           57.4     42.6      100.0

Age (AGE)
Years       < 19    19-20   21-22   > 22    Mean   Median   Mode
N           8       103     43      22      20.3   20.0     20
%           4.6     58.6    24.4    12.4

Computer Experience (COMPEX)
Years       < 2     2-3     4-5     > 5     Mean   Median   Mode
N           26      36      58      56      5.31   4.00     4.0
%           14.8    20.4    32.9    31.9

Prior Computer Courses (COURSES)
# Courses   0       1-2     3-4     > 4     Mean   Median   Mode
N           84      43      25      24      1.14   1.00     0
%           47.7    24.4    14.3    13.6

Normal Computer Use (NORMUSE)
Frequency   Daily   Weekly   Monthly   < Monthly
N           49      88       24        15
%           27.8    50.0     13.6      8.6

Computer Self-Efficacy Scores
Measure                      Item Scale Range   Mean   St. Dev.
Compeau & Higgins (CHCSE)    0-10               5.69   1.89
Murphy et al. (MCSE)         1-5                3.53   0.69

Table 2: Study Variable Correlations

          CHCSE     MCSE      GENDER   AGE      COMPEX   COURSES   NORMUSE
CHCSE     1.000
MCSE      .741 *    1.000
GENDER    .188 *    .176 *    1.000
AGE       -.020     -.018     .187 *   1.000
COMPEX    .111      .152 *    .091     .008     1.000
COURSES   .318 *    .345 *    .127     .001     .056     1.000
NORMUSE   -.444 *   -.470 *   .040     -.075    -.173    -.183 *   1.000

Significance level: * p < .05 (GENDER and NORMUSE are categorical variables)

Table 3: Multiple Regression Analysis, Dependent Variable = CHCSE (Compeau & Higgins)

Multiple R = .564   R Square = .318   Adjusted R Square = .290
F = 11.21   Significance of F = .000 *

Variables      B       Std. Beta   T        Sig T
AGE            -.002   -.079       -1.205   .230
GENDER         .680    .191        2.868    .005 *
COMPEX         .001    .010        .148     .882
COURSES        .149    .197        2.980    .003 *
D1 (NORMUSE)   2.520   .598        5.222    .000 *
D2             1.010   .267        2.250    .026
D3             .348    .063        .659     .511

Significance level: * p < .01

Table 4: Multiple Regression Analysis, Dependent Variable = MCSE (Murphy et al.)

Multiple R = .613   R Square = .367   Adjusted R Square = .350
F = 14.28   Significance of F = .000 *

Variables      B       Std. Beta   T        Sig T
AGE            -.001   -.070       -1.096   .275
GENDER         .218    .167        2.601    .010 *
COMPEX         -.001   .053        .852     .395
COURSES        -.005   .195        2.980    .003 *
D1 (NORMUSE)   .992    .647        5.725    .000 *
D2             .316    .229        1.953    .053
D3             .194    .097        1.032    .304

Significance level: * p < .01

Table 5: t-test Analyses for CSE Differences Based on GENDER

Measure = CHCSE
           N     Mean   St. Dev.   t-value   p
Males      101   5.99   1.88
Females    75    5.31   1.88       -2.35     .020 *

Measure = MCSE
           N     Mean   St. Dev.   t-value   p
Males      101   3.64   .68
Females    75    3.38   .69        -2.41     .017 *

Significance level: * p < .05

Table 6: NORMUSE Means by Category

CHCSE
NORMUSE     N     Mean    St. Dev.   Mean Difference *
Daily       49    7.089   1.557
Weekly      88    5.362   1.594      1.727 **
Monthly     24    4.912   1.700      2.177 **
< Monthly   15    4.320   2.324      2.769 **
Overall     176   5.693   1.895

MCSE
NORMUSE     Mean    St. Dev.   Mean Difference *
Daily       4.116   .600
Weekly      3.348   .583       .767 **
Monthly     3.293   .458       .823 **
< Monthly   3.026   .719       1.089 **
Overall     3.531   .691

* Mean difference from the Daily group. Post hoc Bonferroni pairwise comparisons indicated significant mean differences between the Daily mean and every other group mean; no significant differences were found among the other group means.
Significance level: ** p < .05
Publication: Academy of Educational Leadership Journal
Date: Jan 1, 2000