Preservice teachers' self-assessment using technology: determining what is worthwhile and looking for changes in daily teaching and learning practices.
Schubert (2000) posed a basic curricular question of "What is worth-while?" in a journal dealing with curriculum inquiry. This timeless question is one with which educators constantly struggle. Professionals in the field of technology and teacher education attempt to address this question through careful study and dissemination of findings from the variety of ways in which preservice teachers are prepared to use educational technologies in their teaching and learning. Yet is the work being done resulting in new daily teaching and learning practices by preservice and inservice teachers? Advocates and critics alike are vocal when discussing the integration and effectiveness of computers and other educational technologies in today's curricula (Becker, 2000; Marcinkiewicz, 1993; Cuban, Kirkpatrick, & Peck, 2001). By sharing findings from studies, educational technologists in teacher education can gain broader perspectives of various strategies to determine what is worthwhile for different learners with respect to the integration of technologies into teacher education programs and whether these initiatives are leading to changes in practice. These studies should provide evidence of how educational technologies can enhance the learning of all students in a variety of academic environments and areas as well as how changes in practice are occurring (or not occurring, if that is the case). One way to start is to carefully examine teacher education students' perceptions of how successful they can be in using computers in their daily lives and in teaching and whether these perceptions indicate a change in beliefs or daily practice. Such a study took place at the University of Florida. 
The findings of the study provided data for significant conversations among the faculty about the course Integrating Technology into the Curriculum with respect to the teacher education program and about what kinds of questions could be asked, and studies conducted, to bring a new perspective on how educational technologies enhance teaching and learning.
REVIEW OF LITERATURE
The examination of preservice teachers' perceived use of educational technology and their perceived growth at the conclusion of the course Integrating Technology into the Curriculum was the primary focus of this study. Determining whether preservice teachers could and would change when they have an opportunity to teach was one of the underlying themes. Addressing the willingness to change standard teaching practice is fundamental to answering questions about whether the course content and activities are worthwhile. Therefore, a brief look at educational change literature, specifically teacher change, is appropriate to determine realistic expectations. Change is certainly a constant in educational systems and, as Pearl Bailey said, "We must change in order to survive" (Ramsey, 2001, p. 18). Because of the complexities and messiness of teaching, learning, and educational systems in general, change is not easily understood. Yet, numerous educational change models exist to guide change agents in working with specific components of the change process. For example, Rogers' Diffusion of Innovations (1995) examined the innovation itself, Ely's Conditions of Change (1976, 1990) considered environmental factors that influence change, Fullan and Stiegelbauer's (1991) The New Meaning of Educational Change explored the roles of the change agent, and Zaltman and Duncan's (1977) Strategies for Planned Change explored resistance to change. Two change models that are particularly relevant to the intended adopters (preservice teachers) and the innovation (the use of educational technologies) are Hall and Associates' (1973) Concerns-Based Adoption Model (CBAM) and the instructional evolution model developed from the Apple Classroom of Tomorrow (ACOT) project.
Concerns-Based Adoption Model (CBAM)
Hall, Wallace, and Dossett first proposed the CBAM in 1973. It is a model with three diagnostic dimensions that allows the change agent to specifically look at the intended user of the innovation. The three diagnostic dimensions of the CBAM are the Stages of Concern, the Levels of Use, and the Innovation Configuration.
The Stages of Concern dimension allows for the exploration of potential adopters' perceived needs and perceptions about the innovation. Hall and Associates (1973) identified a set of seven specific stages of concerns about an innovation. These stages are awareness, informational, personal, management, consequence, collaboration, and refocusing. These stages deal with the affective part (people's reactions, perceptions, feelings, and attitudes) of the change. "The research studies clearly document that there is a quasi-developmental path to the concerns as a change process unfolds. However, the flow of concerns is not always guaranteed, nor does it always move in one direction" (Hall & Hord, 2001, p. 63).
The Levels of Use (LoU) dimension examines the behaviors of the intended user and provides insight into how the intended user is acting with respect to the innovation. "The LoU framework makes it possible to understand and predict what is likely to occur with people in change. Facilitators who understand and apply the LoU concept and its measures are able to provide appropriate interventions that will be relevant and helpful to those involved, or expected to be involved, in change" (Hall & Hord, 2001, p. 81).
The third dimension of the CBAM is the Innovation Configuration (IC) dimension, which explores what the developer had in mind for the innovation and compares that with what is operationalized by the intended user. "The focus in the IC diagnostic dimension is on developing and applying word-picture descriptions of what the use of an innovation can look like" (Hall & Hord, 2001, p. 38). A benefit of using the IC is that it encourages consensus building on what and how the use of the innovation will be for a particular educational system. This in turn provides for a shared vision among the stakeholders in the educational system. By using the three diagnostic dimensions of the CBAM, the change facilitator can study intended users in order to implement effective and appropriate intervention strategies.
Apple Classroom of Tomorrow (ACOT)
The Apple Classroom of Tomorrow (ACOT) project began in 1985 with the question "What happens to teaching and learning activities when students and teachers have access to technology whenever they want or need it?" (Fisher, Dwyer, & Yocam, 1996, p. 2). Apple Computer provided computers and training for the teachers while the school districts involved in the project provided staffing and infrastructure to the classrooms. Numerous threads of research emerged from the decade-long project and goals continuously evolved. One of the research areas that emerged was the Stages of Instructional Evolution model that describes the patterns of teaching and learning that develop over time. The ACOT model stages are: entry, adoption, adaptation, appropriation, and invention.
At the entry stage, teachers are busy setting up the equipment and developing strategies for students to physically access the technology in the classroom environment. In the adoption stage, teachers show concern about how to integrate the technology into the curriculum. The adaptation stage is where the new technology becomes thoroughly integrated into normal classroom practice. The ACOT researchers found that student productivity was a major theme at this stage. The appropriation stage is described by the ACOT researchers as more of a personal change in attitude than actions. "Appropriation is the point at which an individual comes to understand technology and use it effortlessly as a tool to accomplish real work" (Sandholtz, Ringstaff, & Dwyer, 1997, p. 42). Finally, the invention stage is where teachers create and experiment with new strategies for teaching, learning, and communicating with students and teachers.
The CBAM and ACOT models, focusing specifically on teacher change, influenced ideas about how preservice teachers use educational technologies in the teaching and learning environment. These models guided the data collection, assisted in the interpretation of the research findings, and provided ideas for appropriate responses to study findings.
DESCRIPTION OF THE STUDY
The University of Florida participated in a national data collection project conducted at the University of North Texas Institute for the Integration of Technology into Teaching and Learning (http://www.iittl.unt.edu/). As part of this project, preservice teachers from educational institutions around the nation were surveyed regarding their perceptions about using educational technologies as a learner and preservice teacher. The leaders of this project allowed institutions to analyze data from their own students. Hence, the study was developed to examine University of Florida preservice teachers' initial self-assessments of their use of instructional technology and perceived growth at the conclusion of an introductory course, allowing for a micro-level evaluation.
During the fall and spring semester of the 2002-2003 academic year, online surveys were administered to University of Florida students enrolled in sections of EME 4406: Integrating Technology into the Curriculum. These courses are designed for declared early childhood and elementary education majors as well as prospective secondary education majors. There are separate sections of EME 4406 for early childhood, elementary, and secondary students. Because the early childhood course was piloted during the spring 2003 semester, these students did not participate in this study. Elementary and secondary students enrolled in EME 4406 took the surveys twice each semester. The pretest was administered at the beginning of the semester and the posttest was administered at the conclusion of the semester. Students were not required to participate in this study; hence, not all students completed the surveys. In addition, some students took the pretest surveys but selected not to participate in the posttest surveys. Along with the collection of demographic information, the following surveys were administered to these preservice teachers: CBAM Level of Use of Technology (CBAM-LoU), Stages of Adoption of Technology (Stages), Apple Classroom of Tomorrow Instrument (ACOT), Teachers Attitudes Toward Computers (TAC), and the Technology Proficiency Self Assessment (TPSA).
In the next sections of this article, descriptions of the study instruments, the university as a whole, and a description of the study sample are provided.
DESCRIPTION OF INSTRUMENTS
During this study, students responded to items from five instruments developed and validated by researchers in the Institute for the Integration of Technology into Teaching and Learning at the University of North Texas (IITTL). Items from the instruments were compiled into a single online survey housed on the IITTL web server. The time estimated for students to respond to all instrument items was 30-45 minutes. In reality, students reported they completed the survey in less than 30 minutes. The following paragraphs briefly describe the instruments and provide reliability information. Copies of the instruments can be found in the book Instruments for Assessing Educator Progress in Technology Integration (Knezek, Christensen, Miyashita, & Ropp, 2000) located on the IITTL website (http://www.iittl.unt.edu).
The first three instruments used in this study provide general indications of the students' abilities to effectively use and integrate educational technologies into teaching and learning environments. These instruments were the CBAM Level of Use of Technology (LoU), the Stages of Adoption, and the Apple Classroom of Tomorrow (ACOT).
CBAM Level of Use of Technology (CBAM-LoU)
This instrument is a self-report measure describing the behaviors of innovators as they progress through the use of technology. It was adapted from the work of Hall, Loucks, Rutherford, and Newlove (1975), who created the LoU dimension as one component of the CBAM. As defined by Hall et al. (1975), the LoU dimension "describes the behaviors of the innovation user through various stages ... It should be noted that the LoU dimension is targeted toward describing behaviors of innovation users and does not at all focus on attitudinal, motivational, or other affective aspects of the user. The dimension does not attempt to explain causality" (p. 53). This version of the CBAM-LoU was created by Griffin and Christensen (1999). The eight levels of use are (0) Non-use, (1) Orientation, (2) Preparation, (3) Mechanical Use, (4A) Routine, (4B) Refinement, (5) Integration, and (6) Renewal.
Because the CBAM-LoU instrument is a single-item survey, internal consistency reliability measures cannot be calculated. Therefore, the item was asked twice to ensure there was an acceptable test-retest reliability coefficient.
Stages of Adoption
Stages of Adoption (Christensen, 1997) is a self-assessment instrument of a teacher's level of adoption of technology, based on earlier work by Russell (1995). Russell suggested adults pass through six stages of technology adoption and could begin at any stage and progress at their own pace. Russell's work has many similarities with Rogers' (1995) work on diffusion of innovations. In the Stages of Adoption instrument, the six possible stages in which educators rate themselves are (a) Awareness, (b) Learning the process, (c) Understanding and application of the process, (d) Familiarity and confidence, (e) Adaptation to other contexts, and (f) Creative application to new contexts.
Apple Classroom of Tomorrow Instrument (ACOT)
The Apple Classroom of Tomorrow Project was a collaborative research and development effort among public schools, universities, research agencies, and Apple Computer. This longitudinal study "set out to investigate how routine use of technology by teachers and students would affect teaching and learning" (Sandholtz et al., 1997, p. 3). The Apple Classroom of Tomorrow (ACOT) instrument includes five choices from which an educator can select the level at which he or she understands and uses technology. These levels include (a) Entry, (b) Adoption, (c) Adaptation, (d) Appropriation, and (e) Invention. The Stages of Instructional Evolution Model emerged from the ACOT project and has been used in numerous research studies around the world.
The remaining two instruments, the Teachers' Attitude Toward Computers (TAC) and the Technology Proficiency Self-Assessment Instrument (TPSA) provide more details about the students' perceptions of their abilities to use specific educational technologies in classroom situations.
Teachers' Attitude Toward Computers (TAC)
Christensen and Knezek (1997) developed the Teachers' Attitude Toward Computers instrument as part of a study of the "effects of technology integration education on the attitudes of teachers" (Knezek et al., 2000, p. 19). Students took version 6.1 of the TAC, which has 51 items using a 9-factor structure. The internal consistency reliability coefficients range from .84 (accommodation) to .97 (perception). The nine areas examined are (a) Interest: enjoyment and satisfaction in using computers, (b) Comfort: lack of anxiety; comfortable using technology, (c) Accommodation: acceptance of computers; willingness to learn, (d) E-mail: usefulness of e-mail with students, (e) Concern: fear that computers will have a negative impact on society, (f) Utility: belief that computers are useful for productivity and instruction, (g) Perception: overall feeling toward computers, (h) Absorption: belief that computers are a part of many areas of work and leisure, and (i) Significance: belief that computers are important for student use.
Technology Proficiency Self-Assessment Instrument (TPSA)
The TPSA was developed by Ropp (1999) and is a 20-item measure using 5-point Likert descriptors. The four areas covered in the instrument are e-mail, the World Wide Web (WWW or Web), integrated applications, and integrating technology into teaching. The instrument was found to have a reliability of .94. Alpha coefficients for the subscales range from .78 (e-mail) to .88 (teaching and technology). "Even though the content of the items on the Technology Proficiency Self-Assessment Instrument was tailored to teaching and learning with computers, the TPSA is essentially a measure of self-efficacy. Individuals are asked to rate their confidence in their ability to perform the tasks listed on the instrument" (Knezek et al., 2000, p. 41).
DESCRIPTION OF THE UNIVERSITY
The University of Florida is a public, comprehensive, land grant, research university with a talented and diverse student body. Enrollment for the 2002 fall semester totaled 48,184 with 80% in-state students. Seventy-three percent (73%) of enrolled students are undergraduates, 20% are graduate students, and 7% are in professional degree programs. Over 32% of UF students are minorities, with 9.9% Hispanic, 7.4% African American, and 6.9% Asian, Pacific Islander, or Native American.
DESCRIPTION OF SURVEY SAMPLE
During the 2002-2003 academic year, there were 15 sections of EME 4406 courses offered, seven sections of elementary and eight sections of secondary, with an average of 32 students per section. As in many education courses, 82.9% of students were female and there were more students in the elementary education program (58%) than the secondary program (42%). Table 1 provides specific demographic information about the participants in this study.
Overall, the age demographics of this course remained relatively stable across the fall and spring semesters. Student ages ranged from 18-55 across both semesters. The average age was 22.16 in the fall semester and 21.72 in the spring semester. The mode age of students was 21 years old across the two semesters of the study.
As evidenced by the demographic information on age, this sample could be considered a "traditional" group of college-aged students taking a senior level course. However, one should note that a large percentage of teacher education students at UF come from community college systems. This is particularly true with the Unified Elementary ProTeach population but is also a factor with the secondary students. Demographics indicate that 21.4% of our students come to our teacher education program with educational experiences that can be drastically different from the experience they would receive at UF. Specific information about this issue can be found in Table 2.
On the pretests, students reported that 99.24% (260) have a computer at home and 98.09% (257) have Internet access at home. For the posttests, 99.08% (217) of students reported having a computer at home with 98.63% (216) reporting Internet access. UF's admission policy requires students to have access to a computer at their dwelling place; hence, student access to computers is not a major problem.
In an attempt to gain a deeper understanding of the preservice teachers' perceptions regarding technology, survey items were included on self-assessments of computer use, computer literacy, the preservice teacher's ability to integrate technology into the curricula, and perceptions of how university faculty and K-12 students use technology in the teaching and learning process. A majority of our students (93% and 96% respectively for the pretests and posttests) reported using the computer more than 2 hours a week. On the pretest, 57% of students used a computer between 4-15 hours on a weekly basis; results from the posttest show that 67% used a computer for the same time period. Additional information can be found in Table 3. However, student perceptions of the frequency with which faculty and K-12 students use computers are quite different. It is the perception of our students that faculty do not use computers on a consistent basis. A majority of students (95% and 75% respectively) perceived that faculty use computers 3 or fewer hours a week. Students also perceived that K-12 students only occasionally use computers (52% and 48% respectively). Tables 4 and 5 provide additional information about student perceptions on these issues.
DATA ANALYSIS INFORMATION
Survey data was received in an Excel file from principal investigators of the Institute for the Integration of Technology into Teaching & Learning (IITTL) project at the University of North Texas and then imported into SPSS. Data in this survey is ordinal because the differences in coding are not represented by equal differences in the numbers assigned to categories (a property of interval data). However, because of the sample size and the assumptions of normality and homogeneity of variances for the respective population, parametric tests using the Student's t distribution and test statistic were run. The difference between a normal distribution and the t distribution was negligible because "as sample size increases, the difference between the normal distribution and the corresponding t distribution decreases. From a practical standpoint, the normal distribution is an adequate approximation of the t distribution when df exceeds 120" (Hinkle, Wiersma, & Jurs, 1994, p. 186). When running statistics for the fall and spring pretests and posttests, the sample size was 219, with 218 degrees of freedom. The alpha level for all statistical tests was .05. Effect size data is also included to provide another perspective on practical significance. Statistical findings from the pretests and posttests will be presented, followed by a discussion of the findings.
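The pre/post comparison described above can be sketched numerically: a paired-samples t statistic computed on the difference scores, plus Cohen's d as the effect size used for the practical-significance check. The ratings below are hypothetical illustrations, not the study's data:

```python
import math
import statistics

def paired_t_and_effect_size(pre, post):
    """Paired-samples t statistic (df = n - 1) and Cohen's d for pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)      # sample standard deviation of the differences
    t = mean_d / (sd_d / math.sqrt(n))  # paired-samples t statistic
    d = mean_d / sd_d                   # Cohen's d for paired data
    return t, d

# Hypothetical pre/post ratings on a 1-6 Stages of Adoption scale
pre = [2, 3, 2, 4, 3, 2, 3, 2, 4, 3]
post = [4, 4, 3, 5, 4, 3, 5, 3, 5, 4]
t, d = paired_t_and_effect_size(pre, post)
# With df > 120 the t distribution is close to normal, so |t| > 1.96
# indicates significance at the .05 level; d >= .3 is the rule of thumb
# for practical significance cited in the article.
```

In the study itself these tests were run in SPSS; the sketch only makes the underlying arithmetic explicit.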
PRESENTATION OF PRETESTS AND POSTTESTS FINDINGS
Overall Perceptions of Behaviors
General indications of preservice teachers' abilities to effectively use and integrate educational technologies into the teaching and learning environments were gathered from instruments that provide evidence of each preservice teacher's overall assessment of his or her abilities. This information represents the data gathered in the fall and spring pretests and posttests. Results from the LoU, the Stages of Adoption, and the ACOT instruments indicate that at the onset of the course students were at a beginning stage in their understanding, comfort levels, and use of technology in teaching and learning environments. As the semester progressed, students' perceptions of their competence and behaviors increased in these areas. For all three instruments, the difference in pretest and posttest means was statistically significant at the .05 level with an effect size above .3, which is often used as the rule of thumb when considering practical significance in educational technology (Bialo & Sivin-Kachala, 1996). Detailed statistical data can be found in the following tables and graphs.
Level of Use (LoU)
Table 6 indicates the difference in means is statistically significant at the .0001 level and the effect size is considered practically significant. Figure 1 shows students' change from the lower levels of the scale (from nonuse to mechanical use) at the pretest to more advanced uses (routine to renewal) at the posttest.
[FIGURE 1 OMITTED]
Stages of Adoption
Because the Stages of Adoption instrument is a one-item instrument, it was administered twice during each testing session and the two scores were averaged. Students selected one of six stages they believed best described their ability to use technology in the classroom.
The difference in means is statistically significant at the .05 level and the effect size indicates practical significance as indicated by the data in Table 7. Figure 2 shows that students' perceptions of their ability to use technology increased throughout the semester. The number of students at the beginning stages decreases and moves toward higher stages in the posttest.
[FIGURE 2 OMITTED]
Apple Classroom of Tomorrow (ACOT)
The Apple Classroom of Tomorrow instrument has five categories from which students selected the level that best described their ability to use and understand technology. Table 8 provides the statistical analysis of the data.
The difference in means is statistically significant at the .05 level and the effect size is of practical significance. Figure 3 shows that students' perceptions of their ability to use technology, specifically computers, increased during the semester. This supports the evidence found in the LoU and Stages of Adoption instruments.
[FIGURE 3 OMITTED]
Perceived Technological Proficiency in Specific Areas
Two of the instruments, the Teachers' Attitudes Toward Computers (TAC) and the Technology Proficiency Self-Assessment Instrument (TPSA), provide additional details about the preservice teachers' perceptions of their abilities to use educational technologies in the classroom. The TPSA is designed to show whether preservice teachers believe they can perform tasks involving e-mail, using the World Wide Web, integrated computer applications, and integrating technology into teaching. It looks at preservice teachers' self-efficacy for these tasks. The TAC, however, measures the "effects of technology integration education on the attitudes of teachers" (Knezek et al., 2000, p. 19). In other words, the instrument provides insight into whether our preservice teachers see the use of educational technologies as "worthwhile" in their daily lives and in the teaching and learning process and whether this could lead to new teaching practices.
Overall, survey results from these two instruments document some change in our students' perceptions of their ability to use computers in the classroom. Yet on the TAC, even though there were statistically significant differences in the means in the interest, utility, and absorption sections, the effect size was less than .3 indicating a lack of practical significance. (In educational technology studies, a general rule of thumb for showing practical significance is an effect size of .3 or greater [Bialo & Sivin-Kachala, 1996].) On the TPSA for all four sections the difference between the means was statistically significant yet the effect size for e-mail was well below the .3 level indicating little practical significance. Detailed statistical information along with sample items for the TAC and TPSA follow.
Teachers' Attitudes Toward Computers (TAC)
Using a Likert scale from 1 (strongly disagree) to 5 (strongly agree), teachers' attitudes regarding technology integration in nine areas were measured. The following sections describe each part of the TAC, provide the data tables (Tables 9-17), and present the statistical findings.
TAC Part 1: Interest. This section provided information on general attitudes toward working with and learning about computers. There are five items in this section of the instrument. A sample item is "I think that working with computers would be enjoyable and stimulating."
The t-test result indicates the difference in the means was significant at the .05 level yet the effect size does not reflect practical significance.
TAC Part 2: Comfort. This section of the instrument measures the anxiety or comfort level preservice teachers have with computers. Students responded to five items. An example of an item in this category is "I get a sinking feeling when I think of trying to use a computer."
The difference in the means was not significant at the .05 level. Overall, students remained comfortable with their ability to use computers.
TAC Part 3: Accommodation. This section of the TAC measures whether students feel the computer is a valuable part of their life. Again, students responded to five items. Two sample items are "If I had a computer at my disposal, I would try to get rid of it" and "I see the computer as something I will rarely use in my daily life." It is important to note that because of the negatively worded items, the coding is reversed, meaning that the higher numbers actually reflect a lack of negative feelings.
Differences in the means between the pretest and posttests were not significant at the .05 level. As can be seen, most preservice teachers had a positive perception of the computer and this remained stable throughout the semester.
TAC Part 4: E-mail. The e-mail portion of the TAC measures students' perceptions of how effective e-mail is as a tool for teaching and learning. This five-item portion of the survey has items similar to "The use of e-mail makes the student feel more involved" and "The use of e-mail helps provide a better learning experience."
Again, the difference in means for this portion of the TAC was not significant at the .05 level. Preservice teachers remained stable in their perception of the use of e-mail for students in K-12 classrooms. It is interesting to note that many students stayed close to the choice of undecided (coded as a 3).
TAC Part 5: Concern. The Concern section of the TAC measures whether computers are harmful or beneficial to society. This section of the survey has eight items. Examples of items are "Computers are changing the world too rapidly" and "Computers isolate people by inhibiting normal social interactions among users."
The difference between the pretests and posttests means was not significant at the .05 level. Again, preservice teachers' perceptions remained relatively constant and again, many chose undecided (coded as a 3) as their response for these items.
TAC Part 6: Utility. The Utility section measures students' perceptions of computers in the teaching and learning environment. This eight-item section of the survey has items similar to "Computers could increase my productivity" and "If there was a computer in my classroom it would help me to be a better teacher."
The difference between means on the pretests and posttests was statistically significant at the .05 level but the low effect size indicates little practical significance. Overall, students indicated they felt that computers were a positive influence on their own learning, productivity, and benefited teaching and learning environments.
TAC Part 7: Perception. This section has five semantic differential pairs on a 7-point scale to indicate how the preservice teachers feel about computers. Examples of the adjective pairs used were "Unpleasant to Pleasant" and "Unlikable to Likable."
Preservice teachers' overall perception of the computer remained stable and the difference between the means was not significant at the .05 level.
TAC Part 8: Absorption. The Absorption component of the TAC measures how integral the computer is to the preservice teachers' work and leisure. This five-item section contains items such as "I like to talk to others about computers" and "When there is a problem with a computer that I can't immediately solve, I stick with it until I have the answer."
Although the difference between the pretests and posttests means is small, it was significant at the .05 level. However, the low effect size indicates this is not a practically significant result. Students continued to disagree with these statements although some changed from disagree to the undecided category.
TAC Part 9: Significance. The Significance construct of the TAC measures preservice teachers' perceptions about how valuable the computer is for students' learning experiences and lives. This final section has five items. Examples of items in this section are "It is important for students to learn about computers in order to be informed citizens," "Students should understand the role computers play in society," and "Having computer skills helps one get better jobs."
The difference in pretests and posttests means was not significant at the .05 level. Most students consistently agreed (coded as 4) with the items.
Technology Proficiency Self-Assessment (TPSA)
The TPSA allowed us to measure the UF preservice teachers' self-efficacy with respect to their perceived technology skills. In this version of the survey, four of the areas from Ropp's (1999) measurement scales (with five items each) were used: e-mail, the Web, integrated applications, and teaching with technology. It is important to note that these items measure the preservice teachers' confidence in being able to perform certain tasks using technology. The difference in the means for all four areas was statistically significant at the .05 level, and all areas with the exception of e-mail had effect sizes indicating practical significance. For each area of the TPSA the following is provided: a sample item, a data table (Tables 18-21), and a statistical finding.
TPSA E-mail. Sample item: I feel confident I could send a document as an attachment to an e-mail message.
The difference between the means was significant at the .05 level but the effect size does not indicate a practical significance.
TPSA Web. Sample item: I feel confident I could find primary sources of information on the Internet that I can use in my teaching.
The difference between the means was significant at the .05 level and the effect size indicates practical significance.
TPSA Integrated Applications. Sample Item: I feel confident I could create a database of information about important authors in a subject matter field.
The difference between the means was significant at the .05 level and the effect size suggests practical significance.
TPSA Teaching with Technology. Sample Item: I feel confident I could create a lesson or unit that incorporates subject matter software as an integral part.
The difference between the means was significant at the .05 level and the effect size indicates practical significance.
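A recurring pattern in the results above is a mean difference that reaches statistical significance at the .05 level yet carries a small effect size. The following sketch, using hypothetical summary statistics rather than the article's data (and assuming the common Cohen's d convention, which the article does not specify), shows how both quantities can be computed from reported means, standard deviations, and sample sizes alone:

```python
import math

def pooled_t_and_d(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic (pooled variance) and Cohen's d
    computed from summary statistics alone."""
    df = n1 + n2 - 2
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    # Standard error of the difference between means
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    t = (m2 - m1) / se
    # Cohen's d: the mean difference expressed in pooled-SD units
    d = (m2 - m1) / math.sqrt(sp2)
    return t, d

# Hypothetical pretest/posttest summary values (not the article's data)
t, d = pooled_t_and_d(3.80, 0.90, 260, 4.05, 0.80, 220)
print(f"t = {t:.2f}, d = {d:.2f}")  # t well past the .05 critical value, d small
```

With a few hundred respondents per administration, even a quarter-point shift on a 5-point scale produces a t statistic far beyond the .05 critical value while the effect size stays small, which is why the findings here repeatedly distinguish statistical from practical significance.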
DISCUSSION OF FINDINGS
The discussion of the findings from this survey data will be divided into two areas: student changes from the course with respect to the use of technology in the teaching and learning process, and broader technology in teacher education issues. The data from this study provide both positive and negative findings and implications for teacher educators to consider. The thoughts and ideas presented in this article are not intended to single out or fault the specific course or teacher education program but to spark discussion among all educational technology and teacher education faculty about areas to consider when changing courses and programs.
Student Changes During the Course
The instruments used in this study allowed us to gain some insight into how students' perceptions, behaviors, and self-efficacy with respect to using educational technologies in teaching and learning changed during the EME 4406 course. As reported earlier in this manuscript, averages typically increased in each section measured. However, these increases must be studied carefully to better understand what the changes really mean.
When looking at overall perceptions of their abilities to use educational technologies in teaching and learning, our students' comfort with using technology slightly increased, as did the amount of time spent using a computer on a weekly basis. This is not surprising given the nature of the course and the fact that most students were at the beginning stages of integrating computers into their daily lives. Working in class and independently, students participated in numerous learning activities that required them to use a computer, so it is natural that their comfort with using a computer would increase. According to the Level of Use instrument (see Figure 1), student behavior indicators changed from the mechanical use level (level 3) to the beginning stages of refinement use (level 4B). Hall and Hord (2001) described this stage as one where students are beginning to wonder how well technology can benefit them and their students. This is certainly a positive stage, and we need to continue to support and scaffold our preservice teachers' attempts to refine the use of computers in the classroom. However, it is interesting to note that the data also indicate many of our students do not appear ready to use educational technologies in different contexts. These students are still at the routine level (level 4A). As categorized by Hall and Hord, routine users have moved from using the innovation in an inefficient manner to a point where they believe their use of educational technologies, and specifically computers for this survey, has stabilized and they have a consistent way of using computers. At the routine level, users do not plan to make any adaptations or changes to their computer use; they see no need to change if things are working fine the way they are, or to expand the uses of the computer to new situations. The findings from the LoU instrument are mirrored in the Stages of Adoption and ACOT instruments.
Overall, EME 4406 students remained at stage 4 (familiarity and confidence) on the Stages of Adoption instrument (see Figure 2). The mean increased from the pretest to posttest; however, the overall change of students was not to stage 5 (adaptation to other contexts). On the ACOT instrument, the mean score remained at level 3 (adaptation) although the mean score did increase (see Figure 3). According to Sandholtz et al. (1997), teachers at the adaptation stage use technology to manage their classroom and enhance traditional classroom activities. Findings indicate many of our students perceived their current use of technology for productivity purposes as acceptable and felt no need to consider other uses for computers at the time. This perception remained constant throughout the semester and matched the findings from the TAC Utility section that will be discussed later in the article.
These three instruments indicate that many students are comfortable with their ability to use computers but they are not looking to see how computers and various applications can be used in ways other than methods to which they were exposed in class. This finding meshes with Ertmer's (1999) premise that teachers do not readily assimilate educational technologies and that change must occur from multiple perspectives (i.e., personal, organizational, and pedagogical). According to Ertmer, first order changes occur when teachers more effectively use an innovation; however, teachers' underlying beliefs are not changed. Second order changes are made when teachers' underlying beliefs are changed. Results of this study suggest positive strides are being made related to first order changes. The percentages for each incremental stage on the three instruments do increase and students are learning to think more about how computers and educational technologies can enhance the teaching and learning process. Our preservice students are at the beginning stages of learning to use different educational technologies in their own learning process and are comfortable enough to begin experimenting with these technologies in different contexts but they are currently choosing not to do so. We must challenge our students and provide experiences that will help them progress to more advanced stages in technology integration throughout their teacher education program.
Data from the TAC also support the findings from the general perception instruments. Student averages changed from undecided to agree in the areas of interest (enjoyment and satisfaction using computers) and comfort (comfort using computers with an overall lack of anxiety). Data analysis revealed that the difference in means between pretest and posttest was significant at the .05 level for the interest component but not for the comfort component. These data show us that the affective element of using computers is improving but is still an area where students need support and encouragement. Again, because these students are beginning to learn about educational technologies as tools for their own learning and for the classroom, we must not neglect the anxiety that this newness causes.
Students are confident in using educational technologies and/or instructional strategies they have repeatedly practiced. The TPSA instrument supports this premise. In all four components of this self-efficacy instrument, the differences between student pretest and posttest means were statistically significant. Students grew increasingly confident in their ability to perform tasks in the areas of e-mail, the Web, integrated applications, and teaching with technology. Findings from the TAC and TPSA support the findings of the general proficiency instruments regarding students' overall comfort with computers. Again, this is normal and to be expected: the more one repeats a task, the more confident one becomes in performing it in actual practice. This is further evidence of first order changes according to Ertmer's (1999) premise.
Student pretest and posttest means were not statistically different and remained at the same level on the TAC in the areas of accommodation (acceptance of computers; willingness to learn), concern (fear that computers will have a negative impact on society), perception (overall feeling toward computers), and significance (belief that computers are important for student use). This tells us students were willing to learn about computers (agree, coded as 4), had no strong perception of whether the computer was harmful to society (undecided, coded as 3), strongly agreed with positive adjectives toward computers (strongly agree, coded as 5), and agreed that computers could help students in the future (agree, coded as 4). To summarize these "non-change" areas, it appears students believe computers are "okay" and not harmful to society but not necessarily an important factor in the educational process. This corresponds with their perceptions that K-12 students seldom use technology in the classroom (as reported in the demographic section), and the data analysis suggests this might be an acceptable state of affairs to these preservice teachers. Our students appear to be cognizant of the fact that computers could help learners, but as beginning computer users they are not convinced that introducing this "new" element into their daily lives and future teaching is necessary. This supports Ertmer's (1999) argument that producing second order change requires challenging an educator's belief systems and daily practice. This finding is also supported by practical experience and research, and it is an area in which educational technologists and teacher educators constantly struggle. When some element in the teaching environment is new, we tend to revert to those things with which we are comfortable. We teach the way we were taught (Lortie, 1975).
This lack of readiness to change underlying beliefs about how educational technologies can enhance the teaching and learning process is further supported by the survey data from the TAC sections on absorption (belief that computers are part of many areas of work and leisure) and e-mail (effectiveness of e-mail for student learning). Students disagreed in both the pretest and posttest that computers were an integral part of their lives as preservice teachers. In addition, students were consistently undecided on the benefit of e-mail in the process of student learning. Our preservice students can "talk" about the benefits of using technology in the teaching and learning process, but when the realities of teaching confront them, prior experiences and the time and effort required to infuse educational technologies into their lessons cause them to reconsider whether it is worth it. They are not ready to change practice. Yet the literature (Lortie, 1975) states that this is what happens to all teachers. When a new instructional strategy or tool is encountered, it often takes significant time to learn to use it effectively. Hence, educators question whether this time and effort is worth the return in terms of student learning and achievement. The question "Couldn't we do this the 'old' way?" always seems to arise. Changing underlying beliefs about the use of computers in the educational process for teachers and students is difficult.
However, although the students are unsure of the benefit of computers in their lives and in overall teaching and learning processes, the results of the utility component of the TAC indicated that students believed computers could be helpful primarily for teacher productivity activities. For example, our students saw the value of using a gradebook spreadsheet to generate progress reports. This supports the findings from the ACOT instrument, where our students remained at the adaptation level. We are hopeful that as our preservice students continue to use educational technologies to assist them in preparation, they will begin to consider extending the benefits of using educational technologies for productivity and learning to their own students. Changing practice requires significant work on a variety of levels (Ertmer, 1999), and productivity advantages can be one of those perspectives.
Another interesting element to consider is that the survey data indicate students do not perceive faculty as using computers and other educational technologies in their teaching and learning activities. Although students did appear to become more cognizant of different computer applications being used in their learning activities at UF, the differences were not drastic. It should also be noted that educational technologies were heavily modeled in EME 4406, so such a slight increase in the means suggests there might be little exposure to educational technologies in many of the courses our students take. Granted, these data reflect only student perceptions of the use of educational technologies in their courses, but they raise more questions than answers. In addition, although this article focuses on the survey data, conversations with students suggest the most frequent uses of educational technology in many of their education courses are PowerPoint lectures and the posting of course information (syllabus, readings, assignments) to the Web. It is critical that preservice teachers see more modeling of educational technologies in their courses, in the learning activities in which they participate, and in the products/assignments they are assigned. This leads me to question whether our teacher education faculty have considered their own beliefs about the use of educational technologies in the teaching and learning process. How can educational technology teacher educators better facilitate this exploration and reflection with other teacher educators in a positive and supportive manner?
Broader Technology in Teacher Education Issues
As I examine the data from this study and reflect on ways in which it may be used to improve our teacher education program, areas of growth and progress can be documented. Our students are overcoming attitudinal hurdles in using computers. Yet there is a significant learning curve in learning to use educational technologies; hence, students often do not see the "return on investment" when trying to integrate educational technologies into their lessons and are not ready to make a second order change in their beliefs. Our students use technology in significant ways for their instructional lessons, but only when required to do so. Measuring preservice students' self-efficacy toward computers and educational technology related tasks in "traditional areas" (e.g., word processing and e-mail) provides interesting facts about our students. Yet the task now is to help students move beyond this point to more advanced stages that will lead to changes in daily teaching practice.
What kinds of studies and instructional activities can I develop and use that could help our teacher education faculty, including myself, enhance the learning experience of preservice teachers and K-12 students? How effective are the learning activities I provide in terms of influencing changes in daily practice? What studies can I design to provide helpful data (whether positive or negative) on the use of educational technologies in the complex teaching and learning environment? I am challenged to begin asking my students and myself more complex questions that might confront ideas and beliefs with respect to the use of educational technologies. I could begin by asking questions such as: Does the use of webquests produce greater increases in learning or gain scores with one group of students (e.g., students with learning disabilities) than with other students? What differences are there in learning when students use spreadsheets and graphs to analyze census data as opposed to more traditional methods? What types of students achieve the most gain in academic scores when students create digital video projects? When I tell my students I do not know these answers and we discuss the complexities surrounding these questions and how we can find evidence of the answers, this could challenge beliefs and ideas about educational technologies. Quite frankly, most of our preservice teachers, specifically those in the secondary minor courses, still have the mindset of being consumers of education instead of producers of education. Having students take the perspective of a teacher action researcher will greatly expand their ideas. I am of the opinion that we must help students move toward more advanced stages of integration and to investigate their beliefs about teaching and learning from a variety of perspectives. I can do this by thinking aloud with my students about integration strategies as well as posing questions such as those previously listed.
We must help our preservice teachers grow in their use of educational technologies to a point where they are considering and creating ways to use educational technologies in different teaching and learning contexts. Although this will not happen within the scope of a single course, it is important that we attempt to ensure it happens within students' teacher education programs.
When talking with educational technology faculty throughout the nation, many share the viewpoint that we need to begin asking different questions and documenting how educational technologies can enhance teaching and learning processes and how changes in practice occur. I look forward to continued collaboration with my educational technology and teacher education colleagues as we expand our thinking in working with preservice and inservice teachers to use educational technologies to enrich the learning and educational experience for all learners. In today's political climate, it is becoming increasingly important that we have high quality quantitative and qualitative data to support the influence of educational technologies on student achievement as measured by standardized tests and other assessment measures. Our field of technology in teacher education has produced a great deal of outstanding research that assists us in this endeavor; yet, I do believe we must provide rich data addressing more of the complexities of teaching in different academic areas and with all types of learners.
This survey of EME 4406 preservice teachers at the University of Florida, which studied their initial self-assessments of their abilities and behaviors in using technology and their perceived growth, has led me to several conclusions. First, our teacher education program is doing an acceptable job in dealing with the affective element of using computers. Our students continue to grow in comfort in using different educational technologies in their teacher education program. Students also view computers in a positive light. Second, although our students are confident in their abilities to use educational technologies such as e-mail, the Web, and integrated applications and to integrate them into their teaching, for the most part this confidence does not move into practice. Our students can "talk the talk" about how computers can enhance teaching and learning, but at this point the "talk" does not necessarily lead to a change in practice. Our students' responses indicate they do not think the integration of educational technologies into the curriculum is "worth it." Students in our teacher education program have not moved into that second order of change with respect to integrating technology into teaching and learning processes. Finally, the results of this study challenge teacher educators to ponder two major areas. The first area is whether teacher education faculty have considered their own beliefs about the use of educational technologies in the teaching and learning process. This leads to questions such as: how are our beliefs and attitudes about the use of educational technologies in teaching and learning transmitted to our students in the instructional decisions and actions we take? The second area is: what different kinds of research studies and questions would allow us to gain new evidence of the effects of educational technologies on student learning and on achievement measures?
There is much work to be done and the potential for great things from our field challenges us to do more.
REFERENCES

Becker, H. (2000). Findings from the teaching, learning, and computing survey: Is Larry Cuban right? Education Policy Analysis Archives, 8(51). Retrieved August 15, 2005 from http://epaa.asu.edu/epaa
Bialo, E., & Sivin-Kachala, J. (1996). The effectiveness of technology in schools: A summary of recent research. School Library Media Quarterly, 25(1), 51-57.
Christensen, R. (1997). Effect of technology integration education on the attitudes of teachers and their students. Doctoral dissertation, University of North Texas. Retrieved August 15, 2005 from http://courseweb.tac.unt.edu/rhondac
Christensen, R., & Knezek, G. (1997). Internal consistency reliabilities for 14 computer attitude scales. In Technology and teacher education annual 1997. Charlottesville, VA: Association for the Advancement of Computing in Education.
Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38, 813-834.
Ely, D. (1976). Creating conditions for change. In S. Faibisoff & G. Bonn (Eds.), Changing times: Changing libraries (pp. 150-162). Champaign, IL: University of Illinois Graduate School of Library Sciences. (ED183139)
Ely, D. (1990). Conditions that facilitate the implementation of educational technology innovations. Journal of Research on Computing in Education, 23(2), 298-305.
Ertmer, P.A. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47-61.
Fisher, C., Dwyer, D., & Yocam, K. (Eds.). (1996). Education & technology: Reflections on computing in classrooms. San Francisco: Jossey-Bass.
Fullan, M., & Stiegelbauer, S. (1991). The new meaning of educational change. New York: Teachers College Press.
Griffin, D., & Christensen, R. (1999). Concerns-based adoption model (CBAM) levels of use of an innovation (CBAM-LOU). Denton, TX: Institute for the Integration of Technology into Teaching and Learning.
Hall, G., & Hord, S. (2001). Implementing change: Patterns, principles, and potholes. Boston: Allyn and Bacon.
Hall, G., Loucks, S., Rutherford, W., & Newlove, B. (1975). Levels of use of the innovation: A framework for analyzing innovation adoption. Journal of Teacher Education, 26(1), 52-56.
Hall, G., Wallace, R., & Dossett, W. (1973). A developmental conception of the adoption process within educational institutions (Report No. 3006). Austin, TX: The University of Texas at Austin, Research and Development Center for Teacher Education.
Hinkle, D., Wiersma, W., & Jurs, S. (1994). Applied statistics for the behavioral sciences (3rd ed.). Boston, MA: Houghton Mifflin Co.
Knezek, G., Christensen, R., Miyashita, K., & Ropp, M. (2000). Instruments for assessing educator progress in technology integration. Denton, TX: University of North Texas. Retrieved August 15, 2005 from http://www.iittl.unt.edu
Lortie, D. (1975). School teacher: A sociological study. Chicago, IL: University of Chicago Press.
Marcinkiewicz, H. (1993). Computers and teachers: Factors influencing computer use in the classroom. Journal of Research on Computing in Education, 26, 220-237.
Ramsey, R. (2001). Well said, well spoken. Thousand Oaks, CA: Corwin Press.
Rogers, E. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.
Ropp, M. (1999). Exploring individual characteristics associated with learning to use computers in preservice teacher preparation. Journal of Research on Computing in Education, 31, 402-424.
Russell, A. (1995). Stages in learning new technology. Computers in Education, 25(4), 173-178.
Sandholtz, J., Ringstaff, C., & Dwyer, D. (1997). Teaching with technology: Creating student-centered classrooms. New York: Teachers College Press.
Schubert, W. (2000). Untheming curriculum inquiry: A basis for expansion. Journal of Critical Inquiry into Curriculum and Instruction, 2(2), 4-6.
Zaltman, G., & Duncan, R. (1977). Strategies for planned change. New York: John Wiley and Sons.
University of Florida
Gainesville, FL USA
Table 1. General Demographics of Participants in 2002-2003 EME 4406 Sections (Fall & Spring; pretests/posttests): elementary majors 137/144; secondary majors 125/75; males 49/33; females 213/186; N 262/219.

Table 2. Highest Degree Obtained (pretest/posttest): high school 192/167; BA/BS 11/6; MA/MS 0/1; other 58/45; N 262/219.

Table 3. How Frequently, in Hours per Week, Do You Use a Computer (Including the Internet) at Home? (pretest/posttest N): 0 hours 2/1; 1 hour 16/7; 2-3 hours 58/28; 4-7 hours 82/67; 8-15 hours 68/77; 16-31 hours 21/28; more than 31 hours 15/11.

Table 4. How Many Hours per Week Do Your Professors Use Computers for Learning Activities? (pretest/posttest N): 0 hours 84/6; 1 hour 88/42; 2-3 hours 77/116; 4-7 hours 10/46; 8-15 hours 2/7; 16-31 hours 0/2; more than 31 hours 1/1.

Table 5. How Frequently Do You Think K-12 Students Use Computers for Learning Activities in School? (pretest/posttest N): never 4/4; occasionally 136/90; weekly 103/82; daily 19/13.

Table 6. Level of Use (LoU) Results: pretest M = 3.86 (SD = 1.54, N = 261); posttest M = 5.46 (SD = 1.41, N = 219); t = -10.13, df = 271, p = .0001; effect size = .4764.

Table 7. Stage of Adoption Results: pretest M = 4.02 (SD = 1.18, N = 261); posttest M = 4.92 (SD = 1.06, N = 218); t = -8.2575, df = 216, p = .0001; effect size = .37.

Table 8. ACOT Results: pretest M = 3.15 (SD = .98, N = 262); posttest M = 3.72 (SD = .70, N = 219); t = -7.258, df = 281, p = .0001; effect size = .3173.

Table 9. TAC Part 1 (Interest): pretest M = 3.85 (SD = .82, N = 262); posttest M = 4.16 (SD = .56, N = 219); t = -3.62, df = 218, p = .0001; effect size = .2155.

Table 10. TAC Part 2 (Comfort): pretest M = 3.84 (SD = .94, N = 262); posttest M = 4.04 (SD = .77, N = 219); t = -1.912, df = 281, p = .057; effect size = .1156.

Table 11. TAC Part 3 (Accommodation): pretest M = 4.70 (SD = .44, N = 262); posttest M = 4.71 (SD = .52, N = 219); t = -.06, df = 218, p = .952; effect size = .010.

Table 12. TAC Part 4 (E-mail): pretest M = 3.75 (SD = .75, N = 262); posttest M = 3.78 (SD = .86, N = 219); t = .250, df = 218, p = .803; effect size = .018.

Table 13. TAC Part 5 (Concern): pretest M = 3.58 (SD = .73, N = 262); posttest M = 3.61 (SD = .80, N = 219); t = -.008, df = 218, p = .993; effect size = .019.

Table 14. TAC Part 6 (Utility): pretest M = 4.02 (SD = .59, N = 262); posttest M = 4.17 (SD = .55, N = 219); t = -2.256, df = 218, p = .025; effect size = .1303.

Table 15. TAC Part 7 (Perception): pretest M = 5.32 (SD = 1.18, N = 262); posttest M = 5.56 (SD = 1.25, N = 219); t = -1.852, df = 218, p = .065; effect size = .098.

Table 16. TAC Part 8 (Absorption): pretest M = 2.80 (SD = .88, N = 262); posttest M = 2.98 (SD = .92, N = 219); t = -2.304, df = 218, p = .022; effect size = .09.

Table 17. TAC Part 9 (Significance): pretest M = 4.20 (SD = .58, N = 262); posttest M = 4.22 (SD = .64, N = 219); t = .097, df = 218, p = .923; effect size = .016.

Table 18. TPSA E-mail Results: pretest M = 4.52 (SD = .54, N = 262); posttest M = 4.67 (SD = .43, N = 219); t = -3.164, df = 218, p = .002; effect size = .1518.

Table 19. TPSA Web Results: pretest M = 4.26 (SD = .64, N = 262); posttest M = 4.74 (SD = .41, N = 219); t = -9.075, df = 218, p = .0001; effect size = .4077.

Table 20. TPSA Integrated Applications Results: pretest M = 3.49 (SD = 1.00, N = 262); posttest M = 4.46 (SD = .58, N = 219); t = -12.001, df = 218, p = .0001; effect size = .51.

Table 21. TPSA Teaching with Technology Results: pretest M = 3.62 (SD = .83, N = 262); posttest M = 4.34 (SD = .59, N = 219); t = -9.954, df = 218, p = .0001; effect size = .4471.
Journal of Technology and Teacher Education, March 22, 2006.